2301.11994 | Gender and Prestige Bias in Coronavirus News Reporting | Rebecca Dorn, Yiwen Ma, Fred Morstatter, Kristina Lerman | 2023-01-27T21:18:09Z | http://arxiv.org/abs/2301.11994v1

# Gender and Prestige Bias in Coronavirus News Reporting
###### Abstract.
Journalists play a vital role in surfacing issues of societal importance, but their choices of what to highlight and who to interview are influenced by societal biases. In this work, we use natural language processing tools to measure these biases in a large corpus of news articles about the Covid-19 pandemic. Specifically, we identify when experts are quoted in news and extract their names and institutional affiliations. We enrich the data by classifying each expert's gender, the type of organization they belong to, and for academic institutions, their ranking. Our analysis reveals disparities in the representation of experts in news. We find a substantial gender gap, where men are quoted three times more than women. The gender gap varies by partisanship of the news source, with conservative media exhibiting greater gender bias. We also identify academic prestige bias, where journalists turn to experts from highly-ranked academic institutions more than experts from less prestigious institutions, even if the latter group has more public health expertise. Liberal news sources exhibit slightly more prestige bias than conservative sources. Equality of representation is essential to enable voices from all groups to be heard. By auditing bias, our methods help identify blind spots in news coverage.
gender bias; prestige bias; ideological bias; news reporting; expert sources; named entity recognition; dependency parsing
## 1. Introduction
In times of crisis people turn to news media for information and to make sense of the world; journalists, in turn, seek out experts and opinion leaders to interview and then help communicate their knowledge to the public. Mass media does not simply convey information to the public but also shapes what is seen and what is deemed important (Mohammad et al., 2018). The interplay between mass media and the public creates a cycle that amplifies attention to concerns and influences public policy. Given the media's role in identifying issues of societal importance, it is therefore critical that it equitably reflects the interests of all stakeholders.
Representation of groups and individual social identity in the media is one of the fundamental questions of equity. Does the media adequately represent issues that are important to women, ethnic minorities, the elderly, and the disadvantaged? Does it capture the lived experience of these groups, the challenges they face? Or does it focus on the concerns of the privileged few? One mechanism for improving equity is to ensure that the pool of journalists and reporters reflects society's diversity. However, journalists are predominantly men and often choose to interview subjects whose gender identity matches their own (Krishnan et al., 2018).
Another mechanism to improve equity is to diversify the pool of subjects that journalists pay attention to. For example, by talking to women, journalists will surface their views and concerns. This is important, because women typically bear a larger share of care responsibilities, and their concerns may bring up issues with childcare, for instance, that may not be visible to men. Moreover, if journalists solely focus on sources from the same few prestigious academic institutions, they lose the geographic and socio-economic diversity that comes from interviewing experts from a range of institutions. This may introduce additional blind spots in news coverage.
Auditing gender representation in the news--or the representation of other identities--has proven difficult due to the challenges of extracting representations from the text of the news stories. Previous studies have identified gender bias in news reporting (Mohammad et al., 2018); however, they have generally relied on manually curated data or were limited to certain media types, and thus do not scale to the size of the media ecosystem. Addressing the question of bias in the news media at scale calls for automated methods. In this study we use natural language processing (NLP) methods to automate media analysis, which enables us to scale our bias audit of news across longer time periods and across more media sources. We focus on gender and academic prestige bias in the coverage of the Covid-19 pandemic. When the novel coronavirus emerged, little was known about the severity of the disease it caused, which mitigations were effective, and their benefits and costs. As researchers learned more about the disease, public officials used these findings as a basis for policy recommendations. Journalists sought out experts from the research community and government agencies to communicate the research findings, policy recommendations, and their trade-offs to the public. We analyze thousands of news stories from six popular media sources along the breadth of the US political spectrum to identify the experts the journalists turned to. We analyze three left-leaning news sources and three right-leaning sources to enable analysis by partisan bias and accommodate a variety of linguistic styles.
Our analysis reveals a gender gap in news coverage where women appear much less frequently among the experts quoted
by journalists than men. The gender gap varies by political ideology of the news source, with liberal media coming closer to gender parity than conservative media. In addition to gender, we look at the institutional affiliations of the experts and classify their academic prestige. We identify prestige bias, in which experts from the higher-ranked academic institutions are quoted more frequently than experts with less prestigious affiliations. We find that prestige bias varies slightly by ideology of the reporting source.
One possible explanation for the observed bias is that women are a minority in science and medicine. However, women make up the majority of doctoral students and junior faculty in public health and biomedical sciences (Hao et al., 2018), both of which are fields relevant to the Covid-19 pandemic. Graduate-level public health degrees have been awarded to more women than men since 1979, with 73% of such degrees awarded to women in 2017 (Borda et al., 2017). Therefore, the gender disparity we observe is likely not due to a shortage of experts but due to individual biases of reporters and media sources.
Our analysis of the gender and prestige of experts quoted in the news during the Covid-19 pandemic answers the following research questions:
* **Gender Bias:** Are women underrepresented among experts whom journalists turn to for information about the pandemic?
* **Ideological Gender Bias:** Does the gender gap vary by ideological leaning of news source?
* **Prestige Bias:** Is there media preference for experts from highly ranked institutions?
* **Ideological Prestige Bias:** Does the prestige gap change with political leaning of news outlet?
## 2. Related Work
There has been work analyzing the gender composition of experts in television news. Scott et al. discovered that from September 25 to October 6, 2006 and May 14 to May 25, 2007, 14.7% of people featured in PBS NewsHour were women (Sott et al., 2007). The authors also found that 13.7% of experts had academic affiliations, 4.3% from think tanks and 42.9% with governmental affiliations.
The role of gender in international news media's use of experts has also been documented outside of coronavirus coverage. Niemi et al. found that less than 30% of experts interviewed in Finnish news journalism are women (Niemi et al., 2018). Lidia Manoso Pacheco found a high correlation between journalist and subject gender in 68 British and English newspaper articles (Niemi et al., 2018). Kitzinger et al. analyzed 51 in-depth profiles of men and women scientists and found that 5 men are featured for every 1 woman scientist (Katzinger et al., 2018).
Only manual analyses of American Coronavirus news experts exist. Fletcher et al. (Fletcher et al., 2018) reviewed a total of 4,463 articles from 9 U.S. news sources published between April 1, 2020 and April 15, 2020 and found that 35.9% of the 2,297 experts were women. In a special report from Luba Kassova that looked at the frequency of men and women in 2,100 quotes between March 1, 2020 and April 15, 2020, men were quoted three times as much as women (Fletcher et al., 2018). Kassova additionally found that women are less likely to be protagonists in news stories and more likely to provide subjective views over expertise.
Large-scale analysis of North American news experts exists, though not specific to Coronavirus. Asr et al. introduced a tool for large-scale gender analysis of news quotes, The Gender Gap Tracker (Fletcher et al., 2018), which takes a sentence and returns the people quoted and mentioned along with their inferred gender identities. Methods of extraction include syntactic, heuristic and floating-quote approaches. The software is illustrated on seven Canadian news outlets, where the authors found that men are represented three times as much as women from October 2018 to September 2020.
Large-scale tools have been used to analyze the difference in how men and women are featured in the news. LDA topic modelling is performed on two years' worth of American and Canadian news articles by Rao et al. (Rao et al., 2018). Persons quoted and their genders are gathered using The Gender Gap Tracker. Contrary to our results, the authors found that women are more represented in articles related to healthcare. An analysis of gender, fame and sentiment is done by Shor et al. (Shor et al., 2018). The dataset used combines 14 million persons mentioned throughout 1,323 news outlets with a manual analysis of select Wikipedia pages. The authors looked at sentiment scores for adjectives used with each person, and found that as women become more famous, the media attention they receive becomes increasingly negative. Separately, Shor et al. analyzed gender and public interest while controlling for occupation and age (Shor et al., 2018). The authors looked at over 20,000 persons from over 2,000 news sources. They found that when men and women have similar occupations and ages, women attract higher public interest but less media coverage.
One of the most frequently observed forms of social learning is when people observe and mimic seemingly competent and therefore admirable individuals (Jimenez et al., 2018). Jimenez et al. explained how first-order cues of prestige (initially observable traits) are used to assume prestige when quality information is lacking, though these cues may be wrong and deceptive (Jimenez et al., 2018). Additionally, upward mobility in academia is limited. In a survey of n = 348 universities, 20% of faculty positions are filled by graduates of just 8 universities (Sott et al., 2018). The same survey found that only 5% to 23% of faculty members from United States universities hold doctorates from less prestigious institutions, and that 64% of universities have no departments listed as top 10 (Sott et al., 2018).
## 3. Methods
### Data
The AYLIEN Coronavirus Dataset consists of 1,673,353 news articles related to the Coronavirus pandemic collected from over 440 international news sources. This data is aggregated, analyzed, and enriched by AYLIEN using AYLIEN's News Intelligence Platform1. We use the article attributes raw article text, article title, news source name, and publication date and time. We analyze AYLIEN Coronavirus-related news articles from six US-based news sources: Huffington Post (HUFF), Cable News Network (CNN), The New York Times (NYT), The New York Post (NYP), Fox News (FOX), and Breitbart News Network (BREIT) between January 6, 2020, and July 31, 2020. These six news outlets are chosen because they collectively exemplify an ideological spectrum in news reporting while all having some partisan bias. This allows us to separate news outlets into two distinct groups. Additionally, having 6 news outlets ensures we cover a variety of linguistic styles. This subset totals 66,368 articles: 9,897 articles from the New York Times, 17,765 from
CNN, 19,911 from Fox News, 7,609 from Breitbart, 13,391 from New York Post and 6,625 from the Huffington Post.
### Expert Quote Extraction
Fig. 1 shows an example of how journalists quote experts using three different sentence structures. The components of interest are reported speech, reported verb, person and organization. Reported speech (RSPEECH) directly quotes or indirectly reconstructs the words of the speaker. A reporting verb (RVERB) is used to introduce or conclude reported speech (e.g. "report", "acclaim", "told"). The person is the speaker being quoted. An organization is the institution associated with the speaker. We consider expert quotes to be any permutation of these components. We find sentences quoting experts by taking the union of two approaches:
#### 3.2.1. Named Entity Recognition (NER)
The three most common reporting verbs are "said", "say" and "says". The most common pattern quoting experts is:
"[_RSPEECH_]," ("said"|"say"|"says") [PERSON]
Where \(|\) denotes logical _or_ and [PERSON] denotes speaker. This pattern is captured using the following regular expression:
"s[(a-zA-20-9?,_s_()])"s;(said|say|says)([a-zA-zA-z0-9??,_s_()])"
The NLP library SpaCy offers an NER model pretrained on web text with entity labels including person, organization, date and location (Bordes et al., 2016). We use SpaCy's NER on sentences following this pattern and look for PERSON entities listed outside of quotation marks.
#### 3.2.2. The Gender Gap Tracker
The second method we use to find speakers is that of The Gender Gap Tracker Project (Bordes et al., 2016). The syntactic method from The Gender Gap Tracker identifies quotes following a clausal complement structure, where a dependent verb is featured with an internal subject. Sentences following this structure are only kept if they feature one of 262 reporting verbs. The second Gender Gap Tracker method we utilize identifies reported speech introduced directly before or after the reporting phrase "according to." Due to the difficulty in finding affiliated organizations, we choose to omit the floating quote method, which finds sentences where reported speech takes a full sentence and the speaker is introduced elsewhere.
When an expert is quoted in a news article, the journalist typically introduces the expert, specifying their position and affiliation. To help focus our data collection only on expert speakers, we require speakers to be present alongside an organizational affiliation. On all sentences collected, we run NER and retain only those sentences where NER identifies an organization (ORG entity).
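To make the pipeline concrete, here is a minimal sketch of the union of pattern matching and NER, assuming spaCy's `en_core_web_sm` model; the regular expression is a simplified stand-in for the pattern above, and the example sentence is invented:

```python
import re
import spacy

nlp = spacy.load("en_core_web_sm")

# Quoted reported speech followed by one of the three most common reporting verbs.
QUOTE_PATTERN = re.compile(r'"[^"]+"\s*,?\s*(?:said|says?)\b')

def extract_expert_pairs(sentence: str):
    """Return (PERSON, ORG) pairs for sentences matching the quote pattern."""
    if not QUOTE_PATTERN.search(sentence):
        return []
    doc = nlp(sentence)
    quote_spans = [m.span() for m in re.finditer(r'"[^"]*"', sentence)]
    def outside(ent):
        # PERSON entities must appear outside the quotation marks.
        return not any(s <= ent.start_char < e for s, e in quote_spans)
    persons = [e.text for e in doc.ents if e.label_ == "PERSON" and outside(e)]
    orgs = [e.text for e in doc.ents if e.label_ == "ORG"]
    # Keep the sentence only when an organizational affiliation is present.
    return [(p, o) for p in persons for o in orgs]

print(extract_expert_pairs(
    '"Masks reduce transmission," said Dr. Ashish Jha of Brown University.'
))
```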
### Classifying Gender
The Python library gender-guesser implements a gender prediction program built on a database of 45,376 names with each name's most likely gender identity (Bordes et al., 2016). The possible gender predictions for a single person are "male", "female", "andy" (androgynous) and "unknown". For each person quoted, we run gender-guesser on the first string before a space (i.e., first name) to obtain that name's most common gender association (Krishnan et al., 2017).
The gender labels include "male" and "female" though would be more accurately described as man/masculine and woman/feminine. We acknowledge that gender is non-binary and not captured by a person's first name. Classifying by common gender affiliation with names captures reader perception of gender, not the expert speakers' actual gender identification. The discussion section further elaborates on the inability of a single androgynous category to adequately capture non-binary non-cisgender gender identities.
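A short sketch of this classification step; gender-guesser's `get_gender` returns "male", "female", "mostly_male", "mostly_female", "andy", or "unknown", and the collapsing of the "mostly" labels below is our assumption rather than the paper's stated procedure:

```python
import gender_guesser.detector as gender

detector = gender.Detector()

def classify_gender(full_name: str) -> str:
    """Classify by the first space-delimited token of the name."""
    first_name = full_name.split()[0]
    label = detector.get_gender(first_name)
    if label in ("male", "mostly_male"):
        return "male"
    if label in ("female", "mostly_female"):
        return "female"
    return "unknown"  # merges "andy" with "unknown", as in Section 4.1

print(classify_gender("Rebecca Dorn"))      # -> "female"
print(classify_gender("Fred Morstatter"))   # -> "male"
```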
### Classifying Organization Prestige
During the Covid-19 pandemic, scientists, epidemiologists, and public health experts from a variety of different organizations worked to define our understanding of the disease and to define public policy. These experts came from academic institutions (e.g., Brown University), federal bodies (e.g., the Centers for Disease Control and Prevention), and a variety of think tanks (e.g., the Hoover Institution). Journalists turned to these experts for information and guidance to share with the public.
We use fuzzy string matching, a mechanism that generates similarity scores between two strings, to determine whether organization affiliations reference academic institutions, federal bodies, or think tanks. For example, fuzzy string matching would find that "The University of Maryland - College Park" matches "The University of Maryland" with a score of 90. Journalists typically introduce organizations with their full names, thus we do not accommodate organization abbreviations.
#### 3.4.1. Academic Institutions
We use Times Higher Education's 2015 World University Rankings2. This list gives 400 university names as well as their rankings. Rankings are determined by factors including teaching, research, citations, industry income, and international outlook.
Footnote 2: [https://www.timeshighereduction.com/world-university-rankings/2016/world-ranking/methodology](https://www.timeshighereduction.com/world-university-rankings/2016/world-ranking/methodology)
#### 3.4.2. Federal Bodies
We compile a list of Federal Bodies by web scraping the U.S. Government Services and Information's index of Federal Departments and Agencies3. This list includes only federal agencies, so nothing at the state level is included.
Footnote 3: [https://www.usus.gov/federal-agencies](https://www.usus.gov/federal-agencies)
#### 3.4.3. Think Tanks
One of the most popular definitions of think tanks is from McGann and Weaver: "non-governmental, not-for-profit research organisations with substantial organisational autonomy from government and from societal interests such as firms, interest groups, and political parties" (McGann and Weaver, 2016). Think tanks frequently focus on public policy. We use the open source database On Think Tanks4, which includes over 3,200 global think tanks and provides fields including region, topic, website and office address.
Footnote 4: [https://onthinktanks.org/open-think-tank-directory/](https://onthinktanks.org/open-think-tank-directory/)
For each sentence, we measure similarity between NER-identified organization and organization names listed in these databases. We manually review a sample of NER-extracted organizations, the organization name most closely matching and the distance metric calculated for the two strings. For all three databases, we consider a match if the similarity score is greater than or equal to 90. To minimize noise, organizations consisting of two or fewer characters in the name are ignored. We sample 25 random organizations of two or fewer characters to ensure minimal impact. We find that
the most common two-character string is "'s", followed closely by the strings "m" and "AP".
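A sketch of the matching step; the paper does not name its fuzzy matcher, so rapidfuzz and the token-set scorer are assumptions, and the tiny databases below are illustrative stand-ins for the three real ones:

```python
from rapidfuzz import fuzz, process

# Tiny stand-ins for the three databases described above (ranks illustrative).
UNIVERSITY_RANKS = {"The University of Maryland": 132, "Harvard University": 2}
FEDERAL_BODIES = ["Centers for Disease Control and Prevention"]
THINK_TANKS = ["Hoover Institution", "Brookings Institution"]

def match_org(org: str, names, threshold: int = 90):
    """Return the best database match with similarity >= threshold, else None."""
    if len(org.strip()) <= 2:  # ignore noisy two-character strings
        return None
    hit = process.extractOne(org, names, scorer=fuzz.token_set_ratio)
    return hit[0] if hit and hit[1] >= threshold else None

print(match_org("The University of Maryland - College Park", list(UNIVERSITY_RANKS)))
# -> "The University of Maryland"
```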
## 4. Results
We extract 89,130 expert sources (pairs of speakers and their affiliated organizations): 19,137 pairs from HUFF, 17,156 from CNN, 18,828 from NYT, 4,129 from NYP, 22,226 from FOX and 7,654 from BREIT. The Gender Gap Tracker accounts for 26.7% of these extractions, and Named Entity Recognition-based for the rest. Our methods improve the number of extractions by 65,263 pairs. The scale increase from adding our method helps promote accuracy and efficiency in studies of inequality.
For precision evaluation, we run our method on 100 randomly sampled articles and manually annotate each extraction. Extractions are labeled correct if they contain RSPEECH from a PERSON with an ORG affiliation. The precision from this sample is 64.7%. The method most commonly fails in instances where the ORG is the news outlet rather than a professional affiliation. For example, in _"The government took a very important step, but they waited too long for this decision," Dr. Jose Luis Vargas Segura, a pulmonologist, told Fox News_, the method finds Fox News as the affiliated ORG. We also sample 100 academic extractions, labeling whether the instance contains RSPEECH, a PERSON and their affiliated university. The accuracy for this is much higher at 87%.
### Gender Bias
36.8% of extracted speakers have no identifiable gender in gender-guesser. To reduce the number of unknown genders, we take the union of each news outlet's 25 most frequently mentioned people with unknown gender and manually label the gender where the person is recognizable. Most of the names are easily identifiable public figures (e.g., "Trump", "Biden", and "Cuomo"). After this procedure, 26.4% of extracted sentences have no persons with an identifiable gender.
The majority of androgynous names are Asian names popular both as first and last names. We look at the 25 most frequent names with androgynous labels and manually label their gender, if known. We find that the androgynous category captures names whose gender is effectively unidentifiable rather than genuinely androgynous names, so we merge the androgynous and unknown gender categories.
Figure 2 breaks down experts quoted in the news by gender. The 26.4% of instances with unknown gender are omitted to better grasp the immediate disparity between men and women. The left plot represents the total mentions of all individuals by gender: women represent 24% of all mentions of experts in the news. To identify unique experts, we iterate through all experts while maintaining a list of previously quoted people. For each name, we check whether the person quoted fuzzy string matches to anyone previously quoted with a score of 90 or more. The right pie chart in Fig. 2 shows the gender breakdown of unique experts, where experts are counted once over all mentions. Women's representation improves with
Figure 1. Examples of Expert Quotes. Examples capture three varieties of quote structure. RSPEECH (Reported Speech) is the portion of the quote containing an exact quote or reconstruction of what the speaker previously said. RVERB (Reporting Verb) refers to the verb introducing or concluding reported speech ("say", "said", "explains", etc.). PERSON refers to the speaker of the (reported) quote. ORG refers to the organization affiliated with the speaker. Quotes are considered expert quotes if they contain RSPEECH, RVERB, PERSON and ORG. We consider a sentence as containing both RSPEECH and RVERB if it contains one of 262 Reporting Verbs, as a Reporting Verb implies the presence of Reported Speech. We use Named Entity Recognition (NER) to determine whether a sentence features a PERSON and ORG.
Figure 2. Gender bias in news. Percentage of men and women in all identified expert quotes. We show the composition in total mentions (speakers counted each time they are referenced) and unique mentions (speakers counted once over all mentions). Unique mentions are determined by checking whether each expert’s name has a string similarity (via fuzzy string matching) score of 90 or higher to previously mentioned experts. Men are overrepresented in both total and unique mentions. The stronger affinity towards men in total mentions demonstrates that journalists quote the same men repeatedly.
unique mentions at 31%. However, this still shows that women are under-represented in the news, considering that the fields of epidemiology, bio-medicine, and public health--all relevant to the pandemic--have achieved gender parity (or better) (Bianchi et al., 2017; Bianchi et al., 2017). Instead, the news media turns to the same group of male experts. The over-representation of men reinforces the idea that science requires traditionally masculine traits and denies fair coverage (and therefore career advancement opportunities) to women.
Sentences quoting men have on average 240 characters per sentence and those quoting women have an average length of 236 characters. This difference is found significant using a two-sided t-test (p < 0.01). We also observe that 4.6% of sentences with expert women also feature an expert man, while only 1.3% of sentences with an expert man appear with an expert woman.
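For reference, a sketch of the unique-expert deduplication described above, again assuming rapidfuzz as the matcher; the names are invented:

```python
from rapidfuzz import fuzz

def unique_experts(mentions):
    """Collapse repeated mentions: a name joins an existing expert when its
    fuzzy similarity to a previously seen name is 90 or higher."""
    seen = []
    for name in mentions:
        if not any(fuzz.ratio(name, s) >= 90 for s in seen):
            seen.append(name)
    return seen

mentions = ["Anthony Fauci", "Rochelle Walensky",
            "Rochelle P. Walensky", "Anthony Fauci"]
print(unique_experts(mentions))  # -> ['Anthony Fauci', 'Rochelle Walensky']
```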
### Ideological Bias
Out of all our extractions, 27.6% have an organization matching our academic, federal, and think tank databases. Analysis of the organizational breakdown reveals journalists are most likely to reach out to experts affiliated with federal agencies (60.5%), then academic institutions (21.6%), and think tanks (17.9%). One possible explanation is that federal agencies make recommendations for pandemic safety procedures, which are then communicated to the public by reporters.
Fig. 3 shows gender composition by organization type. The bars show average gender representation over 1,000 bootstrapped samples of the data set. The category of unknown gender is included. Experts associated with federal bodies (e.g., CDC, FDA) exhibit the strongest disparity by gender with the lowest percentage of women. Experts from academic institutions manifest less gender disparity, with the highest percentage of women. The lowest percentage of men occurs for experts affiliated with think tanks, which could be due to the high number of persons with "unknown" gender.
Fig. 4 shows how each news outlet distributes attention over experts from academic institutions, federal bodies and think tanks. Quotes with unknown organization types are not included. We observe that federal bodies are always the most common sources of expertise. NYT quotes federal experts 40.6% of the time, and all other outlets turn to federally affiliated experts at least 60.8% of the time. Additionally, we observe that right-leaning outlets typically turn to experts from federal agencies more than left-leaning outlets. Academic institutions are the second most common organization type for experts after federal bodies, except for BREIT and FOX, which utilize academic experts 9.9% and 14% of the time, respectively.
Fig. 5 shows gender bias across the ideological spectrum of news outlets, where HUFF, CNN and NYT are classified as liberal (left-leaning) sources, and NYP, FOX, and BREIT as conservative (right-leaning), as reported in Media Bias Fact Check5. The effect of news
Figure 4. Preferred Organization Type for Expertise. Distribution of organization types affiliated with news sources in expert quotes. Sources are listed from top to bottom by political leaning reported in Media Bias Fact Check. Across the board, Federal Bodies are the most common type of expertise, though The New York Times has lowest proportion. Breitbart News is the only news outlet with higher use of think tanks than academic institutions.
Figure 5. Ideology and Gender Bias. Ratio of Women to Men experts quoted by a news source. Smaller ratios signal under-representation of women. Error bars included are from bootstrapping 1000 times. Outlets are ordered left to right by political ideology. Left leaning outlets have the greatest ratio of women cited. The difference in median ratio of news outlets is found significant by the Kruskal-Wallis Test (p < 0.01).
Figure 3. Gender Composition by Organization. Gender distribution separated by type of organization. Quotes matched to organization types by fuzzy string matching to databases of organization names (Times Higher Education's 2015 World University Rankings, Index of Federal Departments and Agencies, and On Think Tanks). Error bars determined through bootstrapping 1,000 times. All organization types exhibit gender bias, with federal bodies containing the lowest proportion of women.
outlet ideology on gender representation is measured by the ratio of the number of women quoted to the number of men. A ratio of 1.0 signifies equal representation of men and women; smaller ratios signal over-representation of men.
All news sources exhibit over-representation of men, with ratios of at most 0.387. BREIT has the largest gender disparity with a ratio of 0.264, and NYT has the least gender disparity, with the share of women experts at 0.387. We use the Kruskal-Wallis H-test to compare medians for the share of women experts for left-leaning and right-leaning outlets (pictured in blue and red, respectively, in Fig. 5). The Kruskal-Wallis test reports a statistic of 8.547 (p \(<\) 0.01), signifying a statistically significant moderate effect. We conclude left-leaning news outlets exhibit less gender disparity than the right-leaning outlets.
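One plausible reading of this bootstrap-plus-Kruskal-Wallis procedure, sketched with made-up quote counts standing in for the real per-outlet extractions:

```python
import numpy as np
from scipy.stats import kruskal

rng = np.random.default_rng(0)

def bootstrap_ratios(women: int, men: int, n_boot: int = 1000):
    """Resample the gender labels of an outlet's quotes; return the
    women-to-men ratio for each bootstrap sample."""
    labels = np.array([1] * women + [0] * men)  # 1 = woman, 0 = man
    draws = rng.choice(labels, size=(n_boot, labels.size), replace=True)
    w = draws.sum(axis=1)
    return w / (labels.size - w)

# Hypothetical per-outlet quote counts (women, men).
left = np.concatenate([bootstrap_ratios(350, 1000), bootstrap_ratios(320, 900)])
right = np.concatenate([bootstrap_ratios(200, 1000), bootstrap_ratios(180, 950)])

stat, p = kruskal(left, right)
print(f"Kruskal-Wallis H = {stat:.3f}, p = {p:.3g}")
```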
### Prestige Bias
We now take a closer look at experts from academic institutions. Fig. 6 shows the number of times an academic institution is mentioned in the news as a function of its placement in the Times Higher Education's World Rankings. Spearman correlation measures monotonicity between two variables and scores between -1 and 1 (0 means no correlation). The scatter plot shows a downward trend, with a Spearman coefficient of -0.379 (p \(<\) 0.01), indicating more prestigious (higher-ranked) institutions generally receive more mentions in the news than less prestigious (lower-ranked) institutions.
We measure prestige bias using the Gini coefficient. Gini is a popular statistical measure of inequality, here applied to the distribution of attention across academic institutions. A small Gini coefficient means attention (the number of mentions of an institution) is equally distributed across universities of any rank, while a Gini coefficient close to one means one university gets all the attention while the rest receive no mentions. The Gini coefficient of mentions of institutions in our data is 0.568, suggesting the existence of prestige bias: journalists prefer to turn to experts from the same high-ranking institutions again and again.
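A sketch of the two statistics used in this section, computed over illustrative mention counts rather than the paper's data:

```python
import numpy as np
from scipy.stats import spearmanr

def gini(counts) -> float:
    """Gini coefficient of a non-negative array of mention counts."""
    x = np.sort(np.asarray(counts, dtype=float))
    n = x.size
    shares = np.cumsum(x) / x.sum()       # cumulative shares of attention
    return (n + 1 - 2 * shares.sum()) / n

rng = np.random.default_rng(0)
ranks = np.arange(1, 401)                  # 1 = most prestigious
mentions = rng.poisson(lam=200.0 / ranks)  # illustrative: attention decays with rank

print(f"Gini = {gini(mentions):.3f}")
rho, p = spearmanr(mentions, ranks)
print(f"Spearman rho = {rho:.3f}, p = {p:.3g}")
```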
But what if news outlets are turning to prestige within a domain relevant to the pandemic, like public health? For this case, we rank institutions by prestige in the field of public health using the US News' ranking of US schools of public health6 in Figure 7. If journalists were seeking out public health experts, we would expect them to pay more attention to experts from these 48 institutions with higher-ranked schools of public health, resulting in a much lower Gini coefficient. However, the Gini coefficient drops only slightly to 0.537, suggesting that prestige bias is driven by extraneous factors such as the institution's "brand name" rather than expertise in the relevant field of public health.
Figure 8. Ideology and Prestige Bias. Boxplot bins the mentions of academic institutions by their rankings, and shows the distributions of the share of mentions of those institutions made by left- and right-leaning news sources. Yellow dots represent group means. Left-leaning news outlets display stronger preference for experts from prestigious institutions (top-50 ranked universities).
Figure 6. Prestige Bias. Number of mentions of an academic institution in the news as a function of its ranking (for institutions ranked by the Times Higher Education's World Rankings) shows journalists pay more attention to higher-ranking institutions. Lower rankings signal higher prestige.
Figure 7. Public Health Ranking and Prestige. Number of academic institution mentions by public health ranking. Among the top 48 public health institutions, only a handful with high prestige are heavily utilized by journalists.
#### 4.3.1. Ideology and Prestige Bias
We analyze overlap between news outlet ideological leaning and tendency to mention higher ranked universities. The boxplot in Fig. 8 shows the distribution of academic expert mentions made by the left-leaning and right-leaning news outlets. The universities which experts are affiliated with are binned by school rank. The boxplot shows the distribution over the share of institution mentions within each bin made by the news sources. The boxplot shows the interquartile range, outliers and median for each bin's total mentions. The means within each bin are displayed with yellow points. Prestige bias exists at both ends of the ideological spectrum, though left-leaning news outlets display more prestige bias, i.e., stronger preference for experts from the top-50 academic institutions.
We control for political orientation of news outlet in comparing academic institution mentions and rankings. Left-leaning news sources have a Gini coefficient of 0.573 and Spearman coefficient -0.439 (p \(<\) 0.01). Right-leaning news sources have a Gini coefficient of 0.562 and Spearman coefficient -0.317 (p \(<\) 0.01). This suggests that journalists from conservative sources divide their attention more evenly across institutions than liberal journalists, though the difference is small.
#### 4.3.2. Gender and Prestige Bias
Next we examine whether prestige bias varies with expert gender. Fig. 9 shows the cumulative distribution of the share of mentions of experts of either gender affiliated with top-\(n\) academic institutions. Values of \(n\) are 5, 10, 15, etc. We observe almost no difference in how men's and women's coverage varies with prestige. For each gender, the top-50 highest ranked universities account for half of the academic expert mentions (49.6% for women and 50.1% for men). For women, the Gini coefficient of university mentions is 0.56 and the Spearman correlation coefficient between the number of mentions and ranking is -0.409 (p \(<\) 0.01). For men, the Gini coefficient is 0.572 and the Spearman coefficient -0.397 (p \(<\) 0.01). This shows that prestige inequality is slightly higher for men than women.
We expected that women would need to be from more prestigious institutions to be considered qualified experts. However, we see in Fig. 9 that there is no significant difference in the prestige distribution for men and women. This lack of difference reveals that gender bias is not substantially amplified within expert mentions from highly ranked universities.
## 5. Discussion and Conclusion
Involving a diverse set of perspectives in the research process enhances quality of research. However, women make up the minority of faculty in most science departments, especially in the more senior and leadership positions (Krishnan et al., 2016). Additionally, the reward structure of science itself creates disparities through the "Matthew effect" (Krishnan et al., 2016), in which highly regarded scientists obtain disproportionate resources and become more likely to produce more successful work. We see this in an example where reviewers in a single-blind peer review process are more likely to accept for publication papers from authors from more prestigious universities (Krishnan et al., 2016). The researchers from a few prestigious institutions hold a greater influence in shaping scientific research than authors from the less prestigious schools with more diverse populations (Krishnan et al., 2016).
Our analysis of a large pandemic-related news corpus shows that women are heard from less frequently than men. Women compose 24% of expert mentions, though the representation rises to 31% for unique experts. This suggests that a few men, possibly public figures such as Donald Trump or Andrew Cuomo, are disproportionately represented. Rendering women with less visibility than men paves the way for women's concerns, such as reopening childcare centers and schools, to receive less attention from policy makers.
We observe two different types of ideological bias. The representation of women, measured by the ratio of women quoted to men, is always higher in left-leaning sources than in right-leaning ones. Additionally, left-leaning news sources display higher prestige bias than right-leaning ones. All news sources could improve in representation.
We showed that journalists reporting on Covid-19 paid much more attention to experts with more prestigious affiliations. The gender representation found is starkly different from that of public health, which is a field one would hope Covid-19 reporting relies upon. When ranking experts by the prestige of their institution in the field of public health, ideally the distribution would be somewhat even. However, we observe only a marginally smaller Gini coefficient. This suggests that journalists are either seeking out irrelevant expertise, or wildly misrepresenting the public health field. Journalists have a unique ability to hand pick their subjects, thereby shaping public perception of who constitutes scientific expertise. By focusing their--and the public's--attention on the same small group of high-ranked universities, they risk perpetuating the cycle of advantage for the privileged minority. To our knowledge, this is the first large scale study of prestige bias in news reporting.
Our study has a number of limitations. Gender classification is a major limitation. It has been shown that Named Entity Recognition has worse performance identifying women's names as PERSON entities compared to men's names (Krishnan et al., 2016). As a result, it is likely that our extractions obtained through NER are under-representative of the number of women in the data set. Another gender-based limitation is that the gender predictor used has a misleading androgynous
Figure 9. Gender and Prestige Bias. Cumulative distribution of mentions for the top 100 institutions broken down by gender. Shows minimal difference in prestige bias between men and women in academia. Roughly one third of quotations come from top 20 institutions, regardless of gender. Men are overrepresented among the quotations from top 10 institutions.
category. Rather than capturing names with equitable gender balance or high association with non-binary people, the androgynous category captures popular Asian last names. The gender classifier is based on a dataset built around cisgender people with historically Western names, meaning our study inherently focuses on cisgender people from Western countries. Such exclusion of non-cisgender people in research continues a long legacy of transgender erasure (Bordes and McGaugh, 2017).
Our work can be expanded by auditing the gender and institutional prestige of Coronavirus experts who are active online on Twitter. We hope to compare network structure by gender category and see how engagement-increasing behaviors differ by gender. We are also interested in hate speech analysis of how scientists of different genders are interacted with on Twitter. Twitter also gives users opportunities to provide their pronouns, allowing us to look at the under-representation of the gender-queer community in scientific research and expert positions.
This large scale analysis of Covid-19 expertise helps us better understand information ecosystems in times of crisis. We observe that men are the dominant sources of expertise, and that a positive feedback loop may occur in news media where men with research success are featured more and therefore are better positioned for further success (and further features in the news media). By automating this analysis, we demonstrate the utility of NLP tools. We hope these findings will help news media more faithfully represent society's diversity.
## Ethics Statement
This work uses publicly available published news articles from well known news outlets. Thus, the data set raises few ethical issues around privacy. Ethical concerns around gender inference mechanisms are discussed further in the Discussion and Conclusion section. The code for this paper will be made available on GitHub.
## Acknowledgements
This work was supported, in part, by the Defense Advanced Research Projects Agency under contract W911NF192027.
---

2306.04480 | Exploring the Compositional Generalization in Context Dependent Text-to-SQL Parsing | Aiwei Liu, Wei Liu, Xuming Hu, Shuang Li, Fukun Ma, Yawen Yang, Lijie Wen | 2023-05-29T12:36:56Z | http://arxiv.org/abs/2306.04480v1

# Exploring the Compositional Generalization in Context Dependent Text-to-SQL Parsing
###### Abstract
In the context-dependent Text-to-SQL task, the generated SQL statements are refined iteratively based on the user input utterance from each interaction. The input text from each interaction can be viewed as component modifications to the previous SQL statements, which could be further extracted as the modification patterns. Since these modification patterns could also be combined with other SQL statements, the models are supposed to have compositional generalization to these novel combinations. This work is the first exploration of compositional generalization in context-dependent Text-to-SQL scenarios. To facilitate related studies, we constructed two challenging benchmarks named CoSQL-CG and SParC-CG by recombining the modification patterns and existing SQL statements. The following experiments show that all current models struggle on our proposed benchmarks. Furthermore, we found that better aligning the previous SQL statements with the input utterance could give models better compositional generalization ability. Based on these observations, we propose a method named p-align to improve the compositional generalization of Text-to-SQL models. Further experiments validate the effectiveness of our method. Source code and data are available.1
Footnote 1: [https://github.com/THU-BPM/CD-Text2SQL-CG](https://github.com/THU-BPM/CD-Text2SQL-CG)
+Equally Contributed.
Footnote †: \({}^{\dagger}\) Corresponding author.
## 1 Introduction
Recently, the poor generalization of semantic parsing models to out-of-distribution samples has come under increasing attention (Keysers et al., 2020). These examples are usually obtained by recombining existing structures. For example, in the SCAN dataset Lake and Baroni (2018), models may fail to parse "jump twice and walk" even though "jump twice" and "walk" could be parsed successfully. The ability to generalize to novel combinations is also known as compositional generalization. Text-to-SQL Yu et al. (2018) allows non-expert users to access the information from the database by converting the user input text into SQL statements executed in the database. As a typical semantic parsing task, the study of its compositional generalization is of great importance.
Existing works explore the compositional generalization of Text-to-SQL only in the scenario that precisely maps stand-alone utterances to SQL queries. Shaw et al. (2021) define the atom and compound for SQL statements and propose the TMCD split to repartition the dataset. Gan et al. (2022) annotate the alignment of sub-sentence and sub-SQL in the spider dataset Yu et al. (2018) and then recombine these sub-SQLs and sub-sentences. In these settings, the SQL statements and user questions in the constructed test split tend to be much more complex. However, it is difficult for users to express complex queries in a stand-alone sentence. In real scenarios, users often start with a simple query and continuously combine additional query conditions with subsequent questions.
In this work, we focus on the study of compositional generalization in context-dependent Text-to-SQL tasks, which is more natural and applicable.
Figure 1: During the inference phase, the base queries and their modifications could be re-combined. Models with compositional generalization ability should successfully parse these novel combinations.
In the context-dependent Text-to-SQL task (Yu et al., 2019), the generated SQL statements are refined based on the user input text during each interaction. The input text from each interaction can be viewed as component modifications to the previous SQL statement, which could be further extracted as the modification patterns. Since these modification patterns could also be combined with other SQL statements, the models are supposed to have compositional generalization to these novel combinations. For example, in Figure 1, the modifications and the queries of the first turn in the training phase could be re-combined in the inference phase. Applicable models are supposed to successfully parse these novel combinations.
To better investigate compositional generalization in the context-dependent Text-to-SQL, we first construct compositional generalization benchmarks based on the existing datasets. First, we extract the modification patterns from the training dataset and then recombine them with the existing SQL statements in the development set. Note that in the compositional generalization setting, only the recombination results not existing in the training set are kept. To generate the corresponding utterances, we use a semi-automatic approach. The utterances are initially generated by a pre-trained model fine-tuned on the training data, and then reviewed and verified by human experts. As a result, we create two benchmarks, CoSQL-CG and SParC-CG, specifically for the datasets CoSQL (Yu et al., 2019) and SParC (Yu et al., 2019). Our experiments reveal that current state-of-the-art models perform poorly on these benchmarks, emphasizing the significance of enhancing compositional generalization capabilities.
We further explore how to improve the compositional generalization in context-dependent Text-to-SQL tasks. Inspired by the previous works to improve compositional generalization by fine-grained alignment of inputs and outputs (Zheng and Lapata, 2022; Akyurek and Andreas, 2021), we propose a method to better align the current text with the previous SQL statements. We follow the common practice of most competitive Text-to-SQL models which take the concatenation of all utterances as input. Specifically, our proposed p-align method extracts the embedding of the text from each interaction after the encoding process and then decodes them into the corresponding SQL statements separately. Further experiment results show that our p-align method could effectively improve the compositional generalization of current models, which also demonstrates that better alignment of text and SQL statements and the introduction of previous SQL statements are of great importance.
To summarize, the main contributions of our paper are as follows:
* To the best of our knowledge, we are the first to explore compositional generalization in context-dependent Text-to-SQL.
* We construct two benchmarks named CoSQL-CG and SParC-CG to better facilitate the relevant research.
* We propose a simple and effective method named p-align to improve the compositional generalization ability of models.
## 2 Related Work
### Context dependent Text-to-SQL
Most current research on Text-to-SQL is conducted under the context-independent setting, with many recent methods achieving excellent results on the Spider dataset (Yu et al., 2018), including graph-based methods such as LGESQL (Cao et al., 2021), RAT-SQL (Wang et al., 2020) and ISESL-SQL (Liu et al., 2022), as well as sequence-to-sequence-based methods like PICARD (Scholak et al., 2021). Recently, with the release of the two datasets CoSQL (Yu et al., 2019) and SParC (Yu et al., 2019), Text-to-SQL parsing under the context-dependent setting, which is more realistic and applicable, has attracted much attention. Subsequently, various methods have been proposed. Among them, SCoRe (Yu et al., 2021) and STAR (Cai et al., 2022) aim to train better pre-trained models to improve the parsing ability of models. Also, many sequence-to-sequence methods based on the T5 pre-trained model, like PICARD (Scholak et al., 2021) and RASAT (Qi et al., 2022), have achieved great success. Meanwhile, more methods pay attention to contextual information or conversation history during encoding, including IGSQL (Cai and Wan, 2020), HIE-SQL (Zheng et al., 2022), and IST-SQL (Wang et al., 2021). Other rewriting-based methods like DELTA (Chen et al., 2021) and CQR-SQL (Xiao et al., 2022) reformulate the current and historical texts into an individual sentence. Different from the previous works, we mainly focus
on exploring compositional generalization under context-dependent text-to-SQL settings.
### Compositional Generalization
Compositional Generalization is an important metric for evaluating the robustness of the model Liu et al. (2022) in the field of natural language processing. For semantic parsing tasks, the ability to generalize to structures generated by systematically combining known atomic components is of vital importance. Lake and Baroni (2018) propose the SCAN dataset, which maps word sequences into navigation command sequences (e.g., jump twice \(\rightarrow\) JUMP JUMP). Their training/evaluation splits are constructed in a compositional generalization way. Keysers et al. (2020) introduce the CFQ dataset and propose distribution-based compositionality assessment to measure compositional generalization. Hupkes et al. (2020) summarize five different compositional generalization splits and combine them to generate PCFG SET. Many works focus on improving the compositional generalization of models. This is usually achieved by introducing more detailed lexicon or lexicon-style alignments Zheng and Lapata (2022); Akyurek and Andreas (2021) or adopting a grammar-based decoder Herzig and Berant (2021); Qiu et al. (2022); Guo et al. (2020). Another line of work attempts to synthesize examples utilizing grammar and generative models for data augmentation Qiu et al. (2022); Andreas (2020); Jia and Liang (2016).
Recently, the compositional generalization of Text-to-SQL parsing has gained more and more interest. Shaw et al. (2021) define the atom and compound for SQL statements and propose the TMCD split to repartition the dataset. Gan et al. (2022) annotate the alignment of sub-sentence and sub-SQL in the spider dataset Yu et al. (2018) and then recombine these sub-SQLs and sub-sentences. The above works only focus on the Text-to-SQL parsing in the context-independent setting, which precisely maps stand-alone utterances to SQL queries. However, it is difficult for users to express complex queries in a stand-alone sentence. In this work, we first explore the compositional generalization for context-dependent Text-to-SQL Parsing.
## 3 Compositional Generalization in Context-dependent Text-to-SQL
To facilitate the understanding of the following sections, we provide a more detailed explanation of compositional generalization in context-dependent Text-to-SQL parsing in this section.
The template split is a typical compositional generalization setting, where the structure templates in the training and test set are completely separated. Our compositional generalization scenario can be viewed as an extension of the template split, where the combinations of basic SQL templates and modification templates in the training and test set are separated. Note that the basic SQL and modification templates in the test set all appear in the training set individually. For instance, in Figure 1, in the inference phase, although all the templates are seen during training, their combinations are novel.
From another point of view, our compositional generalization scenario could also be viewed as a special case of the TMCD split (Shaw et al., 2021), where the SQL templates and modification templates could be seen as atoms and their combination results are the compounds. Note that the utterances corresponding to the SQL templates (the first atoms) are provided during training, which could be further utilized to improve compositional generalization (Section 5).
## 4 Benchmark construction
Since there are few examples satisfying the compositional generalization setting in the original SParC and CoSQL development sets, we first construct new benchmarks to facilitate the related research.
As illustrated in Figure 2, the benchmark construction process can be divided into four steps. The first step is to filter out context-independent examples; next, modification patterns are extracted from the remaining examples; after that, these modification patterns are combined with other SQL statements, and finally, corresponding utterances are generated.
### Filter out context-independent examples
It is observed that a significant number of examples in the SParC or CoSQL datasets are context-independent, meaning that no context information is needed to generate the current queries. In this work, we propose a schema-linking-based method to filter out these context-independent examples.
Schema linking is a common technique in Text-to-SQL which links the exact or partial occurrences of column/table names in the question, such as AIRLINES and Abbreviation in Figure 2(a). Our main motivation is that if the current example is context-dependent, there are some column/table
names not linked to the current question but linked to history questions (context), such as the first example in Figure 2(a). Specifically, the schemas in the target query are represented as \(\mathrm{S}\). We use the n-gram matching method to find occurrences of \(\mathrm{S}\) in the current question, where the matched schemas are represented as \(\mathrm{S}_{\mathrm{c}}\). Similarly, the matched schemas in the history questions are represented as \(\mathrm{S}_{\mathrm{p}}\). The current example is context-dependent only if \(\mathrm{S}_{\mathrm{p}}-\mathrm{S}_{\mathrm{c}}\neq\emptyset\). Finally, we keep 4270 and 2347 context-dependent examples in the SParC and CoSQL training sets respectively.
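A simplified sketch of this filter; the n-gram matcher and the toy example below are our own illustration of the rule \(\mathrm{S}_{\mathrm{p}}-\mathrm{S}_{\mathrm{c}}\neq\emptyset\):

```python
import re

def ngrams(text: str, max_n: int = 4):
    toks = re.findall(r"[a-z0-9_]+", text.lower())
    return {" ".join(toks[i:i + n])
            for n in range(1, max_n + 1)
            for i in range(len(toks) - n + 1)}

def linked(schemas, utterance):
    """Schema items whose names occur (as n-grams) in the utterance."""
    grams = ngrams(utterance)
    return {s for s in schemas if s.lower() in grams}

def is_context_dependent(query_schemas, current_q, history_qs):
    s_c = linked(query_schemas, current_q)
    s_p = linked(query_schemas, " ".join(history_qs))
    return bool(s_p - s_c)  # some schema links appear only in the history

print(is_context_dependent(
    {"airlines", "abbreviation"},
    "What is its abbreviation?",
    ["Show me all the airlines."],
))  # -> True: "airlines" is linked only through the previous question
```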
### Generate Modification Pattern
After filtering out context-independent data, the next step is to generate modification patterns from the remaining context-dependent examples.
As shown in Figure 2(b), we first parse current and previous SQL statements into abstract syntax trees and then compare the tree structures to get the modified components. Specifically, a top-down traversal algorithm is adopted to find the different nodes. The nodes along with their children constitute the modified component. Then the generated modification component is anonymized to generate the modification template. Finally, we generate 409 and 191 modification templates for SParC and CoSQL respectively.
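A toy sketch of the top-down comparison; nested tuples stand in for the SQL abstract syntax trees, whose construction the paper does not detail:

```python
def tree_diff(prev, curr):
    """Top-down comparison: return subtrees of `curr` absent from `prev`."""
    if prev == curr:
        return []
    if (not isinstance(prev, tuple) or not isinstance(curr, tuple)
            or prev[0] != curr[0]):
        return [curr]  # node label changed: whole subtree is the modification
    diffs = []
    for p_child, c_child in zip(prev[1:], curr[1:]):
        diffs.extend(tree_diff(p_child, c_child))
    diffs.extend(curr[len(prev):])  # clauses added in the current query
    return diffs

prev_sql = ("select", ("cols", "name"), ("from", "airlines"))
curr_sql = ("select", ("cols", "name"), ("from", "airlines"),
            ("where", ("=", "country", "'USA'")))

# The WHERE clause is isolated; anonymizing its table/column names would
# then yield the modification template.
print(tree_diff(prev_sql, curr_sql))
```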
### Re-combine SQL statements
With the generated modification patterns, the next step is to re-combine these patterns with other SQL statements to generate new SQL statements.
First, modification patterns are filled with new table/column names sampled from the target database schemas to generate new modifications. Then the modifications are directly combined with the other SQL statements. Note that in the preceding modification-pattern generation process, the schema relationships are kept (e.g. primary-key and foreign-key relationships), and the table/column name sampling results must conform to these relationship constraints. As mentioned in Section 3, the combination process requires that the base SQL templates and modification templates all appear in the training set but their combinations are novel. Finally, we generate 5958 and 2594 combination results in SParC and CoSQL respectively.
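A minimal sketch of this recombination step follows, under the simplifying assumption that templates mark slots with '_' as in the sketch above; unlike the real construction, it samples columns freely and does not enforce the primary/foreign-key constraints.

```python
import random

random.seed(0)

def recombine(base_sql, template, schema):
    # Fill each '_' slot with a column sampled from the target schema,
    # then attach the filled modification to the base SQL statement.
    columns = [c for cols in schema.values() for c in cols]
    filled = " ".join(random.choice(columns) if t == "_" else t
                      for t in template.split())
    return base_sql + " " + filled

schema = {"matches": ["winner_age", "loser_entry"]}   # hypothetical schema
print(recombine("SELECT * FROM matches", "ORDER BY _", schema))
# e.g. "SELECT * FROM matches ORDER BY winner_age"
```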
### Utterance generation
The final step of our benchmark construction is to generate the context-dependent utterance for each generated SQL statement. Since pre-trained language models have shown great ability in text generation, we first utilize a fine-tuned T5 model (Raffel et al., 2020) to generate the context-dependent utterance. More specifically, the input to the T5 model is the concatenation of the modification, the previous SQL statement, and the previous utterance.
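A sketch of this generation step is shown below. The separator format, prompt field names, and model size are assumptions (the paper only specifies that the three pieces are concatenated), and in practice the model would first be fine-tuned on (modification, previous SQL, previous utterance) to current-utterance pairs before its outputs are useful.

```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

tok = T5Tokenizer.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")

# Concatenate modification, previous SQL statement, and previous utterance.
src = ("modification: ORDER BY matches.winner_age"
       " | previous sql: SELECT * FROM matches"
       " | previous utterance: Show all the matches.")
ids = tok(src, return_tensors="pt").input_ids
out = model.generate(ids, max_new_tokens=32)
print(tok.decode(out[0], skip_special_tokens=True))
```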
Because the utterances generated by the T5 model may be noisy, we further invite human experts to filter and revise the generated data. The first task of the human experts is to remove SQL statements that do not fit realistic scenarios. For example, the statement SELECT Count(loser_entry) FROM matches ORDER BY matches.winner_age is invalid because the function Count() and the clause ORDER BY usually do not appear together. The second task of the human experts is to revise the utterances generated by the T5 model, as shown in Figure 2(d). To ensure annotation consistency, we introduce two experts to double-check the annotated results. Finally, after the filtering and revising process, we get 372 and 267 questions for SParC
Figure 2: The benchmark construction process can be divided into four steps. The first step is to filter out the context-independent data; the next step is to generate modification patterns from the remaining examples; after that, the modification patterns are recombined with other queries; and the last step is to generate the corresponding utterances.
and CoSQL datasets respectively, which further construct our SParC-CG and CoSQL-CG benchmarks. More detailed statistics of the benchmarks will be described in the experiment section.
## 5 Methods
After constructing SParC-CG and CoSQL-CG, we further explore how to improve compositional generalization in context-dependent Text-to-SQL parsing. According to previous works Zheng and Lapata (2022); Akyurek and Andreas (2021), the key to improving compositional generalization is to construct better component alignment between inputs and outputs. In the context-dependent Text-to-SQL setting, the utterance-query pairs of previous interactions can be utilized to align input utterances and output queries. Based on this motivation, we propose p-align to improve the compositional generalization of existing Text-to-SQL models. Note that our method follows the common practice of most competitive Text-to-SQL models, which take the concatenation of all utterances as input.
Specifically, given the input utterances \(X=[X_{1},X_{2},...,X_{n}]\) at the n-th interaction, where \(X_{n}=[x_{1},....x_{j}]\) is an utterance with j words, the encoder generates embeddings for each word such that \(\mathbf{X}=\mathbf{H}(X)\). In the original decoding process, the resulting query \(y\) can be represented as an action sequence \([a_{1},...a_{t}]\), and the whole decoding process can be represented as the product of probabilities for each generation step as follows:
\[\prod_{t=1}^{T}p\left(a_{t}\mid\left\{a_{1},\ldots,a_{t-1}\right\},\mathbf{X} \right). \tag{1}\]
In our p-align method, the utterance embeddings of each interaction are extracted to decode the corresponding SQL statements. As shown in Figure 3, the decoder process of our p-align could be represented as:
\[\sum_{i=1}^{n}\prod_{t=1}^{T_{i}}p\left(a_{t}^{i}\mid\left\{a_{1}^{i},\ldots, a_{t-1}^{i}\right\},\mathbf{X}_{\leq i}\right). \tag{2}\]
In this way, our p-align method aligns corresponding parts of the input utterance to the previous queries and thus improves the compositional generalization ability of models.
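The sketch below spells out the resulting training objective; the decoder_loss interface is an assumption standing in for whatever grammar-based or token-level decoder the base model uses.

```python
def p_align_loss(decoder_loss, utt_embeddings, turn_ends, gold_actions):
    """Sketch of the p-align objective (Eq. 2): the utterance prefix X_{<=i}
    of every interaction i must decode that turn's own gold query.

    decoder_loss(memory, actions): assumed callable returning the decoding
        loss of `actions` given encoder states `memory`.
    utt_embeddings: encoder output over the concatenated utterances X_1..X_n.
    turn_ends[i]: index where utterance X_{i+1} ends in the concatenation.
    gold_actions[i]: gold action sequence of the query at interaction i+1.
    """
    total = 0.0
    for end, actions in zip(turn_ends, gold_actions):
        total = total + decoder_loss(utt_embeddings[:end], actions)
    return total
```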
## 6 Experiment
In this section, we first perform more detailed statistics on our constructed SParC-CG and CoSQL-CG. Then we further analyze our benchmarks with current competitive Text-to-SQL models. Finally, several experiments are conducted to verify the effectiveness of our p-align method.
### Benchmark statistics
The detailed statistics of SParC-CG and CoSQL-CG are shown in Table 1. We mainly report three statistics: # Questions, # Non-CG Questions, and # CG Questions, where # Questions is the total
| | # Questions | # Non-CG Questions | # CG Questions |
| --- | --- | --- | --- |
| **SParC** | 1625 | 491 | 31 |
| **SParC-CG** | 921 | 491 | **372** |
| **CoSQL** | 1300 | 207 | 14 |
| **CoSQL-CG** | 471 | 207 | **167** |

Table 1: The detailed statistics of the SParC-CG and CoSQL-CG benchmarks.
Figure 4: Distributions of different modification patterns in SParC-CG and CoSQL-CG benchmark.
Figure 3: The whole process of our p-align method. The input to the encoding process is the concatenation of the utterance from all interactions. In the decoding process, the utterance embeddings of each interaction are extracted to decode the corresponding SQL.
number of questions, # CG Questions is the number of questions that meet the definition of compositional generalization in Section 3, and # Non-CG Questions is the number of in-domain questions (the templates and the combinations of templates are both seen in training). The Non-CG questions in SParC-CG and CoSQL-CG are obtained directly from the SParC and CoSQL datasets. The number of CG questions in our benchmarks is far greater than that in SParC and CoSQL. Note that a large portion of the data in the SParC and CoSQL datasets is context-independent or has no context, which makes the sum of # Non-CG Questions and # CG Questions relatively small.
We present the component distributions of the modification patterns of SParC-CG and CoSQL-CG in Figure 4. The most common component in modification patterns is _where_. _Orderby_ and _groupby_ also take a large proportion. There are also many modification patterns that include multiple components, such as _where-groupby_ and _where-orderby_. Finally, the distributions of modification patterns in SParC-CG and CoSQL-CG are similar, which illustrates the consistency of our benchmark construction. Note that the _select_ components are not counted, as they are included in almost all modifications.
### Experiment Setup
**Models.** We adopt several current competitive Text-to-SQL models to explore the impact of compositional generalization. SPiC Liu et al. (2020) is a simple model which explores different methods to incorporate context questions: SPiC (Concat) concatenates context questions with current questions, SPiC (Turn) employs a turn-level encoder to capture the inter-dependencies among questions in different turns, and SPiC (Gate) uses a gate mechanism to compute the importance of each question. SCoRe Yu et al. (2021) and STaR Cai et al. (2022) are two specialized pre-trained models for RAT-SQL and LGESQL Cao et al. (2021) respectively. PICARD Scholak et al. (2021) and RASAT Qi et al. (2022) are two seq2seq models based on the pre-trained T5 model Raffel et al. (2020).
**Evaluation Metric.** We mainly use the _question match_ (QM) Yu et al. (2019) as our evaluation metric, which is the exact set matching score Yu et al. (2018) over all questions. The exact set matching score decomposes predicted queries into SQL components such as SELECT and WHERE and then computes scores for each component. For each model, we report the QM on the original SParC/CoSQL development set as well as on the Non-CG and CG benchmarks. Note that the _interaction match_ Yu et al. (2019) is not reported in our paper because we are only interested in the scores of the model on questions satisfying the compositional generalization condition.
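For intuition, here is a toy version of the component-wise comparison behind the exact set matching score; the official evaluation script of Yu et al. (2018) parses far more structure (nesting, aliases, values), so this sketch is illustrative only.

```python
def exact_set_match(pred_clauses, gold_clauses):
    # Compare each clause as an unordered set of comma-separated items,
    # so "SELECT a, b" matches "SELECT b, a".
    def items(clause):
        return frozenset(x.strip().lower() for x in clause.split(",") if x.strip())
    keys = pred_clauses.keys() | gold_clauses.keys()
    return all(items(pred_clauses.get(k, "")) == items(gold_clauses.get(k, ""))
               for k in keys)

print(exact_set_match({"SELECT": "name, age"}, {"SELECT": "age, name"}))  # True
print(exact_set_match({"SELECT": "name"}, {"SELECT": "age"}))             # False
```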
### Evaluation on SParC-CG/CoSQL-CG
We report the question match accuracy on SParC and CoSQL datasets under three benchmarks: Dev, Non-CG, and CG in Table 2.
Based on the above results, we summarize the following observations. (1) The accuracy of all models significantly decreases under the compositional generalization setting. Specifically, the QM on SParC-CG and CoSQL-CG decreases by 39.3 and 33.6 points on average compared to the original development sets, which indicates that current models lack compositional generalization ability. (2) The models perform better on the Non-CG benchmarks than on the original development sets (by 8.4 and 6.5 points on average for SParC and CoSQL respectively), which demonstrates that models generalize easily to in-domain data. (3) Concat generalizes better to CG
| Methods / Datasets | SParC Dev | SParC Non-CG | SParC CG | CoSQL Dev | CoSQL Non-CG | CoSQL CG |
| --- | --- | --- | --- | --- | --- | --- |
| SPiC (Concat) + BERT-Large Liu et al. (2020) | 55.3 | 63.4 | 18.9 (36.4↓) | 45.2 | 52.3 | 13.3 (31.9↓) |
| SPiC (Turn) + BERT-Large Liu et al. (2020) | 54.6 | 62.1 | 18.2 (36.4↓) | 44.8 | 51.3 | 12.2 (32.6↓) |
| SPiC (Gate) + BERT-Large Liu et al. (2020) | 54.3 | 62.4 | 17.3 (37.0↓) | 44.2 | 51.8 | 12.4 (31.8↓) |
| RAT-SQL + SCoRe Yu et al. (2021) | 60.4 | 69.6 | 22.4 (38.0↓) | 52.1 | 55.6 | 20.4 (31.7↓) |
| LGESQL + ELECTRA-Large Cao et al. (2021) | 65.0 | 73.4 | 25.3 (39.7↓) | 54.4 | 62.4 | 21.0 (33.4↓) |
| LGESQL + STaR Cai et al. (2022) | 66.9 | 75.4 | 25.8 (41.1↓) | 59.7 | 68.4 | 26.3 (33.4↓) |
| PICARD + T5-3B Scholak et al. (2021) | - | - | - | 56.9 | 58.1 | 21.5 (35.4↓) |
| RASAT + T5-3B Qi et al. (2022) | 66.7 | 75.8 | 22.0 (44.7↓) | 58.8 | 67.9 | 20.4 (38.4↓) |

Table 2: Question match accuracy of current competitive models on three different benchmarks: Dev, Non-CG, and CG. For all the models, we adopt the given parameters.
questions than Turn and Gate. Therefore, our p-align is only designed for the Concat method. (4) The grammar-tree-based decoder (LGESQL) and the larger language model (T5-3B) help improve compositional generalization ability.
### Detailed Evaluation
**Evaluation at Different Levels of Difficulty.** SQL queries can be divided into four difficulty levels based on the complexity of the SQL statements: easy, medium, hard, and extra hard. To better demonstrate the performance in the compositional generalization setting, we conduct further evaluations at different levels of difficulty. As shown in Figure 5(a)-(b), the STaR model performs worse on the CG benchmark than on the original development set at all difficulties, which further indicates that the model's compositional generalization ability requires improvement. Meanwhile, there is an obvious improvement on the Non-CG benchmark compared to the original development set.
**Evaluation at Different Turns.** We further illustrate the question match accuracy on the three benchmarks as the number of conversation turns increases in Figure 5(c)-(d). The accuracy decreases sharply on the CG benchmark and the original development set while staying stable on the Non-CG benchmark. This suggests that the compositional generalization ability of models decreases as the number of conversation turns increases.
**Evaluation on Different Components.** To better investigate the poor performance of current competitive models under the compositional generalization setting, we further report the question match accuracy on the detailed SQL components in Table 3. The reported results are averaged over STaR and RASAT on the three benchmarks of SParC. As demonstrated in the table, nearly all components' accuracy significantly decreases under the compositional generalization setting, which illustrates that the impact of compositional generalization is spread evenly across components.
### Evaluation of p-align method
Table 4 shows the results of different models with and without p-align on the three benchmarks of SParC and CoSQL. We choose SPiC (Concat) + BERT-Base, SPiC (Concat) + BERT-Large, and LGESQL + ELECTRA-Large as our base models because the other models are either customized pre-trained models (STaR and SCoRe) or use a very large model (T5-3B). All the hyperparameters are the same as in the original models.
Overall, our p-align method significantly improves the performance of the models on the CG benchmarks, with average improvements of 3.2 and 2.3 points on the SParC-CG and CoSQL-CG benchmarks respectively. The improvement on the Dev and Non-CG benchmarks is relatively small, at 0.77 and 0.35 points on average respectively, which suggests that our method is particularly effective in compositional generalization settings. These results support our hypothesis that improving alignment between utterances and queries can enhance a model's compositional generalization abilities, and this should be considered as a potential direction for future research.
| **Error component** | **STaR** | **RASAT** | **LGESQL** |
| --- | --- | --- | --- |
| Context Info | 24 | 15 | 25 |
| Modification Info | 149 | 136 | 139 |
| Context & Modification Info | 112 | 128 | 127 |

Table 5: Statistics of the error types on the SParC-CG benchmark.
| **SQL Components** | **DEV** | **Non-CG** | **CG** |
| --- | --- | --- | --- |
| SELECT | 84.6 | 88.2 | 60.2 |
| SELECT (no AGG) | 86.3 | 89.3 | 62.9 |
| WHERE | 80.6 | 91.8 | 62.5 |
| WHERE (no OP) | 85.1 | 95.3 | 69.2 |
| GROUP BY (no HAVING) | 81.1 | 85.7 | 66.4 |
| GROUP BY | 76.9 | 81.6 | 54.5 |
| ORDER BY | 78.2 | 82.0 | 58.3 |
| AND/OR | 99.0 | 99.3 | 91.2 |
| KEYWORDS | 86.3 | 92.8 | 67.1 |

Table 3: Accuracy on the different SQL components. The reported results are the average results over STaR and RASAT on the three benchmarks of SParC.
| **Methods** | **DEV** | **Non-CG** | **CG** |
| --- | --- | --- | --- |
| *SParC* | | | |
| SPiC (Concat) + BERT-Base | 47.6 | 53.5 | 8.9 |
| w. p-align | 50.6 | 54.1 | 16.4 (7.5↑) |
| SPiC (Concat) + BERT-Large | 55.3 | 63.4 | 19.5 |
| w. p-align | 56.1 | 63.8 | 20.6 (1.1↑) |
| LGESQL + ELECTRA-Large | 65.0 | 73.4 | 25.3 |
| w. p-align | 64.8 | 73.0 | 26.2 (0.9↑) |
| *CoSQL* | | | |
| SPiC (Concat) + BERT-Base | 39.2 | 35.0 | 5.2 |
| w. p-align | 40.5 | 36.2 | 9.6 (4.4↑) |
| SPiC (Concat) + BERT-Large | 45.2 | 52.3 | 12.2 |
| w. p-align | 45.5 | 52.7 | 14.4 (2.2↑) |
| LGESQL + ELECTRA-Large | 54.4 | 62.4 | 21.0 |
| w. p-align | 53.8 | 62.3 | 21.2 (0.2↑) |

Table 4: The results of different models with and without p-align on the three benchmarks of SParC and CoSQL.
### Error analysis
To evaluate the compositional generalization ability of current models, we select four incorrect prediction results from the SParC-CG benchmark. For each example, we provide the context, the current question, the correct query, and the prediction results from STaR and RASAT.
As illustrated in Figure 6, in the first two scenarios, the models struggle to accurately interpret the changes brought about by current questions, despite maintaining a grasp of the context information. Conversely, in the third case, the models are able to interpret the modifications of the current question, but fail to take into account the context information. The fourth case represents the worst-case scenario, with the models unable to correctly parse either the modifications or the context information. Note that the incorrect results predicted by both models in the first three cases are similar, indicating that the failure of the current models to perform well in a compositional generalization setting is a widespread issue, not an isolated incident.
The presented case study categorizes three scenarios where current models make incorrect predictions: failing to consider contextual information, failing to interpret modifications, and failing to understand both modifications and context. We further conduct a statistical analysis on the SParC-CG benchmark in Table 5 and find that the majority of errors occur when models cannot interpret modifications. Additionally, when models neglect context, they also tend to misinterpret modifications. Interestingly, the proportion of errors is quite similar across the evaluated models, indicating that the compositional generalization challenges they face are consistent.
## 7 Conclusion
In this study, we conduct the first exploration of compositional generalization in context-dependent Text-to-SQL scenarios. To support further research in this area, we construct two benchmarks named SParC-CG and CoSQL-CG composed of
Figure 5: The results on different benchmarks by varying the difficulty levels of the data (a-b) and by varying the conversation turns (c-d). We use the STaR model here as an example.
Figure 6: Four examples from SParC-CG benchmark and the corresponding wrong prediction results of STaR and RASAT. These examples are categorized according to the different errors.
out-of-distribution examples. Additionally, we introduce the p-align method to enhance the compositional generalization capabilities of existing models. Further experiments show that current models perform poorly on our constructed benchmarks and demonstrate the effectiveness of our p-align method. Also, with the recent advancements in generative language models such as GPT-3.5 and GPT-4 (OpenAI, 2023), explorations into these models Liu et al. (2023) should constitute a significant part of future work.
## Acknowledgement
The work was supported by the National Key Research and Development Program of China (No. 2019YFB1704003), the National Nature Science Foundation of China (No. 62021002), Tsinghua BNRist and Beijing Key Laboratory of Industrial Bigdata System and Application.
## 8 Limitations
In this paper, the approaches explored for improving compositional generalization under the context-dependent setting are limited in scope. We only construct a better component alignment between inputs and outputs for models that take the concatenation of all utterances as input. However, other methods, such as using a turn-level encoder or implementing a gate mechanism, should also be considered. Additionally, other families of methods are left unexplored. Future research could investigate data augmentation techniques Hu et al. (2022) and enhanced training objectives, such as meta-learning Hu et al. (2021) and contrastive learning Liu et al. (2022c); Li et al. (2023); Hu et al. (2020), as potential avenues for improvement.
|
2308.12996 | The disordered Dicke model | We introduce and study the disordered Dicke model in which the spin-boson
couplings are drawn from a random distribution with some finite width.
Regarding the quantum phase transition we show that when the standard deviation
$\sigma$ of the coupling strength gradually increases, the critical value of
the mean coupling strength $\mu$ gradually decreases and after a certain
$\sigma$ there is no quantum phase transition at all; the system always lies in
the super-radiant phase. We derive an approximate expression for the quantum
phase transition in the presence of disorder in terms of $\mu$ and $\sigma$,
which we numerically verify. Studying the thermal phase transition in the
disordered Dicke model, we obtain an analytical expression for the critical
temperature in terms of the mean and standard deviation of the coupling
strength. We observe that even when the mean of the coupling strength is zero,
there is a finite temperature transition if the standard deviation of the
coupling is sufficiently high. Disordered couplings in the Dicke model will
exist in quantum dot superlattices, and we also sketch how they can be
engineered and controlled with ultracold atoms or molecules in a cavity. | Pragna Das, Sebastian Wüster, Auditya Sharma | 2023-08-24T18:00:08Z | http://arxiv.org/abs/2308.12996v1 | # The disordered Dicke model
###### Abstract
We introduce and study the disordered Dicke model in which the spin-boson couplings are drawn from a random distribution with some finite width. Regarding the quantum phase transition we show that when the standard deviation \(\sigma\) of the coupling strength gradually increases, the critical value of the mean coupling strength \(\mu\) gradually decreases and after a certain \(\sigma\) there is no quantum phase transition at all; the system always lies in the super-radiant phase. We derive an approximate expression for the quantum phase transition in the presence of disorder in terms of \(\mu\) and \(\sigma\), which we numerically verify. Studying the thermal phase transition in the disordered Dicke model, we obtain an analytical expression for the critical temperature in terms of the mean and standard deviation of the coupling strength. We observe that even when the mean of the coupling strength is zero, there is a finite temperature transition if the standard deviation of the coupling is sufficiently high. Disordered couplings in the Dicke model will exist in quantum dot superlattices, and we also sketch how they can be engineered and controlled with ultracold atoms or molecules in a cavity.
## I Introduction
The Dicke model [1], which describes the interaction between light and matter, is of fundamental importance within the field of quantum optics. It exhibits a variety of interesting phase transitions covering quantum phase transitions [2; 3; 4; 5; 6] (QPT), excited-state quantum phase transitions [7; 8] (ESQPT) and thermal phase transitions [7; 8; 9; 10; 11] (TPT). The QPT takes place in the thermodynamic limit of infinite atom number, \(N\rightarrow\infty\), where the system goes from the normal phase (NP) to the super-radiant phase (SP) [12] at some critical coupling strength \(g_{c}\)[3] between spins and bosons. If temperature is introduced to the system, for \(g>g_{c}\), there is a critical temperature \(T_{c}\), above which the system returns to the NP from the SP, whereas for \(g<g_{c}\) the system lies in the NP for all temperatures [7; 8; 9; 13; 14; 15; 16].
Here, we generalize the standard Dicke model towards disorder in the coupling strength \(g\), for which we propose several practical realisations. While the role of disorder in the more general spin-boson model has been considered both in theoretical [17; 18; 19; 20; 21; 22] and experimental [23; 24; 25] studies, the exploration of disorder-induced phenomena within this context is still at a nascent stage. We focus on those here, with the aid of tools from quantum information theory such as mutual information [26; 27; 28] between two spins as a function of temperature, whose usefulness has been demonstrated for clean Dicke models earlier [8; 29].
In the usual clean Dicke model, it is well known that the QPT [2; 3; 4; 5; 6] occurs at some critical light matter coupling strength. We find that for the disordered Dicke model both the mean and the standard deviation of the random coupling distribution play a crucial role in the QPT. If either one of them or both are high, then the ground state exhibits super-radiant behaviour. To show this, we numerically calculate the ground state energy and average boson number as a function of the coupling distribution for the disordered Dicke model. We verify our numerics with the aid of available analytical results [3] for the ground state energy and the average boson number across the QPT in the usual Dicke model. By carrying out a disorder-averaging of their results, we obtain an approximate expression for the critical line of the QPT in the disordered model. The behaviour of the observables (ground state energy and average boson number) around the critical line that we obtained using a Taylor series expansion shows broad agreement with our numerics. Moreover, we show how a symmetry of the Hamiltonian can be exploited along with a heuristic argument to obtain the line of quantum criticality in a more accurate manner.
To understand the thermal phase transition, we follow methods for which the basis was laid in Ref. [9; 30], and calculate the partition function of the disordered Dicke model to obtain the critical temperature in terms of the disorder coupling strength. Numerically we calculate the mutual information between two spins for the disordered Dicke model by a method similar to our earlier work [8]
Figure 1: Schematic of the disordered Dicke model where \(N\) two levels atoms are coupled to a single mode bosonic field with different spin-boson coupling strengths \(g_{k}\), and two possible realisations. The frequency of the bosonic mode is \(\omega\) and the gap between two levels \(|1\rangle\) and \(|2\rangle\) of each atom is \(\omega_{0}\). (left) These could be ultra-cold molecules whose fixed transition dipoles are randomly oriented wrt. the cavity field direction (green), or (right) atoms under the influence of an additional external field that breaks their symmetry, such as the magnetic field around a wire (red).
for the usual Dicke model. When the width of the disorder is sufficiently high, there is a finite temperature transition from the SP to the NP even if the mean of the coupling strength is zero. We can predict the critical temperature of this transition analytically, signatures for which are also seen in the mutual information found numerically.
Earlier studies of disorder in the Dicke model considered the multi-mode case [31]. In contrast, we sketch several possible realisations of disorder in the _single mode_ Dicke model. It can naturally arise in semiconductor quantum dot lattices (see for e.g. [32]), where each quantum dot can have a varied orientation relative to propagating electric fields, yet due to the small structure all dots effectively radiate into a single mode, causing superradiance. One can also engineer controlled realisations, by transforming a random spatial distribution of atoms within an optical cavity [33] relative to a varying electric field amplitude into a distribution of couplings. Other possibilities include ultra-cold molecules whose fixed transition dipoles are randomly oriented with respect to the cavity field direction.
The organization of the article is as follows. In the next section we will discuss the system Hamiltonian for the disordered Dicke model. In section III and IV we present our results regarding the two types of phase transitions: QPT and TPT. In section V we outline several possible experimental realisations. Finally in section VI we provide a summary of our work.
## II Model Hamiltonian and quantifiers
In Fig. 1 we show a schematic of the disordered Dicke model. The Hamiltonian consists of a single-mode bosonic field coupled to \(N\) atoms with a coupling strength that is modeled as a random variable. The Hamiltonian can be written as
\[H=\omega a^{\dagger}a+\frac{\omega_{0}}{2}\sum_{i=1}^{N}\sigma_{z}^{(i)}+\frac {1}{\sqrt{N}}(a^{\dagger}+a)\sum_{i=1}^{N}g_{i}\sigma_{x}^{(i)}, \tag{1}\]
where the operators \(a\) and \(a^{\dagger}\) are the bosonic annihilation and creation operators respectively, following the commutation relation \([a,a^{\dagger}]=1\), and \(J_{x,z}=\sum_{i=1}^{N}\frac{1}{2}\sigma_{x,z}^{(i)}\) are the angular momentum operators of a pseudospin with length \(j\), composed of \(N=2j\) spin-\(\frac{1}{2}\) atoms described by Pauli matrices \(\sigma_{x,z}^{(i)}\) acting on site \(i\). Here, the \(g_{i}\)'s are random numbers drawn from two types of distributions. In the first distribution, the \(g_{i}\)'s are drawn from a uniform box distribution with finite width (\(2\epsilon\)) and height \(A\) such that \(2\epsilon A=1\). The box spans \((\mu-\epsilon,\mu+\epsilon)\), so that \(2\epsilon=(\mu+\epsilon)-(\mu-\epsilon)\), \(\epsilon=\sqrt{3}\sigma\) and hence \(A=\frac{1}{2\sqrt{3}\sigma}\), where \(\mu\) and \(\sigma\) are the mean and the standard deviation. In the second distribution, we consider \(g_{i}\propto\cos\theta_{i}\), where \(\theta_{i}\) are angles randomly drawn from a Gaussian distribution \(p(\theta)\sim\exp[-(\theta-\theta_{0})^{2}/\sigma_{\theta}^{2}]\). Both can be engineered e.g. in optical cavities as sketched in Fig. 1 and discussed in section V. Due to the disorder in the coupling strengths, \(J^{2}\) is not a conserved quantity of the Hamiltonian (1) and hence we have to consider all possible spin configurations. For \(N\) spins, the corresponding dimension of the spin sub-space is \(2^{N}\) and the bosonic sub-space dimension is \(n_{\rm max}+1\), where \(n_{\rm max}\) is the maximal occupation we allow for the bosonic field. Hence the total Hilbert space dimension for our numerical calculations is \(N_{D}=2^{N}(n_{\rm max}+1)\).
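For small systems this construction is direct to set up numerically. Below is a minimal numpy sketch of Eq. (1) on the truncated Hilbert space; the parameter values in the usage lines are arbitrary illustrations, not the settings used for our figures.

```python
import numpy as np
from functools import reduce

def disordered_dicke_H(g, omega=1.0, omega0=1.0, n_max=10):
    """Dense Hamiltonian of Eq. (1) for couplings g = [g_1, ..., g_N],
    on the 2**N * (n_max + 1)-dimensional truncated Hilbert space."""
    N = len(g)
    a = np.diag(np.sqrt(np.arange(1, n_max + 1)), k=1)   # boson annihilation operator
    Ib = np.eye(n_max + 1)
    sx = np.array([[0., 1.], [1., 0.]])
    sz = np.array([[1., 0.], [0., -1.]])
    I2 = np.eye(2)

    def spin_op(op, site):                               # op acting on one spin only
        return reduce(np.kron, [op if k == site else I2 for k in range(N)])

    H = omega * np.kron(a.T @ a, np.eye(2 ** N))
    H += 0.5 * omega0 * np.kron(Ib, sum(spin_op(sz, i) for i in range(N)))
    H += np.kron(a + a.T, sum(g[i] * spin_op(sx, i) for i in range(N))) / np.sqrt(N)
    return H

# One disorder realization drawn from the uniform box distribution.
rng = np.random.default_rng(0)
mu, sigma, N, n_max = 0.3, 0.2, 4, 10
g = rng.uniform(mu - np.sqrt(3) * sigma, mu + np.sqrt(3) * sigma, N)
E, V = np.linalg.eigh(disordered_dicke_H(g, n_max=n_max))
psi0 = V[:, 0]                                           # interacting ground state
n_op = np.kron(np.diag(np.arange(n_max + 1.)), np.eye(2 ** N))
print("E_G =", E[0], " <a^dag a> =", psi0 @ n_op @ psi0)
```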
In the next sections we explore the QPT and TPT separately, based on the properties of eigenvalues and eigenstates of Eq. (1). We study useful quantifiers such as ground state energy, average boson number and mutual information between two spins. For a mixed state (like a temperature equilibriated state), the mutual information has been shown [8] to be an appropriate quantity, although it contains both quantum and classical correlations. We shall use the mutual information [8; 26; 27; 28; 34] between two spins from a mixed density matrix. Defining the reduced density matrices of any two selected spins to be \(\rho_{1}\) and \(\rho_{2}\) and the reduced density matrix corresponding to the two-spin state to be \(\rho_{12}\), the mutual information between the two spins can be computed using the relation:
\[I_{12}=S_{1}+S_{2}-S_{12}, \tag{2}\]
where \(S_{1,2}=-Tr(\rho_{1,2}\ln(\rho_{1,2}))\), \(S_{12}=-Tr(\rho_{12}\ln(\rho_{12}))\) are the corresponding von Neumann entropies. Since we will be interested in \(I_{12}\) at finite temperature, we will first construct the total thermal density matrix \(\rho_{\rm Th}=e^{-\frac{H}{k_{B}T}}/Z\), and then trace over the bosonic subspace and the remaining \((N-2)\) or \((N-1)\) spins. Since we will average over the disorder, it does not matter which two spins are considered for the purpose of computing mutual information. Another useful observable that we use to study the QPT is the average boson number \(\langle a^{\dagger}a\rangle\) evaluated in the interacting ground state.
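A compact sketch of this procedure follows, reusing disordered_dicke_H from the sketch above with the same boson-then-spins ordering; the partial-trace helper is our own illustrative implementation, and we set \(k_{B}=1\).

```python
import numpy as np

def thermal_state(H, T):
    """Gibbs state exp(-H/(k_B T))/Z with k_B = 1, built from the spectrum of H."""
    E, V = np.linalg.eigh(H)
    w = np.exp(-(E - E.min()) / T)
    w /= w.sum()
    return (V * w) @ V.conj().T

def reduced_spins(rho, N, n_max, keep):
    """Trace out the boson and all spins not in `keep` (boson (x) spins ordering)."""
    dims = [n_max + 1] + [2] * N
    rho = rho.reshape(dims + dims)
    traced = 0
    for s in range(N + 1):                 # subsystem 0 is the boson
        if s > 0 and (s - 1) in keep:
            continue
        ax = s - traced
        rho = np.trace(rho, axis1=ax, axis2=ax + rho.ndim // 2)
        traced += 1
    d = 2 ** len(keep)
    return rho.reshape(d, d)

def entropy(rho):
    p = np.linalg.eigvalsh(rho)
    p = p[p > 1e-12]
    return float(-(p * np.log(p)).sum())

def mutual_information(rho, N, n_max, i=0, j=1):       # Eq. (2)
    S1 = entropy(reduced_spins(rho, N, n_max, [i]))
    S2 = entropy(reduced_spins(rho, N, n_max, [j]))
    S12 = entropy(reduced_spins(rho, N, n_max, [i, j]))
    return S1 + S2 - S12
```

For instance, mutual_information(thermal_state(H, T), N, n_max) evaluates \(I_{12}\) at temperature T for a Hamiltonian built as in the previous sketch.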
## III Quantum phase transition
It is well known that in the thermodynamic limit (when the atom number \(N\rightarrow\infty\)), the usual Dicke model exhibits a quantum phase transition [4] from the normal phase to the super-radiant phase at some critical coupling strength \(g_{c}\). In the disordered Dicke model, if we fix the mean of the coupling strength at a sufficiently low value and vary the standard deviation (\(\sigma\)) we see a similar QPT. The QPT here is studied with the aid of the disorder-averaged energy and average boson number in the ground state. In Figs. 2 and 4 we show these properties in the ground-state of the disordered Dicke model, considering two types of distributions as discussed in the previous section.
What we will empirically show now is that much of the behavior of the disordered Dicke model can be understood by averaging known results for the disorder-free
(clean) model. This is not clear a priori, since all the two-level systems in the disordered model couple to the same bosonic mode, and are thus coupled to each other. For the clean Dicke model, Emary et al. [2] have derived analytical results for the ground state energy:
\[E_{G}=\begin{cases}-\frac{N\omega_{0}}{2},&g<g_{c}\\ -\frac{N\omega_{0}}{4}\left[\frac{g^{2}}{g_{c}^{2}}+\frac{g_{c}^{2}}{g^{2}} \right],&g>g_{c}\end{cases} \tag{3}\]
and the average boson number in the cavity:
\[\langle a^{\dagger}a\rangle=\begin{cases}0,&g<g_{c}\\ \frac{N}{\omega^{2}}\left[g^{2}-\frac{g_{c}^{4}}{g^{2}}\right]&g>g_{c}\end{cases} \tag{4}\]
where \(g_{c}\) is the critical value of the coupling in the absence of disorder. We will make use of the above results and integrate over the coupling strength distribution to obtain approximate analytical results for the disordered Dicke model. We denote the disorder-averaged value of an observable \(O\) as \(\overline{O}\):
\[\overline{O}=\int\limits_{x_{1}}^{x_{2}}P(g)O(g)dg, \tag{5}\]
where \(P(g)\) is the distribution of the disorder and the limits of integration \(x_{1}\) and \(x_{2}\) have to be chosen appropriately according to the observable and the distribution being considered.
### Uniform distribution
In the first scenario the coupling \(g\) is drawn from a uniform distribution:
\[P_{u}(g)=\begin{cases}\frac{1}{2\sqrt{3}\sigma}&\text{if}\quad\mu-\sqrt{3} \sigma<g<\mu+\sqrt{3}\sigma\\ 0&\text{otherwise}\end{cases} \tag{6}\]
with mean \(\mu\) and standard deviation \(\sigma\).
The disorder-averaged ground state energy and average boson number, are found from the integrals:
\[\overline{E_{G}} = \int\limits_{x_{1}}^{x_{2}}P_{u}(g)E_{G}dg, \tag{7}\] \[\overline{\langle a^{\dagger}a\rangle} = \int\limits_{x_{1}}^{x_{2}}P_{u}(g)\langle a^{\dagger}a\rangle dg, \tag{8}\]
where \(E_{G}\) is given in Eqn. 3, \(\langle a^{\dagger}a\rangle\) is given in Eq. 4, and we use the overline to denote the disorder average. The lower and upper limits of the box distribution are: \(x_{1}=\mu-\sqrt{3}\sigma\) and \(x_{2}=\mu+\sqrt{3}\sigma\) respectively and we consider \(\mu\) and \(\sigma\) to be in the range: \([0,1]\).
For the NP (\(|g|\leq g_{c}\)) we find (Appendix A):
\[\overline{E_{G}} = -\frac{N\omega_{0}}{2}, \tag{9}\] \[\overline{\langle a^{\dagger}a\rangle} = 0. \tag{10}\]
On the other hand, for the SP (\(|g|>g_{c}\)), we have:
\[\overline{E_{G}} = -\frac{N}{2\sqrt{3}\sigma}\left[\frac{x_{2}^{3}}{3}-\frac{g_{c}^{ 4}}{x_{2}}+\frac{2g_{c}}{3}\right], \tag{11}\] \[\overline{\langle a^{\dagger}a\rangle} = \frac{N}{2\sqrt{3}\sigma\omega^{2}}\left[\frac{x_{2}^{3}}{3}+\frac {g_{c}^{4}}{x_{2}}-\frac{4g_{c}^{3}}{3}\right]. \tag{12}\]
Taylor expanding around the critical point \(g_{c}\) and considering only the dominant terms, we have:
\[\overline{E_{G}} \approx -\frac{N\omega_{0}}{2}-\frac{AN\omega_{0}}{2}(x_{2}-g_{c})-1.33 AN\omega_{0}(x_{2}-g_{c})^{3}, \tag{13}\]
\[\overline{\langle a^{\dagger}a\rangle} \approx \frac{AN}{\omega^{2}}(x_{2}-g_{c})^{2}-\frac{0.667AN}{\omega^{2}}( x_{2}-g_{c})^{3}, \tag{14}\]
where \(A=\frac{1}{2\sqrt{3}\sigma}\) and \(x_{2}\) is the upper limit of the integration \(\mu+\sqrt{3}\sigma\). At the critical point the ground state energy is \(-\frac{N\omega_{0}}{2}\) and the average boson number is zero, hence we have a relation for the critical line as a function of \(\mu\) and \(\sigma\):
\[\mu+\sqrt{3}\sigma=\frac{1}{2}. \tag{15}\]
Fig. 2(a) shows the numerical value of the ground state energy \(E_{\text{G}}\) of the system as a function of the standard deviation (\(\sigma\)) and mean (\(\mu\)) of the coupling parameter. Our goal is to check the validity of the equation for the critical line marking the QPT (Eq. 15) numerically. In this figure the white/pink color indicates the normal phase where the ground state energy is large and constant, \(E_{\text{G}}=-\frac{N\omega_{0}}{2}\) and the other colors represent the super-radiant phase where \(E_{\text{G}}\) is decreasing. Similarly Fig. 2(b) shows the average boson number in the ground
Figure 2: Phase diagram of the disordered Dicke model with uniform coupling distribution (Eq. (6)). To map it out, we show (a) the ground state energy \(E_{\text{G}}\) and (b) the average boson number, \(\langle a^{\dagger}a\rangle\) wrt. the ground state, as a function of the standard deviation \(\sigma\) and the mean \(\mu\) of the coupling parameters \(g_{i}\). We consider the resonant case: \(\omega=\omega_{0}=1\), take the average over 120 realizations, and fix the atom number to be \(N=8\), the bosonic cut-off to be \(n_{\text{max}}=40\).
state of the disordered Dicke model. In the normal phase \(\overline{\langle a^{\dagger}a\rangle}\approx 0\) (black color), i.e. there are no excitations in the bosonic mode whereas in the super-radiant phase \(\overline{\langle a^{\dagger}a\rangle}\) is finite (other colors), which indicates a macroscopic excitation of the bosonic mode. The dash-dotted line here represents the quantum critical line which is given in Eq. 15 and our numerical data already roughly agrees with this linear relation. It is remarkable that the formula describes the numerical data this well, despite the coarse approximation of just disorder-averaging the clean Dicke model results. Around the critical line the expectation value (with respect to the uniform disorder) of the ground state energy and the average boson number can be represented by the simpler Taylor series in Eq. 13 and Eq. 14 respectively. It is clear that for \(\mu=0\), the standard deviation of the disordered Dicke model plays the same role as the coupling parameter \(g\) in the usual Dicke model and the critical point is \(\sigma_{c}=\frac{g_{c}}{\sqrt{3}}\) within this crude approximation.
The line that separates the NP and the SP in Fig. 2 can also be obtained approximately with the aid of a heuristic argument that exploits a symmetry of the Hamiltonian. We observe that the Hamiltonian in Eqn. 1 has the same eigenvalues as one in which any one of the couplings \(g_{i}\) is changed to \(-g_{i}\). In other words, the eigenvalues of \(H(\{g_{j},j\neq i\},g_{i})\) and \(H(\{g_{j},j\neq i\},-g_{i})\) are the same. This is a direct consequence of the fact that
\[H(\{g_{j},j\neq i\},-g_{i})=\sigma_{i}^{z}H(\{g_{j},j\neq i\},g_{i})\sigma_{i} ^{z}. \tag{16}\]
Thus when the transformation \(T=\sigma_{i}^{z}\) is applied on any eigenstate of the Hamiltonian \(H(\{g_{j},j\neq i\},g_{i})\), we would get an eigenstate of the Hamiltonian \(H(\{g_{j},j\neq i\},-g_{i})\) with the same eigenvalue. This argument naturally extends to the case when multiple \(g_{i}\)'s undergo a sign change. Hence we can consider a scenario where all the coupling strengths are made positive, i.e. if there are any negative coupling strengths, we simply take their absolute values. Hence when the lower limit of the uniform distribution \(\mu-\sqrt{3}\sigma<0\) the effective distribution is:
\[P_{\text{eff}}(g)=\begin{cases}\frac{1}{\sqrt{3}\sigma}&\text{if}\quad 0<g<-(\mu- \sqrt{3}\sigma)\\ \frac{1}{2\sqrt{3}\sigma}&\text{if}-(\mu-\sqrt{3}\sigma)<g<(\mu+\sqrt{3}\sigma ).\end{cases} \tag{17}\]
as shown in Fig. 3(a). The effective distribution in this case yields a mean value of \(\langle g\rangle=\frac{\mu^{2}+3\sigma^{2}}{2\sqrt{3}\sigma}\) and a second moment of \(\langle g^{2}\rangle=\mu^{2}+\sigma^{2}\), which in turn corresponds to a standard deviation of \(\sqrt{\mu^{2}+\sigma^{2}-\frac{(\mu^{2}+3\sigma^{2})^{2}}{12\sigma^{2}}}\). If the lower limit of the distribution \(\mu-\sqrt{3}\sigma\geq 0\), the effective distribution remains identical to the original one and its mean and standard deviation remain unchanged as \(\mu\) and \(\sigma\) (Fig. 3(b)).
To identify the phase transition line heuristically, we argue as follows. We would expect that as more and more of the couplings \(g_{i}\) are drawn above \(g_{c}\), we would see increasingly dominant effects characteristic of the SP. A coarse way to identify this would be to simply demand that the right most edge of the effective distribution (Eqn. 17) must be above the critical coupling \(g_{c}=\frac{1}{2}\), i.e.
\[\mu+\sqrt{3}\sigma=\frac{1}{2}, \tag{18}\]
which is nothing but the crude approximation Eqn. 15 and dot-dashed line in Fig. 2. For a refined result, we demand that the mean of the effective distribution (Eqn. 17) must reach above \(g_{c}\)
\[\frac{\mu^{2}+3\sigma^{2}}{2\sqrt{3}\sigma} =\frac{1}{2},\quad\mu<\sqrt{3}\sigma\] \[\mu =\frac{1}{2},\quad\mu\geq\sqrt{3}\sigma. \tag{19}\]
This is shown by the dashed line in Fig. 2. A less stringent condition is to demand that the mean plus one standard deviation of the effective distribution (Eqn. 17) must reach above \(g_{c}\)
\[\frac{\mu^{2}+3\sigma^{2}}{2\sqrt{3}\sigma}+\sqrt{\mu^{2}+\sigma^{2} -\frac{(\mu^{2}+3\sigma^{2})^{2}}{12\sigma^{2}}} =0.5,\quad\mu<\sqrt{3}\sigma\] \[\mu+\sigma =0.5,\quad\mu\geq\sqrt{3}\sigma. \tag{20}\]
This is shown by the solid white line in Fig. 2 and appears to be closest to the actual line of separation between the SP and NP.
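These moments of the effective distribution are easy to verify numerically; the quick Monte Carlo check below (with arbitrary example values of \(\mu\) and \(\sigma\) satisfying \(\mu<\sqrt{3}\sigma\)) is a sketch of such a test.

```python
import numpy as np

rng = np.random.default_rng(1)
mu, sigma = 0.1, 0.3                       # example values with mu < sqrt(3)*sigma
g = rng.uniform(mu - np.sqrt(3) * sigma, mu + np.sqrt(3) * sigma, 1_000_000)
g_eff = np.abs(g)                          # spectra are invariant under g_i -> -g_i

mean_pred = (mu**2 + 3 * sigma**2) / (2 * np.sqrt(3) * sigma)
std_pred = np.sqrt(mu**2 + sigma**2 - mean_pred**2)
print(g_eff.mean(), mean_pred)             # both ~0.269
print(g_eff.std(), std_pred)               # both ~0.166
```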
Figure 3: Effective distribution containing only positive coupling strengths. The left panel denotes the original distribution, whereas the right panel shows the effective distribution where only the absolute values of the coupling strengths are considered. (a) For \(\mu-\sqrt{3}\sigma<0\), the effective mean of the coupling strength is \(\langle g\rangle=\frac{\mu^{2}+3\sigma^{2}}{2\sqrt{3}\sigma}\) and the effective second moment is \(\langle g^{2}\rangle=\mu^{2}+\sigma^{2}\); hence the effective standard deviation is \(\sqrt{\mu^{2}+\sigma^{2}-\frac{(\mu^{2}+3\sigma^{2})^{2}}{12\sigma^{2}}}\). (b) For \(\mu-\sqrt{3}\sigma\geq 0\) the distribution remains unchanged: the mean and the standard deviation of \(g\) are \(\mu\) and \(\sigma\). For both cases \(A=\frac{1}{2\sqrt{3}\sigma}\).
### Gaussian distribution
To demonstrate the robustness of our results to variations of the detailed shape of the probability distribution for the coupling, we now consider a second case. The angle \(\theta\) is drawn from the Gaussian distribution:
\[P(\theta)\propto e^{-(\theta-\theta_{0})^{2}/\sigma_{\theta}^{2}}, \tag{21}\]
where \(\theta_{0}\in[0,\pi]\) is the mean and \(\sigma_{\theta}\in[0,\frac{\pi}{4}]\) is the standard deviation of \(\theta\) and the disordered coupling strength for the \(i^{\text{th}}\) spin is then taken as:
\[g_{i}=2\cos\theta_{i}. \tag{22}\]
Here, we numerically calculate the mean and the standard deviation of \(g\): \(\mu=\langle g\rangle=\frac{1}{N}\sum_{i=1}^{N}g_{i}\) and \(\sigma=\sqrt{\langle g^{2}\rangle-\langle g\rangle^{2}}\), to obtain characteristics of the distribution that are easily comparable with the previous section. The ground state energy \(E_{\text{G}}\) and the average boson number for this distribution are shown in Fig. 4 and we see a behavior similar to the uniform distribution used in Fig. 2.
## IV Thermal phase transition
Moving from the quantum to the thermal phase transition, in this section we derive an analytical expression for the critical temperature for the disordered Dicke model building on previous results [30, 9] for the clean Dicke model. We start by rewriting the system Hamiltonian for the disordered Dicke model as:
\[\tilde{\mathcal{H}} =\frac{\mathcal{H}}{\omega}\] \[=a^{\dagger}a+\sum_{j=1}^{N}\frac{\epsilon}{2}\sigma_{j}^{z}+ \frac{1}{\sqrt{N}}(a+a^{\dagger})\sum_{j=1}^{N}\lambda_{j}\sigma_{j}^{x} \tag{23}\] \[=a^{\dagger}a+\sum_{j=1}^{N}h_{j}. \tag{24}\]
where \(\epsilon=\frac{\omega_{0}}{\omega}\), \(\lambda_{j}=\frac{g_{j}}{\omega}\) and
\[h_{j}=\frac{\epsilon}{2}\sigma_{j}^{z}+\frac{1}{\sqrt{N}}(a+a^{\dagger}) \lambda_{j}\sigma_{j}^{x}. \tag{25}\]
Following Wang and Hioe [9], who studied the Dicke model within the rotating wave approximation, we compute the partition function as:
\[Z(N,T)=\sum_{s_{1},...,s_{N}=\pm 1}\int\frac{d^{2}\alpha}{\pi} \langle s_{1}...s_{N}|\langle\alpha|e^{-\beta\tilde{\mathcal{H}}}|\alpha \rangle|s_{1}...s_{N}\rangle\] \[=\int\frac{d^{2}\alpha}{\pi}e^{-\beta|\alpha|^{2}}\Pi_{j=1,2,..., N}\sum_{s_{j}=\pm 1}\langle s_{j}|e^{-\beta h_{j}}|s_{j}\rangle\] \[=\int\frac{d^{2}\alpha}{\pi}e^{-\beta|\alpha|^{2}}\Pi_{j=1,2,..., N}\Big{(}2\cosh\Big{[}\frac{\beta\epsilon}{2}\Big{[}1+\frac{16\lambda_{j}^{2} \alpha^{2}}{\epsilon^{2}N}\Big{]}^{1/2}\Big{]}\Big{)}. \tag{26}\]
Here \(|\alpha\rangle\) is a coherent state which satisfies the relation: \(a|\alpha\rangle=\alpha|\alpha\rangle\) and \(|s_{1}...s_{N}\rangle\) is the product basis for the spin subspace. In polar coordinates the partition function becomes:
\[Z(N,T)=\int\limits_{0}^{\infty}rdre^{-\beta r^{2}}\Pi_{j=1,2,..., N}\Big{(}2\cosh\Big{[}\frac{\beta\epsilon}{2}\Big{[}1+\frac{16\lambda_{j}^{2}r^{2}}{ \epsilon^{2}N}\Big{]}^{1/2}\Big{]}\Big{)}. \tag{27}\]
Figure 4: Phase diagram of the disordered Dicke model where \(g_{i}=2\cos\theta_{i}\), \(\theta_{i}\) are angles randomly drawn from a Gaussian distribution with mean \(\theta_{0}\) and standard deviation \(\sigma_{\theta}\). To map it out, we show (a) the ground state energy \(E_{\text{G}}\) and (b) the average boson number, \(\langle a^{\dagger}a\rangle\) wrt. the ground state, as a function of the standard deviation \(\sigma\) and the mean \(\mu\) of the coupling parameters \(g_{i}\). We consider the resonant case: \(\omega=\omega_{0}=1\) and take the average over 200 realizations, and fix the atom number to be \(N=8\), and the bosonic cut-off to be \(n_{\text{max}}=40\).
Defining the variable \(y=\frac{r^{2}}{N}\) allows us to rewrite the above integral as:
\[Z(N,T) =N\int\limits_{0}^{\infty}dye^{-\beta Ny}\Pi_{j=1,2,\ldots,N}\Big{(} 2\cosh\Big{[}\frac{\beta\epsilon}{2}\Big{[}1+\frac{16{\lambda_{j}}^{2}y}{ \epsilon^{2}}\Big{]}^{1/2}\Big{]}\Big{)}\] \[=N\int\limits_{0}^{\infty}dy\exp\Big{(}-\beta Ny+\sum\limits_{j= 1}^{N}\log\Big{[}\Big{(}2\cosh\Big{[}\frac{\beta\epsilon}{2}\Big{[}1+\frac{16{ \lambda_{j}}^{2}y}{\epsilon^{2}}\Big{]}^{1/2}\Big{]}\Big{)}\Big{]}\Big{)}. \tag{28}\]
We can write this more compactly as
\[Z(N,T)=N\int\limits_{0}^{\infty}dy\exp\Big{(}\phi_{N}(y)\Big{)} \tag{29}\]
using a shorthand:
\[\phi_{N}(y)=-\beta Ny+\sum\limits_{j=1}^{N}\log\Big{[}\Big{(}2\cosh\Big{[} \frac{\beta\epsilon}{2}\Big{[}1+\frac{16{\lambda_{j}}^{2}y}{\epsilon^{2}} \Big{]}^{1/2}\Big{]}\Big{)}\Big{]}.\]
for the exponent. We evaluate the above integral using the method of steepest descent, for which we need the point at which \(\phi_{N}(y)\) is a maximum. To find this, we compute the derivative:
\[\frac{d\phi_{N}(y)}{dy}=-\beta N+\frac{4\beta}{\epsilon}\sum\limits_{j}\frac{ \lambda_{j}^{2}}{\eta_{j}}\tanh\Big{(}\frac{\beta\epsilon\eta_{j}}{2}\Big{)}, \tag{30}\]
where we are using the shorthand notation
\[\eta_{j}=\Big{[}1+\frac{16{\lambda_{j}}^{2}y}{\epsilon^{2}}\Big{]}^{1/2}. \tag{31}\]
Figure 5: Thermal phase diagrams of the disordered Dicke model, based on the mutual information between two spins. Axes are the temperature \(T\) and mean coupling strength \(\mu=\langle g\rangle\), for (a) \(\sigma=0.2\), (b) \(\sigma=0.4\), (c) \(\sigma=0.5\), (d) \(\sigma=0.8\). The couplings \(g\) are drawn from a random uniform distribution with finite mean \(\mu\) and standard deviation \(\sigma\) (see Eq. 6). The number of atoms is \(N=6\) and we choose the bosonic cut-off as \(n_{\rm max}=20\). We take the average over 824 realizations of \(g\) for each \(\sigma\).
Figure 6: Thermal phase diagrams of the disordered Dicke model, based on the mutual information between two spins. Axes are the standard deviation \(\sigma\) and mean coupling strength \(\mu=\langle g\rangle\), for (a) \(T=0.1\), (b) \(T=1\), (c) \(T=1.5\), (d) \(T=2\). The couplings \(g_{i}=2\cos\theta_{i}\), where \(\theta_{i}\) are angles randomly drawn from a Gaussian distribution with mean \(\theta_{0}\) and standard deviation \(\sigma_{\theta}\). The number of atoms is \(N=6\) and we choose the bosonic cut-off as \(n_{\rm max}=20\). We take the average over 96 realizations of \(g\) for each temperature.
A vanishing derivative, \(\frac{d\phi_{N}(y)}{dy}=0\), implies:
\[0=-\beta N+\frac{4\beta}{\epsilon}\sum_{j}\frac{\lambda_{j}^{2}}{\eta_{j}}\tanh \Big{(}\frac{\beta\epsilon\eta_{j}}{2}\Big{)}. \tag{32}\]
Following the intuition from the corresponding calculation for the clean Dicke model, we argue that the critical value of the inverse temperature must correspond to the case when all the \(\eta_{j}\) take their minimum possible value, namely unity. Inserting \(\eta_{j}=1\), we have:
\[0=-\beta_{c}N+\frac{4\beta_{c}}{\epsilon}\sum_{j}\lambda_{j}^{2}\tanh\Big{(} \frac{\beta_{c}\epsilon}{2}\Big{)} \tag{33}\]
which can be reshaped into
\[\sum_{j}\lambda_{j}^{2}\tanh\Big{(}\frac{\beta_{c}\epsilon}{2}\Big{)}=\frac{N \epsilon}{4}. \tag{34}\]
This gives us
\[\tanh\Big{(}\frac{\beta_{c}\epsilon}{2}\Big{)}=\frac{\epsilon}{4\frac{\sum _{j}\lambda_{j}^{2}}{N}}=\frac{\epsilon}{4\langle\lambda^{2}\rangle}. \tag{35}\]
Substituting for the expressions for \(\epsilon\) and \(\lambda\), we obtain an expression for the transition temperature:
\[T_{c}=\frac{\omega_{0}}{2\omega}\frac{1}{\tanh^{-1}\Big{(}\frac{\omega_{0} \omega}{4\langle g^{2}\rangle}\Big{)}}. \tag{36}\]
To verify this expression for the critical temperature, we numerically study the mutual information between two spins, which has been shown to be a useful marker for the thermal phase transition in the Dicke model [8; 29]. In Fig. 5, we show the mutual information between two spins as a function of the mean coupling strength \(\mu\) and the temperature \(T\), for different standard deviations: \(\sigma=0.2,\ 0.4,\ 0.5,\ 0.8\). For \(\sigma=0.2\) the phase diagram is almost identical to the one for the usual DM (see Fig. 4(c) of our earlier work [8]). For \(\mu<\frac{1}{2}\) the system lies in the normal phase, which gives rise here to the black color; for \(\mu>\frac{1}{2}\), there is a thermal phase transition from the super-radiant phase (light color) to the normal phase around some critical temperature. In this figure the red dashed line denotes the analytical critical temperature of Eq. 36. We can see that it describes the numerical results well. If the standard deviation is increased, it is clear from Fig. 5 (b) (\(\sigma=0.4\)) and (c) (\(\sigma=0.5\)) that the thermal phase transition starts at lower mean values \(\mu\) than \(\mu=g_{c}\). Finally, for sufficiently wide coupling distributions, with e.g. \(\sigma=0.8\), there is a clear TPT from the SP to the NP even for vanishing mean coupling strength \(\mu=0.0\). Hence, we can conclude from Fig. 5 that if we introduce disorder with a sufficiently broad distribution into the coupling strength between spins and bosons, there exists a TPT even for vanishing mean coupling \(\mu=\langle g\rangle=0\).
In Fig. 6, we show similar data, but using the distribution based on angles, Eq. (21)-Eq. (22). We again show the mutual information between two spins as a function of the mean and the standard deviation of the spin-boson coupling strength for fixed temperatures. Here \(\theta\) is a random number drawn from a Gaussian distribution and \(g=2\cos\theta\), as described in subsection III.2. In the normal phase the mutual information is small, shown by the black color. On the other hand, in the super-radiant phase \(I_{12}\) is relatively high, which is represented by the other colors. One can notice that as the temperature is gradually increased, the normal phase also expands in parameter space. In this figure the white dashed curves represent the critical values of \(\sigma\) and \(\mu\) for the TPT that we derived analytically in Eq. (36), which separate the normal and super-radiant phases quite well.
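The critical line of Eq. (36) is straightforward to evaluate; below is a minimal sketch for the uniform distribution, where \(\langle g^{2}\rangle=\mu^{2}+\sigma^{2}\), with example parameter values chosen arbitrarily.

```python
import numpy as np

def T_c(g2_mean, omega=1.0, omega0=1.0):
    """Critical temperature of Eq. (36); returns None when
    omega*omega0/(4<g^2>) >= 1, i.e. when the system stays in the NP."""
    x = omega * omega0 / (4 * g2_mean)
    return None if x >= 1 else (omega0 / (2 * omega)) / np.arctanh(x)

# Even at mu = 0 a sufficiently wide distribution gives a finite T_c.
for mu, sigma in [(0.6, 0.2), (0.0, 0.8), (0.0, 0.4)]:
    print(mu, sigma, T_c(mu**2 + sigma**2))
# (0.6, 0.2) -> ~0.68, (0.0, 0.8) -> ~1.21, (0.0, 0.4) -> None
```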
## V Realizing disordered couplings in Dicke model with cold atom or ultracold molecules in a cavity
While we intentionally study the abstract model (1) such that it can apply to a variety of systems, we provide in this section some examples for practical realisations. When considering the origin of the Hamiltonian (1) through light matter coupling for two-level systems in a single mode optical cavity, one has, in the dipole approximation [35]
\[g_{i}=\sqrt{\frac{1}{2\hbar\epsilon_{0}\omega}}\omega_{0}u(\mathbf{x}_{i})d_{ 21}\cos\theta_{i}, \tag{37}\]
where \(\epsilon_{0}\) is the vacuum permittivity, \(u(\mathbf{x}_{i})\) the cavity mode amplitude at the location \(\mathbf{x}_{i}\) of the \(i^{th}\) two-level system, and \(d_{21}\cos\theta_{i}\) the transition dipole matrix element between \(\left|\,2\,\right\rangle\) and \(\left|\,1\,\right\rangle\) projected onto the local cavity field axis, where we have made the dependence on the angle \(\theta_{i}\) between the cavity field at \(\mathbf{x}_{i}\) and the transition dipole axis explicit.
Even for identical atoms or molecules, treated as an approximate two-level system (TLS), a random position distribution \(\mathbf{x}_{i}\) can now translate into disordered coupling strengths through the position of the TLS relative to the cavity field structure in \(u(\mathbf{x}_{i})\) that may contain standing waves, which will cause disorder in the field strength. While this can easily be avoided by trapping all atoms on spatial scales small compared to the cavity wavelength [36], one can just as well generate a range of coupling distributions by weakly trapping the atoms on the flanks of a standing wave [37].
For atomic TLSs without any external fields other than the cavity field, there would be no additional contribution from the transition dipole orientation, since we can always choose the quantisation axis along the local cavity mode electric field direction, such that \(\cos\theta_{i}\to 1\). This is no longer true once an additional external field perturbs the symmetry, or the particle is asymmetric, such as most molecules are.
A symmetry breaking field \(\mathbf{B}\) could be magnetic, strong enough to Zeeman-shift undesired magnetic sublevels of the excited state out of cavity resonance and locally defining the quantisation axis. If the cavity is penetrated for example by the circular magnetic field around a current carrying wire, a random 3D distribution of atomic positions will translate into a random distribution of angles between quantisation axis and cavity mode electric field, and hence affect couplings, as sketched in Fig. 1.
Another approach to break the symmetry of the two-level system would be to consider ultra-cold molecules [38; 39] in the optical cavity [40]. Typical hetero-nuclear molecules possess transition dipole moments with a fixed orientation relative to the molecular axis [41]. Molecules oriented randomly in 3D, such as in the ground state with angular momentum \(J=0\) of the quantum mechanical rotor, will thus exhibit a distribution of couplings. Disadvantages of molecules are their vibrational and rotational degrees of freedom, which are undesired here. However, eliminating or minimising the impact of these is also required for quantum information and quantum simulation applications of ultra-cold molecules and aids cooling them, and is thus being actively pursued. Coupling to both degrees of freedom can be strongly suppressed by choosing a molecule with a nearly diagonal Franck-Condon factor [42] between ground and excited state, and a larger angular momentum in the ground state than the excited state [43; 44].
Randomly oriented molecules neatly realise the uniform coupling distribution that we focussed on, since the probability of a given polar angle \(\theta\) is \(P(\theta)=\frac{\sin\theta}{2}\) and hence \(P(\cos\theta)\) will be uniform. Refined distributions can then be tailored by partially orienting molecules along the cavity field axis, e.g. \(P(\theta)=\mathcal{N}e^{-(\theta-\theta_{0})^{2}/\sigma_{\theta}^{2}}\), with \(\theta_{0}\) enforced by an additional external bias field \(\mathbf{E}\) (see Fig. 1).
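The isotropic case is easy to check by sampling; the short sketch below confirms that random 3D orientations make \(P(\cos\theta)\), and hence the couplings, uniform.

```python
import numpy as np

# For isotropic orientations, P(theta) = sin(theta)/2 on [0, pi],
# so cos(theta) is uniform on [-1, 1] and g = 2 cos(theta)
# (up to prefactors) is uniform on [-2, 2].
rng = np.random.default_rng(0)
theta = np.arccos(rng.uniform(-1, 1, 100_000))   # isotropic polar angles
g = 2 * np.cos(theta)
hist, _ = np.histogram(g, bins=8, range=(-2, 2), density=True)
print(hist)                                       # every bin close to 1/4
```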
The implementations of the disordered Dicke model with cold atoms and molecules in cavities that we discussed above can then provide controlled insight, which can be leveraged for understanding the underlying Hamiltonian in more complex and less controlled cases, such as when studying superradiance effects in semi-conductor quantum dot lattices [32]. In this case transition dipoles of carriers in quantum dots are also likely disordered by additional fields and quantum dot geometries, however a clear distinction of such effects from other disorder and decoherence sources will be much more difficult.
## VI Summary and conclusions
In this work, we propose and investigate a disordered single-mode Dicke model. We specifically focus on two concrete random distributions of the spin-boson coupling parameters \(g_{i}\): (i) a uniform distribution, (ii) \(g_{i}\propto\cos\theta_{i}\), where \(\theta_{i}\) are Gaussian random variables and study the resulting quantum and thermal phase transitions in the disordered Dicke model. In both cases we see similar results and hence demonstrate that our results are robust to changes in the detailed shape of the distribution.
We find that the phase transitions depend on both the mean and the standard deviation of the random coupling strengths. For the QPT, we find that for mean coupling strengths significantly smaller than their standard deviation \(\sigma\), the latter plays a role similar to the coupling \(g\) in the clean Dicke model. Even for vanishing mean coupling \(\mu=0\), the system thus shows a QPT around \(\sigma=\frac{0.5}{\sqrt{3}}\) for uniformly distributed couplings. When \(\mu\) is systematically increased, the critical value of \(\sigma\) decreases and after a certain mean coupling (\(=g_{c}\)) the QPT disappears. We derive approximate expressions for the ground state energy and the average boson number around the critical line: \(\mu+\sqrt{3}\sigma=\frac{1}{2}\), for the QPT in the \(\mu-\sigma\) plane, which provide a reasonably good and simple approximation of our numerical results. Exploiting a symmetry of the Dicke model allows us to improve on the expression.
We also derive an analytical expression for the critical temperature and numerically verify it with the aid of mutual information between two spins. It shows that for wide distributions, such that \(\sigma\) is large, there is a phase transition from SP to NP at \(\sigma\approx 0.8\) even for vanishing mean coupling strength, \(\mu=0\).
The disordered Dicke model should describe quantum dot superlattices in semiconductor quantum optics (see e.g. [32]). Additionally, we list several methods by which the disordered Dicke model can be realized in experiments with ultracold atoms or molecules in a cavity.
###### Acknowledgements.
We are grateful to the High Performance Computing (HPC) facility at IISER Bhopal, where large-scale calculations in this project were run. P.D. is grateful to IISERB for the PhD fellowship. A.S. acknowledges financial support from SERB via the grant (File Number: CRG/2019/003447), and from DST via the DST-INSPIRE Faculty Award [DST/INSPIRE/04/2014/002461].
## Appendix A The uniform distribution
Consider the scenario where the coupling \(g\) is drawn from a uniform distribution:
\[P_{u}(g)=\begin{cases}\frac{1}{2\sqrt{3}\sigma}&\text{if}\quad\mu-\sqrt{3} \sigma<g<\mu+\sqrt{3}\sigma\\ 0&\text{otherwise}\end{cases} \tag{A1}\]
with mean \(\mu\) and standard deviation \(\sigma\).
To calculate the disorder-averaged ground state energy and average boson number, we have to evaluate:
\[\overline{E_{G}} =\int\limits_{x_{1}}^{x_{2}}P_{u}(g)E_{G}\,dg, \tag{A2}\] \[\overline{\langle a^{\dagger}a\rangle} =\int\limits_{x_{1}}^{x_{2}}P_{u}(g)\langle a^{\dagger}a\rangle\,dg, \tag{A3}\]
where \(E_{G}\) is given in Eq. (3) and \(\langle a^{\dagger}a\rangle\) in Eq. (4). The lower and upper limits of the box distribution are \(x_{1}=\mu-\sqrt{3}\sigma\) and \(x_{2}=\mu+\sqrt{3}\sigma\), respectively, and we consider \(\mu\) and \(\sigma\) in the range \([0,1]\). Depending on how \(x_{1}\) and \(x_{2}\) relate to \(g_{c}\), there are five cases to be considered. After performing the integration outlined above, we obtain an expression for the disorder-averaged ground state energy:
\[\overline{E_{G}}=\begin{cases}D_{1}\left[\frac{2g_{c}^{3}}{3}-\frac{x_{1}^{3}}{3}+\frac{g_{c}^{4}}{x_{1}}\right]&x_{1}<-g_{c}\text{ and }0<x_{2}\leq g_{c}\\ D_{1}\left[\frac{4g_{c}^{3}}{3}+\frac{1}{3}\left(x_{2}^{3}-x_{1}^{3}\right)+g_{c}^{4}\left(\frac{1}{x_{1}}-\frac{1}{x_{2}}\right)\right]&x_{1}<-g_{c}\text{ and }x_{2}>g_{c}\\ -\frac{N\omega_{0}}{2}&|x_{1}|<g_{c}\text{ and }0<x_{2}\leq g_{c}\\ D_{1}\left[\frac{x_{2}^{3}}{3}-\frac{g_{c}^{4}}{x_{2}}+\frac{2g_{c}^{3}}{3}\right]&|x_{1}|<g_{c}\text{ and }x_{2}>g_{c}\\ D_{1}\left[\frac{1}{3}\left(x_{2}^{3}-x_{1}^{3}\right)+g_{c}^{4}\left(\frac{1}{x_{1}}-\frac{1}{x_{2}}\right)\right]&x_{1},\ x_{2}>g_{c}\text{ and }\ x_{1}<x_{2}\end{cases} \tag{A4}\]
where \(D_{1}=-\frac{N}{2\sqrt{3}\sigma\omega^{2}}\). For the average boson number the disorder-averaged expression is:
\[\overline{\langle a^{\dagger}a\rangle}=\begin{cases}D_{2}\left[-\frac{4g_{c}^{3}}{3}-\frac{x_{1}^{3}}{3}-\frac{g_{c}^{4}}{x_{1}}\right]&x_{1}<-g_{c}\text{ and }0<x_{2}\leq g_{c}\\ D_{2}\left[-\frac{8g_{c}^{3}}{3}+\frac{1}{3}\left(x_{2}^{3}-x_{1}^{3}\right)+g_{c}^{4}\left(\frac{1}{x_{2}}-\frac{1}{x_{1}}\right)\right]&x_{1}<-g_{c}\text{ and }x_{2}>g_{c}\\ 0&|x_{1}|<g_{c}\text{ and }0<x_{2}\leq g_{c}\\ D_{2}\left[\frac{x_{2}^{3}}{3}+\frac{g_{c}^{4}}{x_{2}}-\frac{4g_{c}^{3}}{3}\right]&|x_{1}|<g_{c}\text{ and }x_{2}>g_{c}\\ D_{2}\left[\frac{1}{3}\left(x_{2}^{3}-x_{1}^{3}\right)+g_{c}^{4}\left(\frac{1}{x_{2}}-\frac{1}{x_{1}}\right)\right]&x_{1},\ x_{2}>g_{c}\text{ and }\ x_{1}<x_{2}\end{cases} \tag{A5}\]
where \(D_{2}=\frac{N}{2\sqrt{3}\sigma\omega^{2}}\).
For the NP (\(|g|\leq g_{c}\)), the third case applies and thus:
\[\overline{E_{G}} =-\frac{N\omega_{0}}{2}, \tag{A6}\] \[\overline{\langle a^{\dagger}a\rangle} =0. \tag{A7}\]
On the other hand, for the SP (\(|g|>g_{c}\)), we consider only the fourth case, \(|x_{1}|<g_{c}\) and \(x_{2}>g_{c}\), for the QPT around \(g_{c}\). Thus:
\[\overline{E_{G}} =-\frac{N}{2\sqrt{3}\sigma\omega^{2}}\left[\frac{x_{2}^{3}}{3}-\frac{g_{c}^{4}}{x_{2}}+\frac{2g_{c}^{3}}{3}\right], \tag{A8}\] \[\overline{\langle a^{\dagger}a\rangle} =\frac{N}{2\sqrt{3}\sigma\omega^{2}}\left[\frac{x_{2}^{3}}{3}+\frac{g_{c}^{4}}{x_{2}}-\frac{4g_{c}^{3}}{3}\right]. \tag{A9}\]
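As a sanity check of the fourth case (a minimal Python sketch, not from the original paper), one can compare Eq. (A9) against direct numerical quadrature. The per-realization boson number used below, \(\langle a^{\dagger}a\rangle=(N/\omega^{2})(g^{2}-g_{c}^{4}/g^{2})\) in the SP and zero in the NP, is the standard clean-model mean-field expression and is assumed here to match Eq. (4) of the main text:

```python
import numpy as np
from scipy.integrate import quad

# Illustrative parameters; only the value of g_c enters the formulas below.
N, omega, g_c = 1.0, 1.0, 0.5
mu, sigma = 0.45, 0.2                      # chosen so that |x1| < g_c < x2 (fourth case)
x1, x2 = mu - np.sqrt(3) * sigma, mu + np.sqrt(3) * sigma
assert abs(x1) < g_c < x2

def n_boson(g):
    """Assumed clean-model boson number: nonzero only in the superradiant phase."""
    return (N / omega**2) * (g**2 - g_c**4 / g**2) if abs(g) > g_c else 0.0

# Disorder average over the box distribution P_u(g) = 1/(2*sqrt(3)*sigma), Eq. (A1).
P = 1.0 / (2.0 * np.sqrt(3) * sigma)
numeric, _ = quad(lambda g: P * n_boson(g), x1, x2, points=[g_c])

# Closed form, fourth case of Eq. (A5) / Eq. (A9).
D2 = N / (2.0 * np.sqrt(3) * sigma * omega**2)
closed = D2 * (x2**3 / 3.0 + g_c**4 / x2 - 4.0 * g_c**3 / 3.0)

print(numeric, closed)   # the two values agree
```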
|
2304.14954 | A Class of Dependent Random Distributions Based on Atom Skipping | We propose the Plaid Atoms Model (PAM), a novel Bayesian nonparametric model
for grouped data. Founded on an idea of `atom skipping', PAM is part of a
well-established category of models that generate dependent random
distributions and clusters across multiple groups. Atom skipping refers to
stochastically assigning 0 weights to atoms in an infinite mixture. Deploying
atom skipping across groups, PAM produces a dependent clustering pattern with
overlapping and non-overlapping clusters across groups. As a result,
interpretable posterior inference is possible such as reporting the posterior
probability of a cluster being exclusive to a single group or shared among a
subset of groups. We discuss the theoretical properties of the proposed and
related models. Minor extensions of the proposed model for multivariate or
count data are presented. Simulation studies and applications using real-world
datasets illustrate the performance of the new models with comparison to
existing models. | Dehua Bi, Yuan Ji | 2023-04-28T16:18:43Z | http://arxiv.org/abs/2304.14954v2 | # PAM: Plaid Atoms Model for Bayesian Nonparametric Analysis of Grouped Data
###### Abstract
We consider dependent clustering of observations in groups. The proposed model, called the plaid atoms model (PAM), estimates a set of clusters for each group and allows some clusters to be either shared with other groups or uniquely possessed by the group. PAM is based on an extension to the well-known stick-breaking process by adding zero as a possible value for the cluster weights, resulting in a zero-augmented beta (ZAB) distribution in the model. As a result, ZAB allows some cluster weights to be exactly zero in multiple groups, thereby enabling shared and unique atoms across groups. We explore theoretical properties of PAM and show its connection to known Bayesian nonparametric models. We propose an efficient slice sampler for posterior inference. Minor extensions of the proposed model for multivariate or count data are presented. Simulation studies and applications using real-world datasets illustrate the model's desirable performance.
_Keywords:_ Clustering; Dependent clustering; Dirichlet process; MCMC; Slice sampler; Stick-breaking process.
Introduction
Clustering, or unsupervised learning, is a primary tool for data analysis and scientific exploration. Representative clustering methods include algorithmic approaches like K-Means (MacQueen, 1967) and model-based clustering like MClust (Fraley and Raftery, 1998). Alternatively, Bayesian nonparametric (BNP) models like the Dirichlet process (DP) (Ferguson, 1973) induce clusters naturally through their property of allowing ties among observations. These ties are also referred to as atoms in some literature, e.g., in Denti et al. (2021). Hereafter, we use "clusters" and "atoms" interchangeably.
For complex problems and data structures, dependent clustering is often necessary. For example, in linguistic research, it is of interest to discover common themes across multiple documents (Teh et al., 2004), where the themes are modeled as shared clusters. In biomedical research, modern experiments routinely generate high-throughput "-omics" data for multiple subjects. It is desirable to identify shared features across subjects, where a feature is often defined as a cluster of molecular units (e.g., genes). A common question for many dependent clustering problems is whether a clustering method can simultaneously cluster the observations within each group and capture the shared clusters across all or some groups. A group could be an individual subject or a scientific study, and observations could be genes of the subject or experimental units for the study. Various dependent clustering approaches have been proposed in the literature. For instance, Teh et al. (2004) pioneered a hierarchical DP (HDP) model to cluster observations arranged in groups. Through the use of a DP prior as the base measure for another DP model, the authors built a foundation for generating clusters that are common across the groups but with varying weights. In other words, all groups share the same atoms (clusters) but with different weights (sizes). Rodriguez et al. (2008) proposed a different structure called the Nested DP (NDP), which induces two layers of clusters, one for the groups and the other for the observations within
each group. As a result, observations within each group form observational clusters, and groups sharing the same observational clusters form distributional clusters. Groups belonging to the same distributional cluster have the same atoms and weights for the observational clusters, which is different from HDP. In other words, under NDP if two groups have the same atoms, they must also have the same weights. This phenomenon is referred to as "degeneracy" (Camerlenghi et al., 2019). Conversely, by construction it is impossible for HDP to produce identical weights for observational clusters across different groups. With probability one, the weights of atoms under HDP are different across groups.
Recognizing the properties of HDP and NDP, Camerlenghi et al. (2019) proposed a latent nested process (LNP) model based on common and group-specific completely random measures (CRMs). The LNP model allows groups to have common or unique clusters, and if common, identical weights. More recently, Denti et al. (2021) proposed a common atoms model (CAM) that allows common atoms with different or identical weights across groups, with more efficient computation. Table A.1 in Appendix A.1 summarizes the features of these BNP models, along with our proposed model, called the plaid atoms model (PAM). Other BNP models have been proposed to generate various dependent clustering structures, including the semi-HDP (Beraha et al., 2021) and hidden-HDP (Lijoi et al., 2022), which are not thoroughly reviewed due to space limits.
Our research is motivated by the need to generate novel dependent clustering structures across groups that allow for both common and unique atoms. We use the term "plaid atoms" to represent this structure, hence the name plaid atoms model (PAM). Under PAM, two groups may share a subset of common atoms but also possess unique ones. For example, two documents may both cover Roman history, but only one document includes the theme of religion. Similarly, two clinical trials may share subpopulations of adult patients with similar characteristics, but only one trial consists of pediatric patients. Therefore, in PAM, we generalize existing work by proposing plaid atoms that include both common and unique
ones across groups. The dependent clustering is governed by a hierarchical BNP model that uses a zero-augmented beta (ZAB) distribution in a stick-breaking representation (Sethuraman, 1994) of the HDP. This allows the weight of an atom to be exactly zero in some groups but not in others, thereby effectively removing the atom from some groups. Therefore, the atom possessed by the remaining groups is common and shared by these groups. A unique atom for a group can be generated when that atom is removed from all the other groups.
Along with PAM, we also propose a marginal process for data from a single group. The process, called the fractional stick-breaking process (FSBP), is the mean process of PAM and possesses interesting and useful properties that connect it to known BNP models. In the theoretical discussion, we derive results for both models, FSBP and PAM. However, in the applications of this paper, we focus on PAM as it is the main motivation and interest of this work. We propose an efficient computational approach based on the slice sampler (Kalli et al., 2011), following the work in Denti et al. (2021), but with substantial modifications to accommodate the ZAB construction. Inference under PAM provides useful estimates for describing the clustering results and assessing their accuracy in data analysis. Lastly, we implement PAM for datasets with either univariate or multivariate observations, allowing PAM to be applied in a wide range of applications.
The remaining sections of the article are as follows. In Section 2, we introduce the PAM for both continuous and count data, as well as the FSBP. Section 3 discusses the theoretical properties of FSBP and PAM. In Section 4, we discuss posterior inference and outline the slice sampler algorithm for PAM. Section 5 compares the performance of our proposed method with other models through simulation studies. Section 6 applies the proposed model to two publicly available datasets. Section 7 concludes the paper with some discussion.
Two Proposed Models
### Overview
We will begin by presenting the proposed general model, PAM, for grouped data analysis, followed by a discussion of the fractional stick-breaking process (FSBP), which is the mean process of PAM. Consider a dataset with \(J\) groups of observations, where each group \(j\) consists of \(n_{j}\) observations of dimension \(p\geq 1\). Denote the \(i\)th observation in the \(j\)th group by \(\mathbf{y}_{i,j}=(y_{i,j,1},\ldots,y_{i,j,p})\), \(i=1,\ldots,n_{j}\), and let \(\mathbf{y}_{j}=\{\mathbf{y}_{1,j},\ldots,\mathbf{y}_{n_{j},j}\}\) represent all the observations in group \(j\), \(j=1,\ldots,J.\) Assume that each observation \(\mathbf{y}_{i,j}\), \(i=1,\ldots,n_{j}\) and \(j=1,\ldots,J\), takes values in \(X\), a suitable Polish space endowed with the respective Borel \(\sigma\)-field \(\mathcal{X}\). Our goal is to partition the observations \(\mathbf{y}_{j}\) within each group into clusters, allowing some but not all clusters to be shared with clusters in other groups.
### Plaid Atoms Model - Continuous Data
Assume observation \(\mathbf{y}_{i,j}\) (could be a scalar or vector) arises from a nonparametric mixture model indexed by parameter \(\mathbf{\theta}_{i,j}\) and a random distribution \(G_{j}\). Mathematically, we write
\[\mathbf{y}_{i,j}|\mathbf{\theta}_{i,j}\sim F(\mbox{\boldmath $y$}_{i,j}|\mathbf{\theta}_{i,j}),\quad\mathbf{\theta}_{i,j} |G_{j}\sim G_{j},\;\;i=1,\ldots,n_{j};\;j=1,\ldots,J,\]
where \(F(\cdot|\mathbf{\theta}_{i,j})\) is a parametric distribution for \(\mathbf{y}_{i,j}\). BNP models assume \(G_{j}\) follows a nonparametric prior distribution. For example, in HDP [Teh et al., 2004], each \(G_{j}\) is assigned a DP prior with base measure \(G_{0}\), which itself is an instance of \(DP\), i.e.,
\[G_{j}|\alpha_{0},G_{0} \sim \mbox{DP}(\alpha_{0},G_{0}),\] \[G_{0}|\gamma,H \sim \mbox{DP}(\gamma,H).\]
Using the stick-breaking representation (Sethuraman, 1994) of DP, HDP can be rewritten as
\[\begin{split}& G_{j}=\sum_{k=1}^{\infty}\pi_{j,k}\delta_{\mathbf{\phi}_ {\mathbf{k}}},\,\pi_{j,k}=\pi^{\prime}_{j,k}\prod_{l=1}^{k-1}(1-\pi^{\prime}_{j,l}) \\ &\pi^{\prime}_{j,k}\sim\text{Beta}\left(\alpha_{0}\beta_{k}, \alpha_{0}\left(1-\sum_{l=1}^{k}\beta_{l}\right)\right)\\ &\mathbf{\phi}_{k}\sim H,\text{ and }\qquad\quad\beta_{k}\sim\text{GEM }(\gamma)\end{split} \tag{1}\]
where \(\delta_{\{\cdot\}}\) denotes a point mass (Dirac measure) at the indicated atom, and GEM is the Griffiths-Engen-McCloskey distribution (Pitman, 2002). Specifically, \(\beta_{k}\sim\text{GEM}(\gamma)\) means that \(\beta_{k}=\beta^{\prime}_{k}\prod_{l=1}^{k-1}(1-\beta^{\prime}_{l})\), and \(\beta^{\prime}_{k}\sim\text{Beta}(1,\gamma)\), where \(\text{Beta}(a,b)\) denotes a beta distribution with mean \(a/(a+b).\) Note that \(G_{0}\) can be reconstructed from \(\{\beta_{k}\}_{k=1}^{\infty}\) and \(\{\mathbf{\phi}_{\mathbf{k}}\}_{k=1}^{\infty}\) in equation (1) by \(G_{0}=\sum_{k=1}^{\infty}\beta_{k}\delta_{\mathbf{\phi}_{\mathbf{k}}}\). Appropriate prior distributions like gamma can be specified for \(\alpha_{0}\) and \(\gamma\) to complete HDP. Since all \(G_{j}\)'s have the same set of atoms \(\mathbf{\phi}_{k}\), by construction, the HDP model (1) assumes all groups share a set of common atoms. In the proposed PAM, we allow \(\pi^{\prime}_{j,k}\) to take the zero value for each group \(j\), effectively removing atoms \(\mathbf{\phi}_{k}\) from the group. Specifically, let \(p_{j}\in(0,1)\) denote a group-specific parameter that controls the proportion of atoms \(\pi_{j,k}\) to be retained (each atom is removed with probability \(1-p_{j}\)). Thus, we propose a zero-augmented beta (ZAB) distribution for \(\pi^{\prime}_{j,k}\), given by,
\[f(\pi^{\prime}_{j,k})=p_{j}\times\underbrace{f_{\text{Beta}}\left(\alpha_{0} \beta_{k},\alpha_{0}\left(1-\sum_{l=1}^{k}\beta_{l}\right)\right)}_{\text{beta prior in (1)}}+(1-p _{j})\times\underbrace{I(\pi^{\prime}_{j,k}=0)}_{\text{zero augmentation}}\quad, \tag{2}\]
where \(f_{\text{Beta}}(a,b)\) is the probability density function (p.d.f) of the beta distribution, and \(I(A)\) is the indicator function for condition \(A\). By replacing the beta prior distribution for \(\pi^{\prime}_{j,k}\) in (1) with (2), the proposed PAM as the prior distribution for \(G_{j}\) can be written as
follows:
\[\begin{split}& G_{j}=\sum_{k=1}^{\infty}\pi_{j,k}\delta_{\mathbf{\phi}_{ k}},\ \pi_{j,k}=\pi^{\prime}_{j,k}\prod_{l=1}^{k-1}(1-\pi^{\prime}_{j,l})\\ & f(\pi^{\prime}_{j,k}|\mathbf{\beta},p_{j},\alpha_{0})=p_{j}\times f _{\text{Beta}}\left(\alpha_{0}\beta_{k},\alpha_{0}\left(1-\sum_{l=1}^{k}\beta_ {l}\right)\right)+(1-p_{j})\times I(\pi^{\prime}_{j,k}=0)\\ &\beta_{k}|\gamma\sim\text{GEM}(\gamma),\ \ \mathbf{\phi}_{k}\sim H.\end{split} \tag{3}\]
where \(\mathbf{\beta}=\{\beta_{k}\}_{k=1}^{\infty}\). We use \(G_{j}\sim\text{PAM}(\mathbf{p},\alpha_{0},\gamma,H)\) to denote (3), with \(\mathbf{p}=\{p_{1},\ldots,p_{J}\}\). Priors need to be placed on the parameters \(\mathbf{p}\), \(\alpha_{0}\), and \(\gamma\); for example,
\[p_{j}|a,b\sim\text{Beta}(a,b),\alpha_{0}\sim\text{Gamma}(a_{\alpha},b_{\alpha }),\gamma\sim\text{Gamma}(a_{\gamma},b_{\gamma}).\]
Adopting the parametrization in Denti et al. (2021) and Teh et al. (2004), and adding the sampling model for observation \(\mathbf{y}_{i,j}\), the proposed PAM can be represented using a set of latent indicator variables \(\mathbf{Z}=\{z_{i,j}\}_{\forall i,j}\) as cluster memberships for the observations. In other words, \(z_{i,j}=k\) if observation \(i\) in group \(j\) is assigned to cluster \(k\). Denoting \(\mathbf{\pi}_{j}=\{\pi_{j,k}\}_{k=1}^{\infty}\), the proposed PAM mixture model is given by:
\[\begin{split}&\mathbf{y}_{i,j}|z_{i,j},\mathbf{\phi}_{k}\sim F(\mathbf{y}_{i,j} |\mathbf{\phi}_{z_{i,j}}),\\ & z_{i,j}|\mathbf{\pi}_{j}\sim\sum_{k=1}^{\infty}\pi_{j,k}\delta_{k} (z_{i,j}),\ \ \pi_{j,k}=\pi^{\prime}_{j,k}\prod_{l=1}^{k-1}(1-\pi^{\prime}_{j,l}),\\ & f(\pi^{\prime}_{j,k}|\mathbf{\beta},p_{j},\alpha_{0})=p_{j}\times f _{\text{Beta}}\left(\alpha_{0}\beta_{k},\alpha_{0}\left(1-\sum_{l=1}^{k}\beta _{l}\right)\right)+(1-p_{j})\times I(\pi^{\prime}_{j,k}=0).\end{split} \tag{4}\]
The priors of \(\mathbf{\beta}\) and \(\mathbf{\phi}_{k}\) remain the same as in (3), and the same priors can also be used for \(p_{j}\), \(\alpha_{0}\) and \(\gamma\). Except for the sampling distribution of \(\mathbf{y}_{i,j}\) in (4), models (3) and (4) are equivalent. The notation \(G_{j}\) in (3) is replaced with \(z_{i,j}\) in (4). This reparameterization is routinely used to facilitate posterior inference (Denti et al., 2021; Teh et al., 2004), which will be clear later on.
In equations (3) and (4), we use the Gaussian kernel for univariate observations
(\(p=1\)) by setting \(\mathbf{\phi}_{k}=(\mu_{k},\sigma_{k}^{2})\) and \(F(\cdot|\mathbf{\phi}_{k})=N(\cdot|\mu_{k},\sigma_{k}^{2})\). The base measure \(H\) is modeled as the conjugate prior of normal-inverse-gamma (NIG), where \(H=\text{NIG}(\mu_{0},\kappa_{0},\alpha_{0},\beta_{0})\), i.e., \(\mu_{k}|\sigma_{k}^{2}\sim N(\mu_{0},\sigma_{k}^{2}/\kappa_{0})\) and \(\sigma_{k}^{2}\sim\text{IG}(\alpha_{0},\beta_{0})\). For multivariate observations (\(p>1\)), \(\mathbf{\phi}_{k}=(\mathbf{\mu}_{k},\mathbf{\Sigma}_{k})\), where \(\mathbf{\Sigma}_{k}\) is a \(p\times p\) covariance matrix. We use \(F(\cdot|\mathbf{\phi}_{k})=\text{MVN}(\cdot|\mathbf{\mu}_{k},\mathbf{\Sigma}_{k})\) to model multivariate Gaussian, and adopt the conjugate prior of normal-inverse-Wishart \(H=\text{NIW}(\mathbf{\mu}_{0},\nu_{0},\kappa_{0},\mathbf{\Psi})\), i.e., \(\mathbf{\mu}_{k}\sim\text{MVN}(\mathbf{\mu}_{0},\mathbf{\Sigma}_{k}/\nu_{0})\) and \(\mathbf{\Sigma}_{k}\sim\text{IW}(\mathbf{\Psi},\kappa_{0})\), where IW is the inverse-Wishart distribution.
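To make the generative mechanism concrete, the following is a minimal Python sketch of a truncated forward simulation from the PAM prior in (3) with the univariate Gaussian kernel. The truncation level \(K\), the hyperparameter values, and the simplified draw of the atom scales are illustrative assumptions, not part of the model specification:

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_pam(J=3, n_j=100, K=30, alpha0=1.0, gamma=1.0, a=0.5, b=0.5):
    """Truncated draw from the PAM generative model, Eqs. (3)-(4), with p = 1."""
    # Shared atoms phi_k ~ H (here a simplified normal-inverse-gamma draw)
    mu = rng.normal(0.0, 10.0, size=K)             # atom means
    sd = np.sqrt(1.0 / rng.gamma(3.0, 1.0, K))     # atom sds, sigma^2 ~ IG(3, 1)

    # Top-level weights beta ~ GEM(gamma)
    v = rng.beta(1.0, gamma, size=K)
    beta = v * np.cumprod(np.concatenate(([1.0], 1.0 - v[:-1])))

    data, labels = [], []
    for j in range(J):
        p_j = rng.beta(a, b)                       # retention probability
        pi_prime = np.zeros(K)
        for k in range(K):
            if rng.uniform() < p_j:                # atom skipping: weight is 0 w.p. 1 - p_j
                b2 = alpha0 * max(1.0 - beta[: k + 1].sum(), 1e-10)
                pi_prime[k] = rng.beta(alpha0 * beta[k], b2)
        # Stick-breaking transform of the (possibly zero) fractions
        pi = pi_prime * np.cumprod(np.concatenate(([1.0], 1.0 - pi_prime[:-1])))
        pi[-1] = max(1.0 - pi[:-1].sum(), 0.0)     # absorb truncation remainder (approximation)
        z = rng.choice(K, size=n_j, p=pi / pi.sum())
        data.append(rng.normal(mu[z], sd[z]))
        labels.append(z)
    return data, labels, mu

data, labels, mu = sample_pam()
print([np.unique(z).size for z in labels])          # clusters occupied in each group
```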
### Plaid Atoms Model - Count Data
Following Denti et al. (2021), we extend the proposed PAM to count data and refer to it as the Discrete Plaid Atoms Model (DPAM). Let the dimension of the observations be \(p=1\). Let \(x_{i,j}\in\mathbb{N}\) be the observed count data for observation \(i=1,\ldots,n_{j}\) in group \(j=1,\ldots,J\), where \(\mathbb{N}\) denotes the natural numbers. Thus the data vector \(\mathbf{x}_{j}=(x_{1,j},\ldots,x_{n_{j},j})\) is the set of counts observed for the \(j\)th group. We apply the data augmentation framework in Canale and Dunson (2011) and introduce latent continuous variables \(y_{i,j}\) so that
\[\Pr(x_{i,j}=\omega)=\int_{a_{\omega}}^{a_{\omega+1}}g(y_{i,j})dy_{i,j},\quad \omega=0,1,2,\cdots \tag{5}\]
where \(a_{0}<a_{1}<\cdots<a_{\infty}\) is a fixed sequence of thresholds that take values \(\{a_{\omega}\}_{\omega=0}^{\infty}=\{-\infty,0,1,2,\ldots,+\infty\}\), and \(g(y_{i,j})\) follows the PAM mixture model as in equation (4). This construction allows posterior inference for \(y_{i,j}\) since it is trivial to see that
\[x_{i,j}|y_{i,j}=\sum_{\omega=0}^{\infty}\mathbf{1}_{\omega}(x_{i,j})\cdot\mathbf{1}_{ [a_{\omega},a_{\omega+1})}(y_{i,j}),\]
where \(\mathbf{1}_{a}(b)\) equals \(1\) if \(b=a\) (or \(b\in a\) when \(a\) is a set), and \(0\) otherwise.
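With the thresholds \(\{a_{\omega}\}=\{-\infty,0,1,2,\ldots\}\), the deterministic map from the latent \(y_{i,j}\) back to the count \(x_{i,j}\) reduces to a simple rounding rule, sketched below in Python (the function name is ours):

```python
import numpy as np

def count_from_latent(y):
    """Map latent y to a count x via thresholds {-inf, 0, 1, 2, ...}; see Eq. (5)."""
    y = np.asarray(y, dtype=float)
    # x = 0 iff y < 0; x = w iff w - 1 <= y < w for w >= 1.
    return np.where(y < 0.0, 0, np.floor(y).astype(int) + 1)

print(count_from_latent([-2.3, 0.4, 1.0, 3.7]))  # -> [0 1 2 4]
```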
### Fractional Stick-breaking Process (FSBP)
We now introduce the new marginal process FSBP for data without grouping structure. This process is the mean of PAM. Let \(\mathcal{P}(X)\) be the set of probability measures on \((X,\mathcal{X})\). We define the fractional stick-breaking process (FSBP) as follows. Let \(\bar{p}\in(0,1]\) be a fixed constant and \(a,b>0\) fixed shape parameters. Furthermore, let \(H\) be a fixed, non-atomic distribution. For a random distribution \(G^{*}\in\mathcal{P}(X)\), we denote \(G^{*}\sim FSBP(\bar{p},a,b,H)\) if for \(k\geq 1\)
\[\begin{split}& G^{*}=\sum_{k=1}^{\infty}\pi_{k}\delta_{\mathbf{\phi}_{ k}},\ \pi_{k}=\bar{p}\cdot{\pi_{k}}^{\prime}\prod_{l=1}^{k-1}(1-\bar{p}\cdot\pi_{l}^{ \prime}),\\ &{\pi_{k}}^{\prime}\sim\text{Beta}(a,b),\ \mathbf{\phi}_{k}\sim H.\end{split} \tag{6}\]
Notice that we have slightly abused the notation by using \(\pi_{k}\) to denote the weight of \(G^{*}\), which is similar to \(\pi_{j,k}\) in equation (3). In Section 3, we show that the mean process of the PAM model is the FSBP. Therefore, learning about FSBP sheds light on the theoretical properties of the more general but complex PAM model. Moreover, FSBP is also connected to many random probability measures (RPM) and stochastic processes in the literature, which we briefly discuss next.
First of all, when \(\bar{p}=a=1\), FSBP becomes the usual stick-breaking process (SBP), hence the name FSBP. Since the stick-breaking process is equivalent to DP, we have \(FSBP(1,1,b,H)=DP(b,H)\). Second, when \(\bar{p}<1\), FSBP induces a different mechanism from SBP in generating the "breaks of sticks". Instead of breaking \(\pi_{k}^{\prime}\) portion of the stick for atom \(k\) in SBP, FSBP only breaks \(\bar{p}\cdot\pi_{k}^{\prime}\) portion. This means that each break is smaller but the remaining stick is longer. Third, FSBP is a special case of the kernel stick-breaking process (KSBP) of Dunson and Park (2008), where the kernel function is independent of the covariates and equal to a fixed constant of \(\bar{p}\). The beta parameters are also fixed to \(a\) and \(b\) for all \(k\), i.e., independent of the index \(k\). Lastly, it is closely related to the geometric stick-breaking (GSB) RPM of Mena et al. (2011) if \(\bar{p}=1\), and we modify
equation (6) such that \(\pi_{k}{}^{\prime}=\pi^{\prime}\) for all \(k\geq 1\) and \(\pi^{\prime}\sim\mbox{Beta}(a,b)\).
## 3 Theoretical Properties of FSBP and PAM
### Properties of FSBP
In this section, we explore the theoretical properties of the proposed FSBP and the PAM model. We first present results on FSBP. We assume \(a=1\) and \(b=\gamma\) in FSBP so that it is closely related to the aforementioned BNP models. For simplicity, we denote \(FSBP(\bar{p},a=1,b=\gamma,H)\) as \(FSBP(\bar{p},\gamma,H)\). In Theorem 1 below, we establish the mean and variance of \(G^{*}\sim FSBP(\bar{p},\gamma,H)\).
**Theorem 1**.: _For an arbitrary set \(A\subseteq X\), let \(\bar{p}\in(0,1]\) and \(\gamma>0\) be fixed constants, and \(H\) a fixed probability measure. For \(G^{*}\sim\mbox{FSBP}(\bar{p},\gamma,H)\), the mean and variance of \(G^{*}\) on \(A\) are_
\[E[G^{*}(A)]=H(A),\mbox{ Var}\left(G^{*}(A)\right)=\frac{H(A)\{1-H(A)\}}{v},\mbox{ where }v=\frac{1+\gamma}{\bar{p}}+\frac{1-\bar{p}}{\bar{p}}.\]
**Remark 1**.: _The mean and variance of \(G^{*}\) match the mean and variance of a DP \(G^{\prime}\sim DP(v-1,H)\)._
The proof of the theorem is in Appendix A.2. Theorem 1 shows that the mean and variance of FSBP and DP are connected.
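Theorem 1 is easy to verify numerically. The following Python sketch (illustrative only; the truncation level and replication count are arbitrary choices) simulates truncated draws of \(G^{*}\) with \(H=N(0,1)\) and \(A=(-\infty,0)\), and compares the Monte Carlo mean and variance of \(G^{*}(A)\) with the expressions in the theorem:

```python
import numpy as np

rng = np.random.default_rng(2)
p_bar, gamma, K, reps = 0.6, 1.5, 200, 4000

HA, mass = 0.5, []                                  # A = (-inf, 0), so H(A) = 0.5
for _ in range(reps):
    v = rng.beta(1.0, gamma, size=K)                # pi'_k ~ Beta(1, gamma)
    w = p_bar * v                                   # fractional breaks
    pi = w * np.cumprod(np.concatenate(([1.0], 1.0 - w[:-1])))
    phi = rng.normal(size=K)                        # atoms drawn from H
    mass.append(pi[phi < 0.0].sum())                # G*(A); truncation error is negligible

v_th = (1.0 + gamma) / p_bar + (1.0 - p_bar) / p_bar
print(np.mean(mass), HA)                            # Monte Carlo mean ~ H(A)
print(np.var(mass), HA * (1 - HA) / v_th)           # Monte Carlo variance ~ H(A)(1-H(A))/v
```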
We now derive the Exchangeable Partition Probability Function (EPPF) for \(G^{*}\) in a special case of FSBP, when \(\gamma=1\). The general case of \(\gamma>0\) has no closed-form results. The EPPF is the probability of a random partition of \(n\) samples from \(G^{*}\), which is an almost surely discrete distribution. Specifically, according to its definition given by equation (6), \(G^{*}\) is an infinite mixture of point masses, denoted by \(\mathbf{\Phi}=\{\mathbf{\phi}_{1},\mathbf{\phi}_{2},\ldots\}\). If we consider
random samples from \(G^{*}\), given by \(\mathbf{\Theta}=\{\mathbf{\theta}_{1},\ldots,\mathbf{\theta}_{n}\}\), where \(\mathbf{\theta}_{i}|G^{*}\sim G^{*}\) for \(i=1,\ldots,n\), each \(\mathbf{\theta}_{i}\) takes a value in \(\mathbf{\Phi}\) with a probability. Therefore, ties might be generated among the \(\mathbf{\theta}\)'s. We assume \(\mathbf{\Theta}\) possesses \(K\) unique values taken from \(\mathbf{\Phi}\), and denote these unique values by \(\mathbf{\Phi}_{K}=\{\mathbf{\phi}_{r_{1}},\ldots,\mathbf{\phi}_{r_{K}}\}\), where each \(r_{k}\) indexes the \(k\)th cluster and \(r_{k}\in\mathbb{N}=\{1,2,\ldots\}\), the positive integers. Denote \(\nabla=\{r_{1},\ldots,r_{K}\}\) the index set of the \(K\) clusters. Denote \(\mathbf{z}=\{z_{1},\ldots,z_{n}\}\) the label vector where \(\{z_{i}=k\}\) if \(\mathbf{\theta}_{i}\) is in cluster \(k\), i.e., \(\{\mathbf{\theta}_{i}=\mathbf{\phi}_{r_{k}}\}.\) Let \(c_{k}=\{i:z_{i}=k\}\) be the set of indices \(i\)'s for cluster \(k\), i.e., \(\forall i\in c_{k},\mathbf{\theta}_{i}=\mathbf{\phi}_{r_{k}}.\) Therefore, given \(\mathbf{z}\), the set \(C(\mathbf{z})=\{c_{k},r_{k}\in\nabla\}\) forms a partition of \(\{1,\ldots,n\}\). At last, the EPPF of \(G^{*}\) evaluated at a specific partition \(C\) of \(\{1,\ldots,n\}\) is defined as \(\Pr(C(\mathbf{z})=C)\)(Pitman, 1995). Following the work of Miller (2019), we derive the expression of the EPPF of \(G^{*}\) when \(\gamma=1\) in the following theorem. For the upcoming discussion, notice that we denote \(S_{K}\) as the set of \(K!\) permutations of \(\{1,\ldots,K\}\). That is, an element \(\mathbf{\lambda}\in S_{K}\) is a permutation of \(\{1,\ldots,K\}\), denoted as \(\mathbf{\lambda}=\{\lambda_{1},\ldots,\lambda_{K}\}\).
**Theorem 2**.: _Let \(\bar{p}\in(0,1]\) be a fixed constant, and let \(H\) be a fixed probability measure. Let \(G^{*}\sim FSBP(\bar{p},1,H)\). The EPPF of \(G^{*}\) for \(n\) samples is given by_
\[\frac{1}{\Gamma(n+1)}\left\{\prod_{c\in C}\Gamma(|c|)\right\}\left\{\prod_{c \in C}|c|\right\}\left[\sum_{\mathbf{\lambda}\in S_{K}}\left\{\prod_{k=1}^{K} \left(\xi_{k}\cdot\left(\alpha_{k}(\mathbf{\lambda})+1\right)-1\right)^{-1}\right\} \right],\]
_where_
\[\xi_{k}=\frac{\bar{p}}{F(\alpha_{k+1}(\mathbf{\lambda});\alpha_{k}(\mathbf{\lambda})+ 1,1-\bar{p})},\]
\(\Gamma(\cdot)\) _is the gamma function, \(|c|\) denotes the cardinality of the set \(c\), \(F(\cdot;n,p)\) is the CDF of a binomial distribution with size \(n\) and success probability \(p\), \(\alpha_{k}(\mathbf{\lambda})=|c_{\lambda_{k}}|+|c_{\lambda_{k+1}}|+\cdots+|c_{ \lambda_{K}}|\), and \(c_{\lambda_{k}}\) is the \(\lambda_{k}\)'s component of \(C\). When \(\bar{p}\to 1\), we have \(\xi_{k}\to 1\), and the EPPF of \(G^{*}\)
_converges to the EPPF of \(G_{0}\sim DP(1,H)\), which is given by_
\[\frac{1}{\Gamma(n+1)}\left\{\prod_{c\in C}\Gamma(|c|)\right\}.\]
The proof of Theorem 2 is given in Appendix A.3. Theorem 2 establishes the connection between the EPPFs of FSBP and DP.
We next explore the clustering property of the FSBP. Returning to the general case where \(\gamma>0\), we will show that the expected number of clusters in \(G^{*}\) is greater than the expected number of clusters in the corresponding DP with \(G_{0}\sim DP(\gamma,H)\). The first lemma shows the probability of forming a new cluster with the \(i\)th sample \(\mathbf{\theta}_{i}\), i.e., \(\mathbf{\theta}_{i}\notin\{\mathbf{\theta}_{1},...,\mathbf{\theta}_{i-1}\}\), when they are generated from FSBP.
**Lemma 1**.: _Let \(\bar{p}\in(0,1]\) and \(\gamma>0\) be fixed constants, and let \(H\) be a fixed probability measure. Let \(G^{*}\sim FSBP(\bar{p},\gamma,H)\), and let \(\mathbf{\theta}_{1},\cdots,\mathbf{\theta}_{i}|G^{*}\sim G^{*}\). Denote \(w_{i}\) as a binary indicator for the \(i\)th sample \(\mathbf{\theta}_{i}\), such that_
\[w_{i}=\begin{cases}1&\text{if }\mathbf{\theta}_{i}\notin\{\mathbf{\theta}_{1},\cdots, \mathbf{\theta}_{i-1}\}\\ 0&\text{o.w.}\end{cases},\]
_then, for \(i\geq 2\),_
\[Pr(w_{i}=1|\bar{p},\gamma)=1-\sum_{k=2}^{i}(-1)^{k}\binom{i-1}{k-1}\frac{(k-1)!}{\prod_{l=1}^{k}(l+k)}\frac{(\gamma+1)\bar{p}^{k-1}}{{}_{2}F_{1}(1,1-k; \gamma+2;\bar{p})}\]
_where \({}_{2}F_{1}(a,b;c;z)\) is the hypergeometric function (Abramowitz et al., 1988)._
The proof of Lemma 1 is in Appendix A.4. Next, we consider a special case of \(G^{*}\), where \(\bar{p}=1\) (in this case, \(G^{*}\) reduces to \(G_{0}\sim DP(\gamma,H)\)). In this case, the probability of forming
a new cluster with the \(i\)th sample \(\mathbf{\theta}_{i}\) coincides with that of the DP:
**Lemma 2**.: _Let \(\bar{p}=1\) in Lemma 1, then_
\[\mbox{Pr}(w_{i}=1|\bar{p}=1,\gamma)=\frac{\gamma}{\gamma+i-1}.\]
The proof of Lemma 2 is in Appendix A.5. Notice that the result in Lemma 2 corresponds to the probability that the \(i\)th sample \(\mathbf{\theta}_{i}\) is drawn from the base measure \(H\) in the Polya urn scheme of the DP (Blackwell and MacQueen, 1973). Based on Lemmas 1 and 2, we have the following theorem.
**Theorem 3**.: _Let \(\bar{p}\in(0,1]\), \(\gamma>0\) be fixed constants, and \(w_{i}\) be defined as in Lemma 1. Then_
\[Pr(w_{i}=1|\bar{p},\gamma)\geq\frac{\gamma}{\gamma+i-1}.\]
The proof of Theorem 3 is shown in Appendix A.6. The following corollary follows directly from Theorem 3:
**Corollary 1**.: _Let \(n^{*}\) be the prior number of clusters of \(G^{*}\sim FSBP(\bar{p},\gamma,H)\) on \(n\) samples. The prior expected number of clusters is_
\[E[n^{*}|\bar{p},\gamma]=1+\sum_{i=2}^{n}Pr(w_{i}=1|\bar{p},\gamma).\]
_Let \(n_{0}\) be the prior number of clusters of \(G_{0}\sim DP(\gamma,H)\) on \(n\) samples. The prior expected number of clusters in this case is_
\[E[n_{0}|\gamma]=\sum_{i=1}^{n}\frac{\gamma}{\gamma+i-1}.\]
_Additionally, we have_
\[E[n^{*}|\bar{p},\gamma]\geq E[n_{0}|\gamma]\approx\gamma\log\left(\frac{\gamma+n}{ \gamma}\right).\]
**Remark 2**.: _The FSBP has a higher prior expected number of clusters than DP._
We also hypothesize that the prior expected number of clusters for FSBP decreases with \(\bar{p}\), although the proof of this hypothesis is left for future work as it involves complicated manipulations of hypergeometric functions. Next, we derive properties related to PAM and show that the FSBP is the mean process of PAM.
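Corollary 1 and the monotonicity hypothesis can also be probed by simulation. The Python sketch below (truncation level and replication counts are arbitrary choices) estimates the prior expected number of clusters among \(n\) draws from a truncated FSBP and compares it with the DP value \(E[n_{0}|\gamma]\):

```python
import numpy as np

rng = np.random.default_rng(3)

def expected_clusters(p_bar, gamma, n=50, K=500, reps=2000):
    """Monte Carlo E[#clusters] among n samples from a truncated FSBP(p_bar, gamma, H)."""
    out = np.empty(reps)
    for r in range(reps):
        w = p_bar * rng.beta(1.0, gamma, size=K)
        pi = w * np.cumprod(np.concatenate(([1.0], 1.0 - w[:-1])))
        pi[-1] = max(1.0 - pi[:-1].sum(), 0.0)      # absorb truncation remainder
        z = rng.choice(K, size=n, p=pi / pi.sum())
        out[r] = np.unique(z).size
    return out.mean()

gamma, n = 1.0, 50
dp_mean = sum(gamma / (gamma + i) for i in range(n))   # E[n_0 | gamma] for DP(gamma, H)
print(dp_mean)                                          # ~ 4.5 for gamma = 1, n = 50
for p_bar in (1.0, 0.6, 0.3):
    # Values are >= dp_mean, growing as p_bar falls (consistent with the hypothesis above)
    print(p_bar, expected_clusters(p_bar, gamma, n))
```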
### Properties of PAM
When \(p_{j}=1\) for all \(j=1,\ldots,J\), equations (1) and (3) are identical. In other words, HDP is a special case of PAM when \(p_{j}=1\) for all \(j\). Now, with \(p_{j}\sim\text{Beta}(a,b)\), we show that PAM is a proper discrete distribution (Proposition 1) and that observations generated from PAM in two different groups have a positive probability of being equal, thereby forming shared clusters (Proposition 2).
**Proposition 1**.: _Assume \(G_{j}\sim\text{PAM}(\mathbf{p},\alpha_{0},\gamma,H)\) where PAM is defined in (3). Also, assume \(p_{j}\sim Beta(a,b)\) for \(j=1,\ldots,J\). Then_
1. \(\sum_{k\geq 1}\pi_{j,k}=1\)_, and_
2. \(E[\pi_{j,k}]=\frac{1}{1+\gamma^{\prime}}\left(\frac{\gamma^{\prime}}{1+\gamma ^{\prime}}\right)^{k-1}\) _where_ \(\gamma^{\prime}=\frac{1+\gamma-\bar{p}}{\bar{p}}\)_,_ \(\bar{p}=\frac{a}{a+b}\)_._
Note that we use \(\bar{p}\) to represent the prior mean of \(p_{j}\). This will correspond to the parameter \(\bar{p}\) in FSBP as shown in Theorem 4.
**Proposition 2**.: _Let \(G_{1},\ldots,G_{J}\sim\text{PAM}(\boldsymbol{p},\alpha_{0},\gamma,H)\). Without loss of generality, for two groups \(G_{1}\) and \(G_{2}\), let \(\boldsymbol{\theta}_{i,1}|G_{1}\sim G_{1}\) and \(\boldsymbol{\theta}_{i^{\prime},2}|G_{2}\sim G_{2}\), then_
\[\text{Pr}(\boldsymbol{\theta}_{i,1}=\boldsymbol{\theta}_{i^{\prime},2})>0. \tag{7}\]
The proofs of Propositions 1 and 2 are in Appendices A.7 and A.8, respectively. The second proposition is trivial but necessary to verify that PAM is a proper model choice for clustering grouped data. The first proposition is important in that it not only establishes that \(G_{j}\) is a proper discrete distribution, but also leads to the derivation of the mean process of \(G_{j}\) in the next theorem.
**Theorem 4**.: _For an arbitrary set \(A\subseteq X\), let \(\alpha_{0},\gamma>0\), \(H\) be a fixed probability measure, \(G_{0}\sim DP(\gamma,H)\), and \(G_{j}\sim\text{PAM}(\boldsymbol{p},\alpha_{0},\gamma,H)\) where PAM is defined in (3) and \(\boldsymbol{p}=\{p_{1},\ldots,p_{J}\}\). Further assume \(p_{j}\sim Beta(a,b)\) for \(j=1,\ldots,J\). Then, the conditional mean of \(G_{j}\) is given by_
\[\text{E}[G_{j}(A)|G_{0}]=G^{*}(A),\]
_where \(G^{*}\sim FSBP(\bar{p},\gamma,H)\), \(\bar{p}=a/(a+b)\)._
The proof of Theorem 4 is given in Appendix A.9. This theorem shows that the mean process of PAM is FSBP. As a consequence of Theorem 1 of FSBP, the marginal mean of \(G_{j}\) follows directly and is shown in the following corollary:
**Corollary 2**.: \(\text{E}[G_{j}(A)]=\text{E}[\text{E}[G_{j}(A)|G_{0}]]=\text{E}[G^{*}(A)]=H(A).\)__
Unfortunately, there are no closed-form results for the partition probability functions, including the EPPF with \(J=1\) and the partial exchangeable partition probability function (pEPPF) with \(J>1\) for PAM, and the expected number of clusters for PAM is not available
in closed form either. However, since we have now shown that the mean of PAM is FSBP, the EPPF and the expected number of clusters for FSBP in the previous section shed light on the average behavior of PAM. Specifically, the mean process of PAM induces more clusters on average than DP.
## 4 Posterior Inference
### Overview
Posterior inference under PAM utilizes a modified version of the efficient slice sampler proposed by Denti et al. (2021). A simpler approach based on the Chinese Restaurant Franchise (CRF) process in Teh et al. (2004), which can be applied for inference in HDP, unfortunately does not work for PAM due to the group-specific zero weights in the proposed ZAB construction. An alternative inference method for PAM could be the truncated blocked-Gibbs sampler in Rodriguez et al. (2008), which approximates the infinite mixture in equation (3) with a finite mixture. However, such an approximation can introduce errors in the inference (Denti et al., 2021; Rodriguez et al., 2008). The proposed slice sampler is illustrated for univariate observations (i.e., \(p=1\)) and can be easily extended to accommodate multivariate observations (i.e., \(p>1\)) or the DPAM model.
### Slice Sampler
By integrating out \(z_{i,j}\) in equation (4), we can rewrite the density function for \(y_{i,j}\) as an infinite mixture as
\[f(y_{i,j}|\mathbf{\Phi},\mathbf{\pi}_{j})=\sum_{k\geq 1}\pi_{j,k}\cdot p(y_{i,j}|\mathbf{ \phi}_{k}), \tag{8}\]
where \(\mathbf{\Phi}=\{\mathbf{\phi}_{k}\}_{k\geq 1}\). Following Kalli et al. (2011), we use a set of uniformly distributed random variables \(\mathbf{u}=\{u_{i,j}\}\) to separate the "active" mixture components from the other
"inactive" components, which will become clear next. By definition, each \(u_{i,j}\sim\text{Unif}(0,1)\). Additionally, we consider \(J\) deterministic probabilities \(\mathbf{\xi}_{j}=\{\xi_{j,k}\}_{k\geq 1}\) for a fixed \(j\), where \(\xi_{j,k}\equiv\xi_{k}=(1-\zeta)\zeta^{k-1}\) and \(\zeta\in(0,1)\) is a fixed parameter with a default value of \(0.5\), and \(\mathbf{\xi}_{j}\equiv\mathbf{\xi}=\{\xi_{k}\}_{k\geq 1}\). A more complicated construction may allow different \(\zeta_{j}\) for different groups \(j\), which we do not consider here. As a result, the augmented likelihood for observation \(y_{i,j}\) can be expressed as:
\[f_{\mathbf{\xi}}(y_{i,j},u_{i,j}|\mathbf{\Phi},\mathbf{\pi}_{j})=\sum_{k\geq 1}1_{\{u_{i,j} <\xi_{k}\}}\frac{\pi_{j,k}}{\xi_{k}}p(y_{i,j}|\mathbf{\phi}_{k}) \tag{9}\]
Integrating with respect to \(u_{i,j}\) returns \(f(y_{i,j}|\mathbf{\Phi},\mathbf{\pi}_{j})\) in (8). Now adding the cluster indicator \(z_{i,j}\) in (4), we express (9) as
\[f_{\mathbf{\xi}}(y_{i,j},u_{i,j}|z_{i,j},\mathbf{\Phi},\mathbf{\pi}_{j})=\sum_{k\geq 1}1_{ \{z_{i,j}=k\}}1_{\{u_{i,j}<\xi_{z_{i,j}}\}}\frac{\pi_{j,z_{i,j}}}{\xi_{z_{i,j} }}p(y_{i,j}|\mathbf{\phi}_{z_{i,j}}). \tag{10}\]
The proposed slice sampler follows a Gibbs-sampler style, in which it iteratively samples the following parameters,
1. \(u_{i,j}|\cdots\propto I(0<u_{i,j}<\xi_{z_{i,j}})\),
2. the stick-breaking weights \(\beta^{\prime}_{k}\), \(\pi^{\prime}_{j,k}\), and \(p_{j}\),
3. the indicator \(z_{i,j}\) with \(\Pr(z_{i,j}=k|\cdots)\propto 1_{\{u_{i,j}<\xi_{k}\}}\frac{\pi_{j,k}}{\xi_{k}}p(y_ {i,j}|\mathbf{\phi}_{k})\), and
4. the atom location parameter \(\mathbf{\phi}_{k}|\cdots\propto\prod_{z_{i,j}=k}N(y_{i,j}|\mathbf{\phi}_{k})p_{H}(\bm {\phi}_{k})\).
In the last step, since \(\mathbf{\phi}_{k}\sim H\), \(p_{H}(\mathbf{\phi}_{k})\) denotes the prior density of \(H\). The entire sampler is presented in Algorithm 1 next. Below we first describe the details of sampling \(\pi^{\prime}_{j,k}\) in step 2 above. The details of the entire slice sampler are in Appendix A.10.
In each iteration of the slice sampler, due to the introduction of latent uniform variates \(u_{i,j}\) and the truncation on \(\xi_{k}\), the infinite summation in equation (9) can be reduced to a
finite sum through "stochastic truncation". To see this, first notice that \(\{\xi_{k}\}\) is a descending sequence, and therefore only finitely many \(\xi_{k}\)'s can meet the condition \(1_{\{u_{i,j}<\xi_{k}\}}\). In other words, given \(\mathbf{u}\), there exists a \(K^{\prime}\geq 1\) such that \(\xi_{k}\leq\min_{i,j}(\mathbf{u})\) for all \(k\geq K^{\prime}\), so at most the first \(K^{*}=K^{\prime}-1\) of the \(\xi_{k}\)'s can exceed some \(u_{i,j}\). Then, noticing that \(\xi_{K^{\prime}}=(1-\zeta)\zeta^{K^{\prime}-1}\), we can easily show that
\[K^{*}=\bigg{\lfloor}\frac{\log(\min(\mathbf{u}))-\log(1-\zeta)}{\log(\zeta)}\bigg{\rfloor}. \tag{11}\]
Here, \(K^{*}\) is called the "stochastic truncation" in the slice sampler. Given \(K^{*}\), sampling \(\beta^{\prime}_{k}\) is straightforward but requires a Metropolis-Hastings (MH) step (See Appendix A.10 for details). To sample \(\pi^{\prime}_{j,k}\), again conditional on \(K^{*}\), let \(\mathbf{Z}_{j}=\{z_{i,j}\}_{i=1}^{n_{j}}\), \(m_{j,k}=\sum_{i=1}^{n_{j}}1(z_{i,j}=k)\), and refer to the stick-breaking representation. The full conditional distribution of \(\pi^{\prime}_{j,k}\) is given by
\[p(\pi^{\prime}_{j,k}|\cdots)=p(\pi^{\prime}_{j,k}|\mathbf{Z}_{j},\mathbf{\beta},p_{j}, \alpha_{0})\propto\left[(\pi^{\prime}_{j,k})^{m_{j,k}}(1-\pi^{\prime}_{j,k}) ^{\sum_{s=k+1}^{K^{*}}m_{j,s}}\right]f(\pi^{\prime}_{j,k})\]
where \(f(\pi^{\prime}_{j,k})\) is defined in equation (2). When \(m_{j,k}>0\), it means cluster \(k\) in group \(j\) is not empty, and therefore \(\pi^{\prime}_{j,k}\neq 0\) (otherwise, it would not be possible to have a non-empty cluster \(k\) in group \(j\)). Hence, the full conditional of \(\pi^{\prime}_{j,k}\) is
\[p(\pi^{\prime}_{j,k}|\cdots)=f_{\text{Beta}}\left(\alpha_{0}\beta_{k}+m_{j,k },\alpha_{0}\left(1-\sum_{l=1}^{k}\beta_{l}\right)+\sum_{s=k+1}^{K^{*}}m_{j,s }\right). \tag{12}\]
Recall that \(f_{\text{Beta}}(\cdot,\cdot)\) denotes a beta density. When \(m_{j,k}=0\), which could mean either that \(\pi^{\prime}_{j,k}=0\) or that \(\pi^{\prime}_{j,k}\neq 0\) but the atom is not sampled, we have
\[p(\pi^{\prime}_{j,k}|\cdots)\propto(1-\pi^{\prime}_{j,k})^{\sum_{s=k+1}^{K^{ *}}m_{j,s}}f(\pi^{\prime}_{j,k}).\]
This can be expressed as
\[p(\pi^{\prime}_{j,k}|\cdots)=p^{*}_{j}\times f_{\text{Beta}}\left(\alpha_{0} \beta_{k},\alpha_{0}\left(1-\sum_{l=1}^{k}\beta_{l}\right)+\sum_{s=k+1}^{K^{*}} m_{j,s}\right)+(1-p^{*}_{j})\times I(\pi^{\prime}_{j,k}=0) \tag{13}\]
where
\[p^{*}_{j}=\frac{p_{j}}{p_{j}+(1-p_{j})\times\frac{B\left(\alpha_{0}\beta_{k}, \alpha_{0}\left(1-\sum_{l=1}^{k}\beta_{l}\right)\right)}{B\left(\alpha_{0} \beta_{k},\alpha_{0}\left(1-\sum_{l=1}^{k}\beta_{l}\right)+\sum_{s=k+1}^{K^{* }}m_{j,s}\right)}}\]
and \(B(a,b)\) is the beta function.
Lastly, sampling \(p_{j}\) and the concentration parameters follows standard MCMC simulation (Escobar and West, 1995); details are provided in Appendix A.10.
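For concreteness, the following Python sketch implements the stochastic truncation of Eq. (11) and one Gibbs update of \(\pi^{\prime}_{j,1:K^{*}}\) from the ZAB full conditionals (12)-(13). The function names are ours, and the state passed in (cluster counts \(m_{j}\), top-level weights \(\beta\), \(p_{j}\), \(\alpha_{0}\)) is assumed to come from the current MCMC iteration:

```python
import numpy as np
from scipy.special import betaln

def truncation_level(u, zeta=0.5):
    """Stochastic truncation K* of Eq. (11), given all slice variables u."""
    return int(np.floor((np.log(np.min(u)) - np.log(1.0 - zeta)) / np.log(zeta)))

def sample_pi_prime(m_j, beta, p_j, alpha0, rng):
    """One Gibbs update of pi'_{j,k}, k = 1, ..., K*, from Eqs. (12)-(13).

    m_j[k] counts the observations in group j currently assigned to cluster k + 1.
    """
    m_j = np.asarray(m_j)
    K = len(m_j)
    pi_prime = np.zeros(K)
    for k in range(K):
        a_k = alpha0 * beta[k]
        b_k = alpha0 * max(1.0 - beta[: k + 1].sum(), 1e-10)
        tail = m_j[k + 1:].sum()
        if m_j[k] > 0:
            # Occupied cluster: the weight cannot be zero, Eq. (12)
            pi_prime[k] = rng.beta(a_k + m_j[k], b_k + tail)
        else:
            # Empty cluster: spike-or-slab draw, Eq. (13)
            log_ratio = betaln(a_k, b_k) - betaln(a_k, b_k + tail)
            p_star = p_j / (p_j + (1.0 - p_j) * np.exp(log_ratio))
            if rng.uniform() < p_star:
                pi_prime[k] = rng.beta(a_k, b_k + tail)
    return pi_prime
```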
**Additional step for count data.** Finally, for DPAM an additional step is added to update the latent continuous variable. Denoting \(\text{TN}(\mu,\sigma^{2};a,b)\) the truncated normal distribution with mean \(\mu\), variance \(\sigma^{2}\), and boundaries \(a\) and \(b\), the full conditional distribution of \(y_{i,j}\) is
\[y_{i,j}|\cdots\sim\text{TN}(\mu_{z_{i,j}},\sigma^{2}_{z_{i,j}};a_{x_{i,j}},a_{ x_{i,j}+1}). \tag{14}\]
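In practice, the truncated normal draw in Eq. (14) can be performed with standard software; a minimal Python sketch using scipy.stats.truncnorm (which parameterizes the bounds on the standardized scale) is:

```python
import numpy as np
from scipy.stats import truncnorm

def sample_latent(x, mu, sigma, rng):
    """Draw y | x from TN(mu, sigma^2; a_x, a_{x+1}) as in Eq. (14)."""
    lo = -np.inf if x == 0 else float(x - 1)   # a_x with thresholds {-inf, 0, 1, 2, ...}
    hi = float(x)                              # a_{x+1}
    a, b = (lo - mu) / sigma, (hi - mu) / sigma
    return truncnorm.rvs(a, b, loc=mu, scale=sigma, random_state=rng)

rng = np.random.default_rng(5)
print(sample_latent(x=3, mu=2.0, sigma=1.0, rng=rng))  # a value in [2, 3)
```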
**Computation Algorithm.** Algorithm 1 introduces the proposed slice sampler. For multivariate observations, step 9 of Algorithm 1 can be replaced with a conjugate NIW prior, and a multivariate normal can be used for \(p(y_{i,j}|\boldsymbol{\phi}_{k})\) in step 8. On the other hand, the extension to DPAM can be achieved by adding steps to sample the latent \(y_{i,j}\) according to equation (14) after step 7, and by modifying the likelihood \(p(y_{i,j}|\boldsymbol{\phi}_{k})\) in step 8 with
\[p(x_{i,j}|\boldsymbol{\phi}_{k})=\Delta\Phi(a_{x_{i,j}}|\boldsymbol{\phi}_{k}) =\Phi(a_{x_{i,j}+1}|\boldsymbol{\phi}_{k})-\Phi(a_{x_{i,j}}|\boldsymbol{\phi} _{k}),\]
where \(\Phi(\cdot)\) denotes the cumulative distribution function (c.d.f) of the Gaussian distribution.
**Label Switching.** As PAM involves an infinite mixture model, label switching can arise in the MCMC samples (Papastamoulis, 2015). To address this, we use the Equivalence Classes Representatives (ECR) algorithm described in Papastamoulis and Iliopoulos (2010). Details of handling label switching with the ECR method are in Appendix A.10.
### Inference on Clusters
Like all BNP models, PAM produces random clusters and their associated posterior distributions. For a specific application, it is often desirable to report the common and unique clusters across groups. We discuss the corresponding inference under PAM next. We consider two approaches.
The first approach is through the MCMC samples of the label matrix \(\mathbf{Z}^{(m)}=\{z_{i,j}^{(m)}\}\). For the \(m\)th MCMC iteration, vector \(\mathbf{z}_{j}^{(m)}=\{z_{1,j}^{(m)},\ldots,z_{n_{j},j}^{(m)}\}\) induces \(k_{j}^{(m)}\) clusters. Let
\(\mathbf{t}_{j}^{(m)}=\{t_{1}^{(m)},\ldots,t_{k_{j}}^{(m)}\}\) denote the labels of these clusters, which are the unique values of \(\mathbf{z}_{j}^{(m)}\). Then the set and number of common clusters between groups \(j\) and \(j^{\prime}\) are given by \(\mathbf{t}_{j}\cap\mathbf{t}_{j^{\prime}}\) and its cardinality, respectively, and the set and number of unique clusters for group \(j\) are given by \(\mathbf{t}_{j}\ mod\ \mathbf{Z}\backslash\mathbf{z}_{j}\) and its cardinality, respectively. Here, the operation \(A\ mod\ B\) for two sets \(A\) and \(B\) is defined as the set of unique elements in \(A\) but not in \(B\), and \(\mathbf{Z}\backslash\mathbf{z}_{j}\) denotes the set obtained after removing \(\mathbf{z}_{j}\) from \(\mathbf{Z}\).
The second approach to summarize common and unique clusters is to use the posterior sample of the group-specific weights \(\mathbf{\pi}_{j}^{(m)}\), \(j=1,\ldots,J\). Specifically,
\[\begin{split} n_{\text{comm}}(\{\mathbf{\pi}_{j}^{(m)},\mathbf{\pi}_{j^{ \prime}}^{(m)}\})=\sum_{k=1}^{|\mathbf{\pi}_{j}^{(m)}|}1(\pi_{j,k}^{(m)}\neq 0\text{ and }\pi_{j^{\prime},k}^{(m)}\neq 0),\\ n_{\text{uniq}}(\mathbf{\pi}_{j}^{(m)})=\sum_{k=1}^{|\mathbf{\pi}_{j}^{(m )}|}1\left(\pi_{j,k}^{(m)}\neq 0\text{ and }\sum_{j^{\prime}=\{1,\cdots,j-1,j+1,\cdots,J\}}\pi_{j^{\prime},k}^{(m)}=0 \right)\end{split} \tag{15}\]
where \(|\cdot|\) denotes the cardinality of the corresponding vector. Thus, the weight approach is able to learn the same information as the \(\mathbf{Z}\) matrix method.
The above two approaches generate the same values of \(n_{\text{comm}}\) and \(n_{\text{uniq}}\) for each MCMC sample.
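The weight-based summaries in Eq. (15) are straightforward to compute from a posterior draw; a small Python sketch (function name ours; `pi` is the \(J\times K\) matrix of group weights from one MCMC iteration) is:

```python
import numpy as np

def common_unique(pi):
    """Per-MCMC-draw summaries of Eq. (15) from the J x K weight matrix pi."""
    active = pi > 0.0                          # which atoms each group retains
    J = active.shape[0]
    n_comm = {(j, jp): int(np.sum(active[j] & active[jp]))
              for j in range(J) for jp in range(j + 1, J)}
    n_uniq = [int(np.sum(active[j] & ~np.delete(active, j, axis=0).any(axis=0)))
              for j in range(J)]
    return n_comm, n_uniq

pi = np.array([[0.5, 0.3, 0.2, 0.0],
               [0.6, 0.0, 0.4, 0.0],
               [0.0, 0.0, 0.7, 0.3]])
print(common_unique(pi))   # e.g., atom 4 is unique to group 3
```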
To produce a point estimate of the clustering result, we follow the approach in Wade and Ghahramani (2018) to estimate an optimal partition through a decision-theoretic approach that minimizes the variation of information (Meila, 2007). This optimal partition is then used as a "point estimate" of the random clusters obtained from PAM posterior inference.
Simulation Study
### Simulation Setup
We test the performance of PAM through simulated univariate and multivariate data. We generate univariate observations based on scenario one in Denti et al. (2021), which assumes a finite mixture of Gaussian distributions. The second scenario assumes three groups of multivariate observations, with \(p=3\) and \(J=3\), where there is a combination of common and unique clusters among the groups.
**Scenario 1 - Univariate data:** Consider \(J=6\) groups. For group \(j\), the observations follow a mixture of normal distributions
\[f(y_{i,j})\propto\sum_{g=1}^{j}\frac{1}{g}N(m_{g},\sigma^{2}),\quad i=1,\ldots, n_{j},\]
where \(m_{g}\in\{0,5,10,13,16,20\}\), \(\sigma^{2}=0.6\), and \(j=1,\cdots,6\). Therefore, there are \(j\) true clusters in group \(j\) defined by the \(j\) normals in \(f(y_{i,j})\), with only the first cluster \(N(m_{1},\sigma^{2})\) shared across all six groups. We test two sub-cases: setting the number of observations in each group to \(n_{j}=n_{A}\), where \(n_{A}\in\{50,100,150\}\), or setting \(n_{j}=n_{B}\times j\), where \(n_{B}\in\{10,20,40\}\).
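For reproducibility, Scenario 1 data can be generated with the following Python sketch (the seed and function name are our choices):

```python
import numpy as np

rng = np.random.default_rng(6)

def scenario_one(n_per_group=150, sigma2=0.6):
    """Generate the J = 6 groups of Scenario 1: group j mixes the first j
    normal components with weights proportional to 1/g."""
    m = np.array([0.0, 5.0, 10.0, 13.0, 16.0, 20.0])
    data = []
    for j in range(1, 7):
        w = 1.0 / np.arange(1, j + 1)
        w /= w.sum()                                   # normalize the 1/g weights
        comp = rng.choice(j, size=n_per_group, p=w)
        data.append(rng.normal(m[comp], np.sqrt(sigma2)))
    return data

data = scenario_one()
print([len(y) for y in data])
```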
**Scenario 2 - Multivariate data:** This scenario assumes \(p=3\), \(J=3\) groups, and a total of five clusters. Observations are generated from a mixture of multivariate Gaussian distributions, with the mean and cluster weights shown in Table A.2 in Appendix A.11. Group 1 possesses all the clusters, group 2 has clusters 1, 3 and 4, and group 3 has clusters 2 and 3. The true covariance matrix is assumed to be the identity matrix, and we assume all groups have the same sample size, with \(n_{j}=n\), where \(n\in\{50,100,200\}\).
For the purpose of benchmarking, we compare the performance of the proposed PAM model with HDP and CAM. We assess the models' performance based on three criteria: 1)
the number of predicted clusters using the estimated optimal partition, with a value closer to the ground truth indicating better estimation, 2) the adjusted Rand index (ARI) (Hubert and Arabie, 1985) between the estimated optimal partition and the ground truth, with a value closer to 1 indicating better performance, and 3) the normalized Frobenius distance (NFD) (Horn and Johnson, 1990) between the estimated posterior pairwise co-clustering matrices and the true co-clustering structure, with a value closer to 0 indicating better performance. These metrics have been routinely adopted in the literature, e.g., in Denti et al. (2021).
### Simulation Results
We adopt standard prior settings for the hyperparameters in Equation (3). Specifically, we use the NIG distribution as the base measure \(H\), with hyperparameters \(\mu_{0}=0\), \(\kappa_{0}=0.1\), \(\alpha_{0}=3\) and \(\beta_{0}=1\). We use the Jeffreys prior for the \(p_{j}\)'s, i.e., \(a=b=0.5\). Lastly, we set \(a_{\alpha_{0}}=b_{\alpha_{0}}=a_{\gamma}=b_{\gamma}=3\) for the gamma priors of the concentration parameters \(\alpha_{0}\) and \(\gamma\). We collect an MCMC sample of 10,000 iterations after 10,000 iterations of burn-in. The Markov chains mix well.
**Scenario One.** We generate 30 datasets for each sample size in sub-cases one and two, and apply the three clustering methods (the proposed PAM, CAM, and HDP) to these simulated data. We evaluate the performance of these methods based on three metrics: the total number of clusters, the ARI, and the NFD. The mean and standard deviation of each metric are reported in Table 1. The results demonstrate that PAM performs competitively with the other methods, especially when the sample size increases.
We also evaluate PAM's ability to identify common and unique clusters across groups. To do so, we randomly select one simulated dataset with a group sample size of 150 and present the data distribution for each group in Appendix A.11. We use group 6 as a reference since
it contains all six clusters and investigate the number of common clusters between group 6 and the other groups, as well as the number of common clusters across all groups. Table A.3 in Appendix A.11 shows the results, and it appears that PAM is capable of capturing the unique cluster in group 6. Overall, all models perform similarly on this dataset.
Figure A.2 in Appendix A.11 reports the posterior distributions of the number of clusters in each group, the number of common clusters for group 6, and the number of unique clusters, according to the inference described in Section 4.3. The red vertical lines indicate the ground truth. Overall, the results look reasonable, especially on the common and unique clusters.
**Scenario Two.** For the multivariate data, we use the following prior settings for the hyperparameters in (3). The NIW distribution is used as the base measure \(H\), with hyperparameters \(\mathbf{\mu}_{0}=\mathbf{0}=\{0,0,0\}\), \(\kappa_{0}=0.1\), \(\nu_{0}=4\) and \(\mathbf{\Psi}=I_{3}\), where \(I_{3}\) is the \(3\times 3\) identity matrix. Similar to the univariate case, we use the Jeffreys prior for \(p_{j}\). We also set \(a_{\alpha_{0}}=b_{\alpha_{0}}=a_{\gamma}=b_{\gamma}=3\) in the gamma priors for the concentration parameters \(\alpha_{0}\) and \(\gamma\). For simplicity, we only report the model accuracy on the number of clusters, ARI, and
\begin{table}
\begin{tabular}{c|c|c c c c c c} \hline \hline Metrics & Methods & \(n_{A}=50\) & \(n_{A}=100\) & \(n_{A}=150\) & \(n_{B}=10\) & \(n_{B}=20\) & \(n_{B}=40\) \\ \hline \multirow{3}{*}{Number of clusters} & CAM & 4.03 (0.49) & 4.67 (0.61) & 4.97 (0.49) & 4.17 (0.75) & 4.40 (0.56) & 5.43 (0.50) \\ & HDP & 3.93 (0.53) & 4.00 (0.59) & 4.27 (0.58) & 4.23 (0.50) & 4.30 (0.65) & 4.33 (0.61) \\ & PAM & 4.93 (0.87) & 5.67 (0.71) & 5.97 (0.62) & 5.77 (0.82) & 6.00 (0.59) & 6.17 (0.38) \\ \hline \multirow{3}{*}{ARI} & CAM & 0.90 (0.05) & 0.93 (0.04) & 0.95 (0.02) & 0.79 (0.08) & 0.83 (0.08) & 0.93 (0.05) \\ & HDP & 0.87 (0.05) & 0.87 (0.05) & 0.90 (0.04) & 0.76 (0.08) & 0.78 (0.09) & 0.82 (0.07) \\ & PAM & 0.87 (0.07) & 0.91 (0.04) & 0.95 (0.03) & 0.73 (0.06) & 0.82 (0.06) & 0.94 (0.05) \\ \hline \multirow{3}{*}{NFD} & CAM & 0.07 (0.03) & 0.07 (0.02) & 0.07 (0.02) & 0.08 (0.02) & 0.08 (0.02) & 0.05 (0.02) \\ & HDP & 0.04 (0.02) & 0.04 (0.02) & 0.03 (0.01) & 0.07 (0.03) & 0.06 (0.03) & 0.05 (0.02) \\ \cline{1-1} & PAM & 0.06 (0.02) & 0.04 (0.02) & 0.02 (0.01) & 0.09 (0.02) & 0.06 (0.02) & 0.02 (0.01) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Simulated univariate data. Clustering performance for CAM, HDP, and PAM evaluated according to the number of total detected clusters (truth = 6 clusters) based on the estimated optimal clustering, the Adjusted Rand Index (ARI), and the normalized Frobenius distance (NFD). The entries are Mean (SD) over 30 datasets.
NFD for this simulation. We generate 30 datasets for each sample size, and we summarize the results in Table 2.
The results indicate that all three methods have high accuracy in the multivariate data simulation. PAM performs competitively with the other two methods in terms of the ARI and NFD metrics when the sample size is large (\(n\geq 100\)).
## 6 Case Studies
In this section, we apply the proposed PAM method to two real-life datasets: a microbiome dataset that studies the microbial distributions of African Americans and rural Africans (O'Keefe et al., 2015), and a warts dataset on treating patients with warts using immunotherapy or cryotherapy. The former example demonstrates the use of the DPAM model for count data, while the latter shows the application of PAM to multivariate observations.
\begin{table}
\begin{tabular}{l|c|c c c} \hline \hline Metrics & Methods & \(n=50\) & \(n=100\) & \(n=200\) \\ \hline \multirow{2}{*}{Number clusters} & CAM & 5.40 (1.13) & 5.37 (0.71) & 5.04 (0.19) \\ & HDP & 5.50 (0.97) & 4.90 (0.76) & 4.93 (0.47) \\ & PAM & 4.93 (0.64) & 5.07 (0.26) & 5.03 (0.18) \\ \hline \multirow{2}{*}{ARI} & CAM & 0.90 (0.06) & 0.95 (0.04) & 0.97 (0.01) \\ & HDP & 0.86 (0.11) & 0.91 (0.07) & 0.96 (0.02) \\ & PAM & 0.89 (0.08) & 0.96 (0.02) & 0.97 (0.01) \\ \hline \multirow{2}{*}{NFD} & CAM & 0.05 (0.03) & 0.04 (0.02) & 0.03 (0.02) \\ & HDP & 0.05 (0.02) & 0.04 (0.02) & 0.01 (0.01) \\ \cline{1-1} & PAM & 0.03 (0.03) & 0.01 (0.01) & 0.01 (0.00) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Simulated multivariate data. Clustering performance for CAM, HDP, and PAM evaluated according to the number of total detected clusters (truth = 5 clusters) based on the estimated optimal clustering, the Adjusted Rand Index (ARI), and the normalized Frobenius distance (NFD). The entries are Mean (SD) over 30 datasets.
### Microbiome Dataset
We begin by applying the DPAM model to the microbiome dataset, which was also analyzed by Denti et al. (2021). This dataset, reported by O'Keefe et al. (2015), contains information on microbiota abundance for 38 healthy middle-aged African Americans (AA) and rural Africans (AF). The study aimed to investigate the effect of a diet swap between AF and AA individuals, as the traditional foods of these populations differ. The 38 study participants were instructed to follow their characteristic diet, such as a low-fat and high-fiber diet for AF and a high-fat and low-fiber diet for AA, for two weeks, and then swap diets for another two weeks. We focus on the data obtained before the diet swap, and cluster the subjects' counts of operational taxonomic units (OTUs), i.e., clustered phylotypes based on the taxonomic classification of microbial species, obtained at the beginning of the experiment. The reported data are in the form of OTU counts (i.e., OTU expression) that record the numbers of recurrences of the corresponding OTUs in a particular ecosystem (Jovel et al., 2016; Kaul et al., 2017). For more background, refer to O'Keefe et al. (2015) and Section 4 of Denti et al. (2021). Hereafter, we use the terms "expression" and "counts" interchangeably in this application.
In this dataset, each individual (AA or AF) is treated as a group, and the OTU counts are treated as the observations within each group. Following the same data-preprocessing steps as in Denti et al. (2021), we obtain 38 subjects (17 AF and 21 AA) with 119 OTUs. Note that all the OTUs are the same, so technically each group will never possess unique clusters of OTUs. In this application, by unique clusters, we mean unique expression of OTUs. In other words, we are clustering the counts of the OTUs, not the OTUs themselves. For illustrative purposes, we randomly select four subjects (i.e., four groups), two AFs (with IDs 5 and 22) and two AAs (with IDs 13 and 14). We remove the OTUs that had zero expression in all four individuals from the selected data. In the end, we obtain a dataset with \(J=4\) individuals (groups) and \(n_{j}=109\) OTUs (observations). The histograms of the
microbiome populations of the four selected individuals are shown in Appendix A.12.
For inference, similar to Denti et al. (2021), we incorporate the average OTU frequency for subject \(j\), denoted as \(\eta_{j}=\frac{1}{n_{j}}\sum_{i=1}^{n_{j}}x_{i,j}\), as a scaling factor in the latent variable \(y_{i,j}\) of the DPAM model. This leads to the following distribution:
\[y_{i,j}|\mathbf{Z},\mathbf{\mu},\mathbf{\sigma}^{2}\sim N(\eta_{j}\mu_{z_{i,j}},\eta_{j}^{2 }\sigma_{z_{i,j}}^{2})\leftrightarrow\frac{y_{i,j}}{\eta_{j}}|\mathbf{Z},\mathbf{\mu}, \mathbf{\sigma}^{2}\sim N(\mu_{z_{i,j}},\sigma_{z_{i,j}}^{2}) \tag{16}\]
The prior hyperparameters follow the same settings as in scenario one of the simulation study, and we present the results with the optimal partition in Table 3.
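As a concrete illustration of this scaling step, the factor \(\eta_{j}\) and the rescaled representation of Eq. (16) can be computed as in the Python sketch below. The function name and array layout are our own choices, not taken from the authors' code, and in the actual model the scaling enters through the latent variable \(y_{i,j}\) rather than the raw counts.

```python
import numpy as np

def scale_counts(counts):
    """Compute the per-subject scaling factors eta_j of Eq. (16) and the
    rescaled values counts/eta_j, which the DPAM model treats as Gaussian.

    counts: array of shape (n_j, J), one column of OTU counts per subject.
    """
    eta = counts.mean(axis=0)  # eta_j = (1/n_j) * sum_i x_ij
    return counts / eta, eta

# toy usage: 109 OTUs for J = 4 individuals
rng = np.random.default_rng(0)
x = rng.poisson(lam=5.0, size=(109, 4)).astype(float)
y, eta = scale_counts(x)
print(eta)  # average OTU frequency of each individual
```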
PAM reports a total of eight estimated clusters across the four individuals: Clusters 1 and 2 are shared by all four individuals, cluster 7 is shared among individuals 5 (AF), 13 (AA), and 14 (AA), and cluster 8 is shared among individuals 5 and 22 (both from AF). The other clusters are unique to a specific individual. Based on the optimal partition of OTUs, we plot the taxa counts (TC) of OTUs grouped by all eight estimated clusters as well as by both clusters and individuals in Figure 1. Note that for easy demonstration of clusters across individuals, we have manually reordered the clusters in ascending order based on the cluster mean.
\begin{table}
\begin{tabular}{c c|c c c c c c c c} \hline \hline & & Cluster 1 & Cluster 2 & Cluster 3 & Cluster 4 & Cluster 5 & Cluster 6 & Cluster 7 & Cluster 8 \\ \hline \multicolumn{2}{c|}{Location} & 0.07(0.01) & 0.53(0.04) & 1.75(0.20) & 1.50(0.26) & 2.21(0.27) & 3.73(0.36) & 9.89(1.21) & 74.21(8.99) \\ \hline \multirow{4}{*}{Weights} & ID 5 & 0.56 & 0.26 & 0.11 & 0.00 & 0.00 & 0.00 & 0.05 & 0.02 \\ & ID 22 & 0.84 & 0.12 & 0.00 & 0.00 & 0.00 & 0.02 & 0.00 & 0.02 \\ & ID 13 & 0.77 & 0.11 & 0.00 & 0.00 & 0.10 & 0.00 & 0.02 & 0.00 \\ & ID 14 & 0.74 & 0.10 & 0.00 & 0.11 & 0.00 & 0.00 & 0.05 & 0.00 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Posterior means of the atom locations \(\mathbf{\mu}\) and the cluster weights. Each entry in the “Location” row reports the posterior mean \(\mu\) with its standard deviation \(\sigma\) in parentheses. Notice that the mean and SD are not scaled by \(\eta_{j}\).
We report an interesting finding related to the PAM clustering of OTUs. Specifically, OTU _Prevotella melaninogenica_ is in cluster 8, which has the highest expression and is shared (both the cluster and the OTU) only by AF individuals 5 and 22. This finding is consistent with previous studies that have shown that the individuals with a predominance of _Prevotella spp._ are more likely to consume fiber, which is a typical component of an African diet (Graf et al., 2015; Preda et al., 2019).
We also applied DPAM to all 38 individuals and present the number of common clusters between each pair of individuals in a heatmap format in Figure A.4 in Appendix A.12. The heatmap uses a red color to indicate a higher number of common clusters shared by both individuals, while a white color indicates fewer common clusters. The results suggest that the individuals can be roughly divided into two clusters based on the heatmap, with individual 30 serving as the separating point. The cluster on the bottom left of the heatmap consists of 13 AF and eight AA individuals, while the top-right cluster contains 13 AAs and four AFs. In other words, one cluster is mostly composed of AFs,
Figure 1: Boxplots of microbiome abundance counts stratified by clusters (a) and by both clusters and individuals (b).
while the other is dominated by AAs. A hierarchical clustering of the heatmap confirms the division of the individuals into two distinct groups.
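The pairwise comparison behind the heatmap can be reproduced with a few lines of code. The sketch below is our own (with illustrative data, not the study's partition): it counts the clusters shared by each pair of individuals under the estimated optimal partition and then applies hierarchical clustering to the resulting matrix.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def shared_cluster_matrix(labels):
    """labels[j] holds the global cluster ids assigned to individual j's
    OTU counts; entry (a, b) counts the clusters shared by a and b."""
    sets = [set(l) for l in labels]
    J = len(sets)
    return np.array([[len(sets[a] & sets[b]) for b in range(J)]
                     for a in range(J)])

# toy partition for J = 4 individuals drawn from 8 global clusters
toy = [np.array([1, 2, 3, 7, 8]), np.array([1, 2, 6, 8]),
       np.array([1, 2, 5, 7]), np.array([1, 2, 4, 7])]
M = shared_cluster_matrix(toy)
Z = linkage(M.astype(float), method="average")  # hierarchical clustering
print(fcluster(Z, t=2, criterion="maxclust"))   # split into two groups
```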
### Warts Dataset
In this example, we consider a publicly available dataset on warts, which includes patients treated with one of two options: immunotherapy or cryotherapy. Each treatment group contains medical records for 90 patients, and for each patient, six baseline characteristics (covariates) are reported, including the patient's gender, age (Age), time elapsed before treatment (Time), the number of warts (NW), the type of warts (1-common, 2-plantar, 3-both), and the surface area of warts in mm\({}^{2}\) (Area). Additionally, patients' responses to the corresponding treatments are also recorded.
To better understand potential differences between responders to the two treatments, we use PAM to cluster the covariate values of the responders. We use each treatment group as a separate group in PAM, with 71 responders in the immunotherapy group and 48 in the cryotherapy group. Therefore, \(J=2\). We exclude the binary covariate "gender" and the multinomial covariate "type of warts" from the analysis. Additionally, we treat the number of warts as a continuous variable. As a result, the final data set includes four covariates: Age, Time, NW, and Area. We set the hyperparameters of the priors to follow the same settings as in scenario two of the simulation, and the results are summarized below.
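Before turning to those results, the preprocessing just described can be sketched as follows; the column names and the `Response` indicator are our assumptions about the file layout rather than the actual variable names in the published data.

```python
import pandas as pd

def prepare_group(df):
    """Keep responders and the four covariates clustered by PAM; the
    number of warts (NW) is treated as continuous."""
    resp = df[df["Response"] == 1]
    return resp[["Age", "Time", "NW", "Area"]].astype(float).to_numpy()

# toy usage; in the application J = 2 with 71 and 48 responders
toy = pd.DataFrame({"Age": [18, 30], "Time": [6.2, 4.1], "NW": [2, 7],
                    "Area": [70.0, 40.5], "Response": [1, 0]})
print(prepare_group(toy))  # one responder, four covariates
```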
PAM identifies a total of seven clusters based on the optimal estimated clustering. Three of these clusters are common between the immunotherapy and cryotherapy groups, while the other four clusters are unique to either group. We summarize the posterior means of the four covariates and the weights for each of the seven clusters in Table 4. The table reveals that, among all responders, individuals with younger age, a short time elapsed before treatment (less than five months), and a small surface area of warts form unique clusters in the cryotherapy group. On the other hand, those who were not treated for a longer
time and had a large surface area of warts (over 300 mm\({}^{2}\)) form distinct clusters in the immunotherapy group. Furthermore, it seems that the number of warts does not provide much information in determining a better treatment option for warts patients.
These findings are consistent with results from previously published studies. For instance, Khozeimeh et al. (2017) found that patients younger than 24 years old showed a better response to cryotherapy, and patients who received cryotherapy within six months had a very high probability of being cured. This is consistent with the information implied by clusters 6 and 7, which are unique to the cryotherapy group. Moreover, the same authors (Khozeimeh et al., 2017) developed an expert system with fuzzy rules, and one such rule for immunotherapy is “If (type of wart is Plantar) and (time elapsed before treatment is VeryLate) then (response to treatment is Yes).” In this expert system, a time elapsed before treatment longer than six months is considered “VeryLate”. This rule echoes the common and unique clusters for the immunotherapy group found by PAM. In the unique clusters 4 and 5, and the common clusters 1 to 3, the time before treatment was 6.96, 7.38, 6.19, 6.71, and 8.63 months, respectively, all larger than six months.
Additional results are illustrated in Figure A.5 in Appendix A.13, which shows the cluster membership of each patient. The figure indicates that patients with a large area of warts are unique to the immunotherapy group, while those with a younger age are mostly
\begin{table}
\begin{tabular}{l c|c c c|c c|c c} \hline \hline & & \multicolumn{3}{c|}{Common} & \multicolumn{2}{c|}{Unique in Immunotherapy} & \multicolumn{2}{c}{Unique in Cryotherapy} \\ & & Cluster 1 & Cluster 2 & Cluster 3 & Cluster 4 & Cluster 5 & Cluster 6 & Cluster 7 \\ \hline \multirow{4}{*}{Mean} & Age & 18.53 & 31.66 & 23.68 & 27.36 & 19.64 & 24.51 & 16.55 \\ & Time & 6.19 & 6.71 & 8.63 & 6.96 & 7.38 & 4.41 & 3.80 \\ & NW & 2.44 & 7.13 & 8.44 & 2.75 & 7.98 & 7.54 & 4.28 \\ & Area & 68.41 & 40.82 & 195.16 & 389.20 & 312.65 & 87.78 & 6.41 \\ \hline \multirow{2}{*}{Weight} & Immunotherapy & 0.15 & 0.68 & 0.02 & 0.12 & 0.03 & 0.00 & 0.00 \\ & Cryotherapy & 0.31 & 0.17 & 0.06 & 0.00 & 0.00 & 0.36 & 0.10 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Posterior means of atom locations and atom weights for the inferred seven clusters. “Age” refers to the patient’s age, “Time” refers to time elapsed before treatment, “NW” refers to number of warts, and “Area” refers to the surface area of warts of the patient.
from the cryotherapy group.
## 7 Discussion
We have introduced a novel Bayesian nonparametric model called PAM that induces dependent clusters across groups and has an FSBP as its mean process. This model allows the weights of clusters to be exactly zero in some groups, effectively removing these clusters from those groups and generating an interpretable clustering structure. In simulation studies, PAM demonstrated competitive performance, and in the two case studies, it produced sensible results. Our methodology accommodates count data and multivariate observations and follows the efficient slice sampler for CAM, with substantial modifications due to the use of ZAB in our construction.
There are some limitations to our current work. Firstly, the model is unable to cluster groups (i.e., distributional clusters), unlike NDP and CAM. However, we are currently working on a separate model that extends PAM to cluster nested data at both group and observational levels. Secondly, the model has not been applied to real datasets consisting of different types of covariates, such as binary and multinomial covariates. Finally, considering longitudinal data is another interesting direction for extending the model.
|
2303.01440 | Programmatic Imitation Learning from Unlabeled and Noisy Demonstrations | Imitation Learning (IL) is a promising paradigm for teaching robots to
perform novel tasks using demonstrations. Most existing approaches for IL
utilize neural networks (NN), however, these methods suffer from several
well-known limitations: they 1) require large amounts of training data, 2) are
hard to interpret, and 3) are hard to repair and adapt. There is an emerging
interest in programmatic imitation learning (PIL), which offers significant
promise in addressing the above limitations. In PIL, the learned policy is
represented in a programming language, making it amenable to interpretation and
repair. However, state-of-the-art PIL algorithms assume access to action labels
and struggle to learn from noisy real-world demonstrations. In this paper, we
propose PLUNDER, a novel PIL algorithm that integrates a probabilistic program
synthesizer in an iterative Expectation-Maximization (EM) framework to address
these shortcomings. Unlike existing PIL approaches, PLUNDER synthesizes
probabilistic programmatic policies that are particularly well-suited for
modeling the uncertainties inherent in real-world demonstrations. Our approach
leverages an EM loop to simultaneously infer the missing action labels and the
most likely probabilistic policy. We benchmark PLUNDER against several
established IL techniques, and demonstrate its superiority across five
challenging imitation learning tasks under noise. PLUNDER policies achieve 95%
accuracy in matching the given demonstrations, outperforming the next best
baseline by 19%. Additionally, policies generated by PLUNDER successfully
complete the tasks 17% more frequently than the nearest baseline. | Jimmy Xin, Linus Zheng, Kia Rahmani, Jiayi Wei, Jarrett Holtz, Isil Dillig, Joydeep Biswas | 2023-03-02T17:57:28Z | http://arxiv.org/abs/2303.01440v4 | # Plunder: Probabilistic Program Synthesis for Learning from Unlabeled and Noisy Demonstrations
###### Abstract
Learning from demonstration (LfD) is a widely researched paradigm for teaching robots to perform novel tasks. LfD works particularly well with program synthesis since the resulting programmatic policy is data efficient, interpretable, and amenable to formal verification. However, existing synthesis approaches to LfD rely on precise and labeled demonstrations and are incapable of reasoning about the uncertainty inherent in human decision-making. In this paper, we propose PLUNDER, a new LfD approach that integrates a probabilistic program synthesizer in an expectation-maximization (EM) loop to overcome these limitations. PLUNDER only requires unlabeled low-level demonstrations of the intended task (e.g., remote-controlled motion trajectories), which liberates end-users from providing explicit labels and facilitates a more intuitive LfD experience. PLUNDER also generates a probabilistic policy that captures actuation errors and the uncertainties inherent in human decision making. Our experiments compare PLUNDER with state-of-the-art LfD techniques and demonstrate its advantages across different robotic tasks.
## I Introduction
Learning from demonstration (LfD) is a popular and intuitive approach for teaching robots to perform novel tasks [1]. Combining LfD with program synthesis methods [2] leads to an approach that is data-efficient, interpretable, and amenable to formal verification. However, existing LfD synthesis approaches assume that the user has manually labeled each time step with an action label [3, 4]. While these labels facilitate effective LfD, acquiring them can be costly and error-prone.
In this paper, we instead focus on learning from unlabeled demonstrations (LfUD), a setting that avoids manual labeling but is more challenging than LfD. Specifically, we aim to synthesize a probabilistic action selection policy (ASP) that produces high-level action labels using only low-level demonstrations given by the user (_e.g.,_ the steering and accelerator control values in a driving domain). Such probabilistic ASPs are useful for capturing the inherent uncertainties and complexities in real-world human demonstrations and are widely used in many robotic applications. Nevertheless, the absence of labels precludes the application of conventional program synthesis techniques in our setting.
To overcome this problem, we propose Plunder1, a novel synthesis technique that combines an expectation-maximization (EM) loop [5] with probabilistic program synthesis [6]. The high-level workflow of Plunder is shown in Figure 1. Our approach starts from a trivial ASP, \(\pi^{(0)}\), which switches the robot's high-level actions randomly, and iteratively improves it within an expectation-maximization formulation. In the Expectation (E) Step, we sample posterior action label sequences by combining the current ASP with low-level demonstrations. This sampling process is implemented using a particle filter and does not necessitate the collection of supplementary data. In the Maximization (M) Step, using a combination of syntactic enumeration and numerical optimization, we synthesize a new ASP to better match the sampled action labels. To enhance the scalability of the M step, we also propose an incremental synthesis technique that focuses its search on ASPs similar to the best ASP found in the previous iteration. This iterative process is repeated until an optimal ASP (\(\pi^{*}\)) that maximizes the likelihood of the provided demonstrations is obtained.
Footnote 1: Program Learning from Unlabeled Noisy Demonstrations for Robots
To evaluate our approach, we compare Plunder with baseline techniques on data collected from three simulated environments. Our baselines include state-of-the-art LfD techniques [3] as well as a behavioral cloning approach that trains a neural policy directly on the low-level demonstrations [7]. Our experimental results indicate that Plunder consistently outperforms all baselines. We also present case studies showing the advantages of Plunder, including resilience to noise, scalability to complex probabilistic programs, and interpretability to the end user.
In summary, this work makes the following contributions:
* We propose the novel idea of combining the EM algorithm with probabilistic program synthesis to tackle the LfUD problem and overcome demonstration noise.
* We introduce an incremental synthesis technique that leverages the EM paradigm to improve the scalability of the M step.
* We present Plunder ([https://github.com/ut-amrl/plunder](https://github.com/ut-amrl/plunder)), a reusable implementation of the proposed algorithm, and conduct empirical evaluations to demonstrate its efficacy in comparison to various alternative approaches.
Fig. 1: Workflow of Plunder
## II Overview
In this section, we will walk through the workflow of Plunder using a minimal one-dimensional autonomous driving example. More complex applications are presented and discussed in section IV.
Consider a scenario where a human expert wants to teach an autonomous vehicle to drive on a straight road with stop signs. The expert uses a remote control with a joystick to adjust the motor and brake inputs of the vehicle, controlling its acceleration rate at every point in time. In the provided demonstrations, the vehicle is initially stationary and then begins gradually accelerating until it reaches the user's desired velocity. The user then ensures that the vehicle decelerates and comes to a halt near the next stop sign.
Figure 2 displays 20 trajectories created by letting the user remotely control the vehicle using joysticks. The two plots show the velocity and acceleration of the vehicle between two stop signs. Due to the vehicle's hardware and mechanical constraints, the acceleration value of the vehicle cannot exceed \(a_{\max}=13\,\mathrm{m/s^{2}}\) or drop below \(a_{\min}=-20\,\mathrm{m/s^{2}}\), despite the user inputting acceleration or deceleration commands via the joystick. The presented trajectories are color-coded based on the user's intended vehicle behavior. In this example, the user's intention can be modeled as one of three high-level labels: the acceleration label (ACC), the constant velocity label (CON), and the deceleration label (DEC). However, high-level labels such as these are generally challenging to obtain directly. Hence we aim to infer them from only the low-level observations (e.g., acceleration and velocity) while simultaneously learning the ASP that models the user's high-level intention.
The trajectories in Figure 2 demonstrate significant variation in the user's decision-making regarding state transitions. For instance, one trajectory indicates an extra-conservative behavior with an earlier switch from acceleration to constant velocity, resulting in a much lower maximum velocity reached compared to other trajectories. Additionally, in Figure 2(b), all acceleration trajectories exhibit small but continuous variations attributed to the inherent noise in the vehicle's actuation module. Such variations and noise are prevalent in human demonstrations and are challenging to eliminate. However, as demonstrated in section IV, existing program synthesis techniques for LfD cannot handle such noisy and unlabeled demonstrations.
To address the challenges mentioned above, we propose Plunder, an incremental program synthesis algorithm that learns optimal ASPs from unlabeled and noisy demonstrations. The program synthesized by Plunder specifies a discrete-time transition system with nodes representing the high-level actions of the vehicle and edges showing the conditions for transitioning between those actions, shown as \(\phi_{h,h^{\prime}}\) for actions \(h\) and \(h^{\prime}\). Figure 3 depicts the transition system for the running example.
Plunder models this action selection policy as a _probabilistic program_[6], where the transition conditions are represented as probabilistic expressions. At each time step, if none of the transition conditions execute to true for the current state, then the system remains in that state.
The full syntax of probabilistic transition conditions for this example is shown in Figure 4. Note that in addition to standard Boolean constructs, the conditionals can also make use of the probabilistic coin-flip function, flp, where flp\((p)\) evaluates to true with probability \(p\) and false otherwise. Additionally, the S-shaped logistic function lgs can appear inside flp to approximate numerical inequalities. For instance, the deterministic inequality \(e>x_{0}\) can be relaxed into a probabilistic condition using flp\((\texttt{lgs}(e,x_{0},k))\), where \(\texttt{lgs}(e,x_{0},k)\) maps the difference between \(e\) and \(x_{0}\) to a value in the range \((0,1)\) in a smooth and continuous manner, and the parameter \(k\) controls the slope (also known as the growth rate) of the logistic curve. A larger \(k\) indicates less uncertainty in the expression. Time-varying values such as the robot's velocities and accelerations are represented as domain variables, and besides basic arithmetic functions, Plunder also allows the user to provide other domain-specific functions and constants.
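A minimal Python rendering of these constructs may help fix their semantics; the function names mirror the DSL, but the implementation is our reading of the text rather than the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

def lgs(e, x0, k):
    """Logistic relaxation of the inequality e > x0: maps e - x0 smoothly
    into (0, 1); a larger |k| means a sharper, more certain threshold."""
    return 1.0 / (1.0 + np.exp(-k * (e - x0)))

def flp(p):
    """Probabilistic coin flip: True with probability p, False otherwise."""
    return rng.random() < p

def step_asp(conditions, h, state):
    """One ASP step: fire the first outgoing transition of action h whose
    condition evaluates to true; otherwise remain in h."""
    for (h1, h2), cond in conditions.items():
        if h1 == h and cond(state):
            return h2
    return h

# relaxed form of "velocity exceeds v_max - 1": ~50% chance at the boundary
print(flp(lgs(9.0, 10.0 - 1.0, k=2.0)))
```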
In Figure 5, we show the results of four (non-consecutive) iterations of the EM loop, each containing the ASP found in that iteration and its corresponding high-level action sequence samples. Each action sequence is shown as a
Fig. 4: Syntax of probabilistic transition conditions. Top four rules are specific to the example in section II.
Fig. 3: The probabilistic ASP for the example in section II.
Fig. 2: Remote-controlled demonstrations of low-level vehicle states in the running example. Note the variations in transition boundaries and the motor actuation noise.
horizontal line, using the same color coding as before. Notice that because the synthesized ASP is probabilistic, the sequences have random variations. While each synthesized ASP contains a total of 6 transition conditions, we only show \(\phi_{\texttt{LOC},\texttt{CON}}\) and \(\phi_{\texttt{CON},\texttt{DEC}}\), which are the most relevant ones for this particular task. A comprehensive description of all iterations and synthesized programs can be accessed through the public repository for Plunder.
The EM loop begins with a simple initial ASP, \(\pi^{(0)}\), which uses \(\texttt{randSwitch}(0.1)\) for all transition conditions. This specifies that the vehicle has a \(10\%\) chance to randomly switch to another action and a \(90\%\) chance of repeating the last action. The algorithm then proceeds by alternating between the E Step and M Step, progressively refining the candidate program until the desired level of accuracy is reached in \(\pi^{(5)}\), which is returned as the final policy for the task in the running example.
In the final ASP \(\pi^{(5)}\), the transition condition \(\phi_{\texttt{LOC},\texttt{CON}}\) specifies that the vehicle should switch from acceleration to constant speed when its velocity approaches the recommended maximum velocity (\(v_{\max}\)). Similarly, \(\phi_{\texttt{CON},\texttt{DEC}}\) specifies that the vehicle should transition from constant speed to deceleration when the estimated stop distance given the current speed (distTrv) is close to the remaining distance to the next stop sign (\(d_{\text{stop}}\)).
Synthesizing programs such as \(\pi^{(5)}\) is a challenging task because the transition conditions can have complex structures with multiple numerical constants, functions, and logical disjunctions or conjunctions (which Plunder had to synthesize for other benchmarks, although they are not needed in the running example; see section IV for more complex examples). The synthesizer we used for the M step of Plunder uses a combination of enumerative search with pruning techniques and numerical optimization to identify an action selection policy that best matches the given labels, which we describe in more detail in the next section.
## III The Plunder algorithm
In this section, we present a comprehensive overview of the Plunder algorithm.
### _Algorithm Overview_
In our setting, we seek to infer a _maximum a posteriori (MAP)_ estimate of a probabilistic ASP \(\pi\) given only human demonstrations of low-level action trajectories \(l_{1:t}\) (_e.g.,_ the steering and accelerator control action demonstrations) and the corresponding robot states \(s_{1:t}\) (_e.g.,_ the speed of the vehicle, its distance to other vehicles, etc. at each time-step). The hidden variables in this problem are the high-level action labels \(h_{1:t}\) (_e.g.,_ whether to slow down, speed up, or switch lanes), which are unknown. The optimal MAP estimate \(\pi^{*}\) is the ASP that maximizes the likelihood of the demonstrated actions after marginalizing out the high-level action labels:
\[\pi^{*}=\operatorname*{arg\,max}_{\pi}P(l_{1:t}|s_{1:t},\pi)P(\pi)\] \[=\operatorname*{arg\,max}_{\pi}\sum_{h_{1:t}}P(l_{1:t}|h_{1:t},s_ {1:t})P(h_{1:t}|s_{1:t},\pi)P(\pi)\, \tag{1}\]
where \(P(l_{1:t}|h_{1:t},s_{1:t})\) is the motor model that defines the likelihood of the low-level actions given the high-level action labels and the robot states, \(P(h_{1:t}|s_{1:t},\pi)\) is the label sequence probability according to the ASP, and \(P(\pi)\) specifies the prior probability (density) of each ASP (note that the ASP can contain unknown parameters, whose distribution is also specified in this prior).
We further make the Markov assumption that the high-level action labels are conditionally independent given the robot state, the ASP, and the label from the last time step. Hence, we have
\[P(h_{1:t}|s_{1:t},\pi)=\prod_{t=1}^{T}P(h_{t}|h_{t-1},s_{t},\pi)\, \tag{2}\]
and similarly, we also assume the motor model is Markovian
Fig. 5: EM Loop Progression: best candidate programs found at each iteration and the corresponding action label sequence samples (50 sequences in each plot). The ground-truth label sequence is shown at the bottom. Only \(\phi_{\texttt{LOC},\texttt{CON}}\) and \(\phi_{\texttt{CON},\texttt{DEC}}\) are shown for each policy.
and have
\[P(l_{1:t}|h_{1:t},s_{1:t})=\prod_{t=1}^{T}P(l_{t}|h_{t},s_{t}). \tag{3}\]
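Under these two Markov assumptions, the complete-data log-likelihood factorizes over time steps and can be accumulated in a single pass, as in the sketch below. The two callbacks are placeholders (our own names) for the ASP transition log-probability and the motor-model log-likelihood.

```python
def complete_data_loglik(h, states, lows, asp_logprob, motor_loglik, h0=0):
    """log P(l_{1:T}, h_{1:T} | s_{1:T}, pi) under factorizations (2)-(3)."""
    ll, prev = 0.0, h0
    for h_t, s_t, l_t in zip(h, states, lows):
        ll += asp_logprob(h_t, prev, s_t)   # ASP transition term, Eq. (2)
        ll += motor_loglik(l_t, h_t, s_t)   # motor-model term, Eq. (3)
        prev = h_t
    return ll
```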
There are two major challenges in solving (1): First, we need to search the ASP in a combinatorial structure space while optimizing its parameters in a continuous space. Second, due to combinatorial explosion of the label sequence space, directly computing the sum in (1) is also computationally intractable. These two challenges motivated us to design a new Expectation-Maximization algorithm to iteratively approximate the MAP estimate. Our algorithm, Plunder, works by alternating between two steps that are each more tractable than (1). Plunder starts by taking in an initial ASP \(\pi^{(0)}\), which may be inaccurate but serves as a starting point. Then, at each iteration \(k\), the Expectation (E) Step uses the current estimate of the ASP \(\pi^{(k)}\) to sample \(N\) plausible high-level action sequences from the posterior distribution \(h_{1:t}^{i}\sim P(h_{1:t}\mid s_{1:t},l_{1:t},\pi^{(k)})\), \(i=1\ldots N\). Note that this posterior sampling problem is well-studied in robotics and can be solved using standard state estimation techniques such as particle filtering.
Next, in the Maximization (M) Step, we search for a new ASP by solving the optimal synthesis problem
\[\pi^{(k+1)}=\operatorname*{arg\,max}_{\pi\in\mathcal{N}(\pi^{(k)})}\log P(\pi )+\frac{1}{N}\sum_{i=1}^{N}\log P(h_{1:t}^{i}\mid s_{1:t},\pi) \tag{4}\]
where the ASP search space \(\mathcal{N}(\pi^{(k)})\) denotes the set of ASPs that are similar to \(\pi^{(k)}\), and the label sequences \(h_{1:t}^{i}\) are taken from the E Step. Under some mild assumptions [8], the M Step is guaranteed to improve the likelihood of the demonstrations as defined in (1) [5]. Hence, we can repeat this EM cycle until convergence. We summarize these steps in Algorithm 1.
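The overall loop can thus be summarized in a few lines. The skeleton below is our paraphrase of Algorithm 1, with the E and M steps abstracted as user-supplied callables rather than names from the Plunder codebase.

```python
def plunder_em(states, lows, init_asp, e_step, m_step, n_iters=10, n_seqs=50):
    """EM skeleton: `e_step(asp, states, lows)` returns one posterior label
    sequence (Sec. III-C); `m_step(asp, states, label_seqs)` synthesizes a
    new ASP in the neighborhood of the current one, maximizing Eq. (4)."""
    asp = init_asp
    for _ in range(n_iters):
        label_seqs = [e_step(asp, states, lows) for _ in range(n_seqs)]  # E step
        asp = m_step(asp, states, label_seqs)                            # M step
    return asp
```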
### _Synthesizing Probabilistic Policies_
To synthesize a policy that is maximally consistent with the trajectories according to (4), we decompose the problem into _synthesis of transitions guarded by conditions_. We first decompose trajectories into lists of _examples_, or independent transitions at every time step. For each starting label, we sequentially synthesize conditions for transitions to every other label. Each condition is a top-level logical formula with numerical expressions. In the first iteration, we enumerate over a set of numerical expressions, built off of a user-provided DSL with input variables and operations, to be used as _features_. In standard deterministic synthesis, these features are then compared to constant holes to form a predicate. However, in order to provide distributions over transitions in the next E Step, we need to synthesize a probabilistic program. We do so by representing predicates as logistic functions over expressions, which can be viewed as comparisons with _probabilistic thresholds_. Finally, we enumerate over conjunctions and disjunctions of these predicates, up to a given depth.
However, this form of enumeration over programs quickly becomes intractable, especially because the synthesis step needs to be embedded inside the EM loop, which may take many iterations to converge. Instead, we take advantage of the iterative nature of the EM algorithm and use incremental synthesis in the M Step. In particular, we restrict the search space to just the neighborhood of the current best policy plus a set of base predicates that allows the synthesis step to "reset" to a simple program. This design is inspired by the success of evolutionary algorithms [9]. To enumerate this neighborhood, we mutate the abstract syntax tree of the current condition to obtain syntactically similar expressions. We use the following five types of mutations:
1. adding a predicate with a random base feature,
2. removing predicates,
3. swapping conjunctions and disjunctions,
4. augmenting numerical expressions with a random operation and a random base feature,
5. simplifying numerical expressions by removing an operation.
In addition, we employ a type system based on physical dimensional constraints, which was previously proposed in [3]. This involves tracking the physical dimensions of expressions and pruning away expressions that are physically meaningless. These two pruning techniques significantly reduce the program search space while providing enough flexibility to explore increasingly complex programs.
Each sigmoid function used in partial programs contains two unknown constants, \(k\), which controls the slope of the logistic curve, and \(x_{0}\), which controls the horizontal offset. We solve for these parameters using numerical optimization by optimizing (4). Specifically, we use L-BFGS, a quasi-Newton line-search algorithm, and take the best results from multiple random initial values. Each input example is classified as a _positive_ or _negative_ example for a given transition. We then optimize for the transition probabilities over all examples using the Markov assumption according to (2). To handle conjunctions, we use the recursive formula \(P(e_{1}\text{ and }e_{2})=P(e_{1})\times P(e_{2})\), and similarly, for disjunctions we use \(P(e_{1}\text{ or }e_{2})=P(e_{1})+P(e_{2})-P(e_{1}\text{ and }e_{2})\), where \(P(e)\) denotes the probability of the predicate expression \(e\) being true, which is given by the logistic function or calculated using one of the above formulae.
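A hypothetical version of this fitting step for a single predicate is sketched below, using SciPy's L-BFGS-B and the stated composition rules for conjunction and disjunction. The objective is a plain Bernoulli log-likelihood over positive/negative examples, a simplification of the transition-level objective in Eq. (4).

```python
import numpy as np
from scipy.optimize import minimize

def p_and(p1, p2):  # P(e1 and e2) = P(e1) * P(e2)
    return p1 * p2

def p_or(p1, p2):   # P(e1 or e2) = P(e1) + P(e2) - P(e1 and e2)
    return p1 + p2 - p1 * p2

def fit_threshold(feature, label):
    """Fit (x0, k) of flp(lgs(feature, x0, k)) by maximizing the Bernoulli
    log-likelihood of positive/negative transition examples with L-BFGS,
    restarting from several initial slopes and keeping the best result."""
    def nll(params):
        x0, k = params
        p = np.clip(1.0 / (1.0 + np.exp(-k * (feature - x0))), 1e-9, 1 - 1e-9)
        return -np.sum(label * np.log(p) + (1 - label) * np.log(1 - p))
    fits = [minimize(nll, np.array([np.median(feature), k0]), method="L-BFGS-B")
            for k0 in (-1.0, 1.0)]
    best = min(fits, key=lambda r: r.fun)
    return best.x, -best.fun

# toy data: the transition fires when the feature exceeds ~5
f = np.linspace(0.0, 10.0, 200)
y = (f + np.random.default_rng(1).normal(0.0, 0.7, f.size) > 5.0).astype(float)
(x0_hat, k_hat), ll = fit_threshold(f, y)
print(x0_hat, k_hat)
```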
To prevent the synthesizer from simply adding predicates
at each iteration to cover all cases and overfitting to the sampled labels, we define the ASP prior used in (4) as \(\log P(\pi)=-c_{1}\cdot\mathrm{size}(\pi)-c_{2}\sum_{k\in\pi}k^{2}\) (for some constants \(c_{1},c_{2}>0\)) to penalize large programs (according to the size of their AST) and overconfident conditions.
### _Sampling Posterior Label Sequences_
We use a standard particle filter to sample action label sequences from the posterior distribution \(h_{1:t}^{i}\sim P(h_{1:t}\mid s_{1:t},l_{1:t},\pi^{(k)})\), for \(i=1\ldots N\). The particle filter uses the ASP to sample action labels (the "particles") forward in time and the motor model to reweight and resample particles such that the ones more consistent with the current observation are more likely to be duplicated. To obtain the full trajectory samples, we take samples from the last time step and trace back in time following the ancestral lineages.
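The sketch below illustrates this genealogy-based sampler; `asp_step` and `motor_loglik` stand in for the current ASP and the motor model, and the implementation details (e.g., resampling at every step) are our simplifications, not necessarily those of the Plunder implementation.

```python
import numpy as np

def sample_posterior_labels(asp_step, motor_loglik, states, lows,
                            M=2000, h0=0, rng=None):
    """Draw one sample h_{1:T} ~ P(h | s, l, asp) with a particle filter,
    then trace a single ancestral lineage backward in time."""
    rng = rng or np.random.default_rng()
    T = len(states)
    parts = np.empty((T, M), dtype=int)   # particle labels at each step
    anc = np.empty((T, M), dtype=int)     # ancestor indices for tracing
    w = np.full(M, 1.0 / M)
    for t in range(T):
        anc[t] = rng.choice(M, size=M, p=w)                # resample parents
        prev = parts[t - 1][anc[t]] if t > 0 else np.full(M, h0)
        parts[t] = [asp_step(h, states[t]) for h in prev]  # propose via ASP
        logw = np.array([motor_loglik(lows[t], h, states[t])
                         for h in parts[t]])               # motor reweighting
        w = np.exp(logw - logw.max())
        w /= w.sum()
    h, j = np.empty(T, dtype=int), rng.choice(M, p=w)
    for t in range(T - 1, -1, -1):        # follow the lineage back in time
        h[t], j = parts[t][j], anc[t][j]
    return h
```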
_Motor Model._ Motor models are user-defined, programmatic controllers designed to perform different tasks specified by the high-level labels. We define our motor model \(P(l_{t}|h_{t},s_{t})\) as a function from the current action label and state to a distribution over low-level actions. To capture the errors and imperfections of real-world demonstrations, we model the output as a Gaussian distribution centered around the ideal low-level actions output by the controllers.
_Initial ASP._ To speed up convergence, we initialize \(\pi^{(0)}\) in a way that kickstarts the EM loop with good initial label sequences. We find that a simple ASP that randomly switches its label from the last time step with a small probability \(p=0.1\) works well. Intuitively, this corresponds to the idea that human demonstrations are likely to remain in the same high-level label for long periods of time.
## IV Experimental Evaluations
In this section, we first describe the benchmarks and baselines we use for the experiments and then present our main results and analyses.
### _Benchmarks_
We use the following three simulated scenarios to collect our training and testing data.
_Stop Sign (SS)_: This is the one-dimensional example described in section II, where the demonstrator controls the vehicle to move to and stop at target locations.
_Highway (HW)_: This scenario uses the open-source simulation environment highway-env2, where a demonstrator controls a vehicle moving in a crowded multi-lane highway and needs to switch lanes to pass slower traffic. The high-level actions are _accelerating, decelerating, turning left_, and _turning right_. The low-level actions are the amount of _acceleration_ and _steering_. Each demonstration lasts 100 time steps and contains about 10 high-level action transitions.
_Merge (MG)_: This setting is similar to _Highway_, except that the robot is initially in the left-most lane, and instead of maneuvering around slower traffic, the expected behavior is to quickly and safely merge into the rightmost lane. This scenario uses the same high-level and low-level actions as _Highway_ but with a different, more aggressive motor model. Each demonstration lasts 75 time steps and contains about 3-5 high-level action transitions.
Footnote 2: [https://github.com/eleurent/highway-env](https://github.com/eleurent/highway-env)
### _Baselines_
We compare our approach to the following five baselines.
_FitGreedy_. This baseline represents the naive solution to the unlabeled demonstration problem in which each time step is labeled greedily with the action label whose likelihood is highest according to the motor model.
_FitSmooth_. This baseline instead labels the trajectories using a particle filter and the randSwitch\((0.1)\) ASP. This baseline can be viewed as the first iteration of Plunder, except that it does not employ incremental synthesis and directly searches over a much larger program space.
_BC_. This behavior cloning baseline applies supervised learning to directly predict the low-level demonstrations. We use an LSTM with 64 hidden units and train it to predict the next low-level action conditioned on all previous states and low-level actions.
_BC+_. This baseline extends _BC_ with access to the motor model. Instead of letting the LSTM directly output the low-level actions, in _BC+_, we let the neural network output a distribution over all available high-level actions and then perform a weighted sum over the corresponding low-level actions (obtained by running the motor models). We then train _BC+_ in the same way as _BC_.
_FitTruth_. This baseline represents an idealized scenario where labels for all demonstrations are artificially provided, allowing synthesis to be performed directly on the labels.
### _Common Setup_
In the experiments below, for each scenario, we use the ground-truth probabilistic ASP to simulate 10 demonstrations to serve as the training data and an additional 20 demonstrations to serve as the test set. In each E Step, we use \(M=2000\) particles to sample \(N=50\) trajectories, and to improve the M Step performance, we limit the number of transition examples to be at most 2000 and use the remaining examples as the validation set, which is used to calculate the final likelihood when deciding the best ASP. We use the L-BFGS optimizer over 4 random starting values to optimize the ASP parameters. All reported performance metrics are obtained by running each baseline 3 times and taking the best result.
\begin{table}
\begin{tabular}{l|c c c|c c c} \hline \hline & \multicolumn{3}{c}{**Accuracy (\%)**} & \multicolumn{3}{c}{**Log Obs. Likelihood**} \\ & SS & HW & MG & SS & HW & MG \\ \hline FitGreedy & 60.7 & 90.2 & 70.6 & -2.192 & -1.479 & -0.743 \\ FitSmooth & 88.3 & 90.8 & 80.6 & -1.123 & -1.782 & -0.653 \\ BC & - & - & - & -4.930 & -1.788 & -0.116 \\ BC+ & 73.7 & 64.1 & 63.3 & -1.208 & -1.101 & -1.296 \\ Plunder & **98.7** & **92.7** & **91.1** & **-0.779** & **-0.536** & **0.257** \\ \hline _FitTruth_ & _98.6_ & _92.5_ & _90.3_ & _-0.783_ & _-0.621_ & _0.272_ \\ \hline \hline \end{tabular}
\end{table} TABLE I: Demonstration Similarity Comparisons
### _Accuracy Results_
In this section, we evaluate the synthesized ASPs in terms of their consistency with the ground-truth action labels and with the provided low-level demonstrations. We report the results in Table I. The percentages of high-level labels that match the ground truth are shown in columns 2, 3, and 4 (except for _BC_, which does not produce high-level labels), and we report the observation log likelihood of the low-level demonstrations (normalized over the number of time steps) in columns 5, 6, and 7. We find that Plunder consistently outperforms the other baselines and performs similarly to _FitTruth_. Interestingly, we also found Plunder outperforming _FitTruth_ in some cases. We hypothesize that this is because _FitTruth_ learns from only a single action label sequence and hence is more prone to overfitting. In contrast, our algorithm samples additional action label sequences in the E-step, which helps the M-step to generalize.
We also visualize the low-level actions generated by the synthesized policy for the _Merge_ benchmark in Figure 6. The top plot shows the demonstrated steering control (red) and the predicted trajectory (green). The bottom plot shows the demonstrated acceleration (blue) and the policy's prediction (yellow). We can see that the synthesized program closely follows the ground truth in both cases.
Our experiments also confirm our hypothesis that the accuracy of the synthesized policies improves as the iterative EM loop proceeds. Under standard amounts of noise, it takes less than 7 iterations for Plunder to converge to a policy for all benchmarks, corresponding to roughly half an hour for _Stop Sign_ and less than 3 hours for _Highway_ and _Merge_. Figure 7 visualizes the performance of the synthesized policy after each iteration of the EM loop in the _Stop Sign_ benchmark. Note that the accuracy trend in the training set is reflected in the test set results, indicating that our approach generalizes well to new data.
### _Impact of Noise_
In this section, we evaluate how the performance of each baseline changes as we vary the amount of noise in the training data using the _Merge_ setting. The observation log likelihoods of the trajectories produced by all approaches are plotted in Figure 8. We find that all approaches decrease in performance as noise is increased; however, Plunder consistently outperforms other baselines, indicating that it is more robust against low-level noise. We also observe that with higher noise, Plunder takes more iterations to converge. In the least noisy setting, it converges in 5 iterations, but in the noisiest setting, it requires 12 iterations.
We find that _FitSmooth_ consistently outperforms _FitGreedy_, demonstrating the importance of the E Step, even with a very basic ASP. Additionally, we note that _BC+_ significantly outperforms _BC_ when there are large amounts of noise, suggesting that having access to motor controllers defined by high-level labels provides useful inductive bias that helps overcome the noise in the data.
### _Importance of Probabilistic ASP_
A crucial component of Plunder is its use of probabilistic programs to account for the inherent uncertainty in human demonstrations. In this experiment, we compare this approach with LDIPS [3], a state-of-the-art LfD approach
Fig. 8: Accuracy of baselines at varying levels of noise in the _Merge_ benchmark. Noise levels are scaled to fit the range \([0,1]\) for presentation purposes.
Fig. 6: An example low-level action sequence predicted by Plunder for the _Merge_ scenario.
Fig. 7: EM loop progress until convergence in the _Stop Sign_ setting. Results for the testing set are similar to that of the training set, indicating that our approach generalizes well.
that produces deterministic ASPs. We conducted a case study using the _Stop Sign_ benchmark.
Directly integrating LDIPS into our algorithm would result in the immediate breakdown of the E Step since all trajectories would be identical. To avoid this, we first execute the synthesized deterministic ASP at each time step and then randomly switch the high-level label with a probability of 0.1, akin to the heuristic prior used in \(\pi^{(0)}\).
The experiment yielded the following observations. First, we observed that the synthesized programs performed poorly, with an accuracy of \(87.5\%\) and an average observation likelihood of \(-1.403\). These scores fall significantly below the performance of Plunder from Table I. Second, the performance of LDIPS remains largely the same after each EM iteration, confirming our hypothesis that a deterministic program is insufficient for capturing label uncertainties. Lastly, LDIPS also exhibited another fundamental limitation--namely its inability to account for the proximity of each example to the threshold. Deterministic program synthesizers maximize the number of satisfied examples regardless of their likelihood, which quickly leads to poor performance as the noise level becomes nontrivial.
### _Qualitative Analysis_
In this section, we analyze programs that are synthesized by our algorithm. We find that in all three benchmarks, the synthesized programs contain conditions that are non-trivial and require multiple incremental synthesis steps. For example, we present the following program, which was synthesized in the _Highway_ setting for the transition FASTER\(\rightarrow\) LANE_LEFT:
\[\begin{array}{ll}\phi_{\texttt{FASTER},\texttt{LANE\_LEFT}}&=\texttt{flp}(\texttt{lgs}(x-f_{x},-30.62,1.78))\\ &\land\texttt{flp}(\texttt{lgs}(f_{x}-l_{x},-4.92,-6.53))\\ &\land\texttt{flp}(\texttt{lgs}(r_{x}-l_{x},2.95,-1.67))\end{array}\]
which translates to the condition "if the vehicle's position \((x)\) is approaching the position of the vehicle in front \((f_{x})\), and the position of the vehicle to the left \((l_{x})\) is farther away than both the vehicle in front and the position of the vehicle to the right \((r_{x})\), then turn left." We note that this complex program is beyond the scope of naive enumeration techniques, which would need to optimize well over one billion programs in our DSL to reach a program of this complexity. Such techniques are unable to synthesize this program on their own, much less be embedded inside an EM loop that requires many iterations of synthesis.
### _Interpretability_
Synthesizing a program over discrete high-level transitions also has the benefit of being interpretable and amenable to repair [10]. We find that in our experiments, our approach performs the worst on transitions that have little to no data from the demonstrations. Take the _Merge_ setting as an example, which has no data for the FASTER\(\rightarrow\) LANE_LEFT transition. Consider the highest-noise setting in subsection IV-E. Plunder attempts to fit to a few noisy examples and incorrectly synthesizes the program \(\texttt{flp}(\texttt{lgs}(l_{v_{x}},40.03,77.10))\) for the transition FASTER\(\rightarrow\) LANE_LEFT, where \(l_{v_{x}}\) is the forward velocity of the vehicle to the left. Our approach makes it easy to alter programs; for example, in this case, we can simply remove this transition. The new program achieves an average log-likelihood of \(-1.724\), an improvement from the standard Plunder performance of \(-1.869\). Other approaches generally do not have this benefit and cannot be confidently repaired in this simple manner.
## V Related Work
_Learning from Expert Demonstrations._ The problem addressed in this paper is related to a major challenge in machine learning, which is to learn a policy by observing expert demonstrations without explicit reinforcement signals [11, 12]. Various solutions have been proposed, generally separated into two categories. _Behavioral cloning_ learns a classifier by performing supervised learning over state-action pairs [13]. _Inverse reinforcement learning_ applies reinforcement learning techniques after learning an unspecified reward function [14, 15]. Both approaches have been successfully used to solve complex tasks, such as object manipulation or surgery [16, 17]. Unlike IRL techniques, which rely on interacting with the environment to collect additional data, Plunder requires only a motor model, which is more readily available. Additionally, techniques that involve neural networks tend to be data-intensive, non-interpretable, and not amenable to repair [18]. To the best of our knowledge, our approach is the first to synthesize a symbolic policy from noisy and unlabeled demonstrations.
_Expectation-Maximization Techniques._ The expectation-maximization algorithm is a commonly used technique in scenarios where the generative model and its hidden states are both unknown [5]. EM has found some recent applications in various robotics domains. For instance, in [8], the authors propose to use an EM loop to learn a dynamics model and estimate the robot's state trajectories simultaneously. In [19], an internal expectation-maximization loop was used to optimize the learned policy, but this approach focuses on reinforcement learning applications and is not suitable for handling high levels of noise. In contrast, our approach utilizes an EM loop to infer labels from noisy demonstrations while synthesizing a complex probabilistic program that maximizes the demonstration likelihood.
_Probabilistic Program Synthesis._ The goal of probabilistic program synthesis is to obtain a probabilistic program from partial observations, which can then be leveraged for statistical inference or sampling purposes [6]. Prior work has used Markov Chain Monte Carlo (MCMC) methods to generate distributions over programs [6, 20]. However, our work uses an alternative technique based on a combination of enumerative synthesis and numerical optimization. Our approach can be seen as a generalization of earlier deterministic policy synthesis methods such as [3], which uses an SMT solver to maximize the number of satisfied examples. Additionally, the probabilistic program we develop incorporates "logistic thresholds" to capture uncertainties
in natural phenomena, an idea that was recently explored in [21].
## VI Conclusions and Future Work
We have presented Plunder, an algorithm for simultaneously inferring action label sequences and synthesizing probabilistic action selection policies. We have developed and released a reusable implementation of our algorithm and performed empirical evaluation to demonstrate its effectiveness on three simulated LfUD datasets. For future work, we plan to extend our technique to more challenging settings (such as simultaneously optimizing the motor model) to further reduce the required user effort. We are also interested in combining EM with more advanced synthesis techniques (such as those based on large language models) to further improve the scalability of this approach.
|
2310.19337 | Characterization and Exploitation of the Rotational Memory Effect in
Multimode Fibers | In an ideal perfectly straight multimode fiber with a circular-core, the
symmetry ensures that rotating the input wavefront leads to a corresponding
rotation of the output wavefront. This invariant property, known as the
rotational memory effect (RME), remains independent of the typically unknown
output profile. The RME thus offers significant potential for imaging and
telecommunication applications. However, in real-life fibers, this effect is
degraded by intrinsic imperfections and external perturbations, and is
challenging to observe because of its acute sensitivity to misalignments and
aberrations in the optical setup. Thanks to processing involving a spatial
light modulator, we efficiently overcome these measurement biases, allowing for
precise quantification of the RME. We establish an experimental and theoretical
framework for studying and manipulating the RME in multimode fibers.
Theoretical predictions are consistent with experimental data and simulations,
connecting the shape of the angular-dependent correlation of the RME to the
geometrical properties of the core deformation. This work opens the road for
accurate characterization of the distributed disorder originating from the
fabrication process and calibration-less imaging in multimode fibers. | Rodrigo Gutiérrez-Cuevas, Arthur Goetschy, Yaron Bromberg, Guy Pelc, Esben Ravn Andresen, Laurent Bigot, Yves Quiquempois, Maroun Bsaibes, Pierre Sillard, Marianne Bigot, Ori Katz, Julien de Rosny, Sébastien M. Popoff | 2023-10-30T08:22:23Z | http://arxiv.org/abs/2310.19337v3 | # Tailoring the Rotational Memory Effect in Multimode Fibers
###### Abstract
In an ideal perfectly straight multimode fiber with a circular-core, the symmetry ensures that rotating the input wavefront leads to a corresponding rotation of the output wavefront. This invariant property, known as the rotational memory effect (RME), remains independent of the typically unknown output profile. The RME thus offers significant potential for imaging and telecommunication applications. However, in real-life fibers, this effect is degraded by imperfections and external perturbations, and is challenging to observe because of its acute sensitivity to misalignments and aberrations in the optical setup. In this work, we establish an experimental and theoretical framework for studying and manipulating the RME in multimode fibers. We first detail a method to precisely quantify the effect. We then explore how the effect is altered by the introduction of external deformations. Subsequently, we present a theoretical model that is consistent with experimental data and simulations, connecting the shape of the angular-dependent correlation of the RME to the geometrical properties of core deformation. Finally, we demonstrate that it is possible to find and send input wavefronts with significantly improved correlation for all rotations angles. This allows for the effective use of the RME, even in the presence of strong disturbances.
## I Introduction
Optical fibers present a unique opportunity for minimally invasive imaging deep within the human body. Most flexible medical endoscopes utilize multi-core fibers or fiber bundles [1]. Comparatively, multimode fibers (MMFs) offer orders of magnitude higher information density, allowing, in theory, an increase in image resolution or a decrease in the endoscope footprint [2]. However, dispersion distorts the input image, a phenomenon that is exacerbated by mode coupling introduced by defects or deformations within the fiber. For this reason, image reconstruction techniques through multimode fibers hinge on estimating [3] or measuring the transmission matrix (TM) [4; 5; 6; 7; 8], i.e., the relationship between the input and output fields of the optical system. Unfortunately, the TM itself changes in real time due to dynamic fiber bending and temperature fluctuations [9], which prevents the direct use of a previously calibrated system.
A similar challenge arises when imaging through a highly scattering sample where the TM is inaccessible [10; 11]. An elegant solution to this problem is to exploit an invariant property of the scattering sample, namely the angular memory effect, which enables images to be recovered without the need to measure the TM [12; 13]. For a given illumination, even though the output random pattern remains unknown, the angular memory effect facilitates the shifting of this output speckle pattern in two directions with minimal to no deformation. Tilting the input wavefront then allows scanning the object plane with the unknown pattern. Recording the reflected or fluorescent signal provides sufficient information to reconstruct the image of the hidden object [12]. While the range of such an effect is constrained, strategies have been proposed to recover images of objects beyond this limitation [14], making the memory effect highly attractive for non-invasive imaging applications.
Building on decades of research in scattering media, recent interest has surged in the study of coherent effects in disordered fibers [15; 16]. Specifically, the TM approach [17; 18; 19; 20] and random matrix theory [21; 22] have emerged as particularly useful frameworks for these investigations. In particular, a closely analogous effect to the angular memory effect in scattering media is observed in the special case of square-core fibers, where a translation of the input wavefront results in a corresponding translation at the output [23], albeit with noticeable artifacts, which can nonetheless be exploited to recover images [24; 25]. In more typical cylindrical-core fibers, a similar phenomenon, known as the rotational memory effect (RME), has been recently identified [26; 27]. This effect is characterized by the rotation of an input wavefront about the optical axis of the fiber leading to a corresponding rotation of the output pattern, even though the latter is unknown. In principle, this effect could be harnessed for imaging through a multimode fiber for which the TM has not been previously measured.
Nevertheless, since its initial observation, no prediction or quantitative description of the RME has been presented. Neither the angular range covered by the
RME, nor its dependence on disorder, nor its potential robustness and modularity have been studied or elucidated. Furthermore, the manifestation of an angular revival effect, leading to a secondary peak in the correlation of the output pattern at the rotation angle \(\pi\), has been observed but also remains unexplained. An important consideration is that the measurement of the RME is complicated by its high sensitivity to misalignments and aberrations in the optical system used to inject light into the fiber [26]. However, these adverse effects can be understood and compensated using a framework that some of us have introduced in Ref. [28]. The procedure involves learning the input and output aberrations by optimizing a numerical model of the system. This approach enables the retrieval of an accurate TM of the system, even when using imperfect measurements. Additionally, it provides the transformation needed to physically compensate for the input aberrations, which can be directly implemented using a spatial light modulator (SLM). The numerical compensation for aberrations is a crucial step, as it enables precise observation of the RME, which would otherwise be rapidly obscured by aberration effects.
In the present article, we first demonstrate that it is possible to measure the RME in MMFs with high accuracy. We then investigate how this effect is influenced, and ultimately canceled out, by the presence of disorder, identifying two distinct contributions. The first contribution arises from the intrinsic defects within the fiber, which in particular preserve a revival effect in the angular correlation of the RME, _i.e._ a local maximum at the rotation angle \(\theta=\pi\). The second contribution stems from deformations in the fiber conformation, such as bending or pressure on the fiber, which progressively reduce the angular range of the RME. To comprehend these effects, we introduce a theoretical framework based on accurate disorder modeling, which leads to analytical predictions for the RME, corroborated by numerical simulations of the microscopic wave equation in disordered MMFs. The theory and simulations faithfully reproduce our experimental results obtained on various commercial MMFs. Our model provides an explanation for the previously mentioned revival effect, as well as for the impact of fiber deformation. Furthermore, we show that the measurement of the RME provides a sensitive probe of different types of disorder, facilitating in particular the identification of the shape of the geometrical defects within the fiber. Finally, we demonstrate the ability to tailor the memory effect. By utilizing well-defined operators, we identify input wavefronts that optimize the RME correlation, either for a specific angle or across the entire \(2\pi\)-range.
## II Measuring the RME with tunable disorder
A memory effect is defined in relation to a field transformation. A perfect memory effect exists when the application of this transformation before or after propagation through a given optical system produces the same effect. For the RME to occur, the rotation operator \(\mathbf{R}(\theta)\) must commute with the transmission matrix \(\mathbf{T}\) of the fiber [27]. In this study, we consider only one polarization of the field. The matrix \(\mathbf{T}\) therefore links the input field in a specific circular polarization channel to the output field in the same polarization channel.
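For concreteness, this commutation criterion can be checked numerically. The sketch below (a minimal illustration of ours, not code from the paper) measures how far a TM expressed in the fiber-mode basis is from commuting with \(\mathbf{R}(\theta)\), which is diagonal in that basis with phases \(e^{im_{\mu}\theta}\) up to a sign convention:

```python
import numpy as np

def commutation_defect(T, m, theta):
    """Normalized commutator norm ||T R - R T|| / ||T|| in the mode basis.

    T: (N, N) complex TM in the fiber-mode basis; m: (N,) integer orbital
    angular momenta, so that R(theta) = diag(exp(1j * m * theta)).
    A value near 0 signals a (near-)perfect RME at this angle.
    """
    R = np.exp(1j * np.asarray(m) * theta)
    # T @ diag(R) scales columns; diag(R) @ T scales rows
    return np.linalg.norm(T * R[None, :] - R[:, None] * T) / np.linalg.norm(T)

# sanity check: a TM diagonal in the mode basis commutes with any rotation
m = np.array([-2, -1, 0, 1, 2])
T_ideal = np.diag(np.exp(1j * np.random.default_rng(0).uniform(0, 2 * np.pi, 5)))
print(commutation_defect(T_ideal, m, theta=0.7))  # ~ 0
```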
### Experimental setup
To experimentally measure and quantify the rotational memory effect, we first measure the TM of a 24.5 cm segment of a straight 50-micron core graded-index fiber (BendBright OM4 [29]). Utilizing a fast digital micromirror modulator and an InGaAs camera, we follow the procedure detailed in Ref. [28], which enables us to identify and compensate for aberrations and misalignments. It also allows us to accurately generate the input masks on the modulator that correspond to rotating the field with respect to the optical axis in the input facet plane of the fiber. The principle of the experiment is depicted in Fig. 1 and is further detailed in Appendix A.1.
### Measurement of the RME
To illustrate the effect of the RME, we first observe its impact on a focusing operation. We compute the mask that focuses light at a specific position in the output facet of the fiber using the TM [30; 31]. It is noteworthy that the knowledge of the TM is not necessary for this step, nor for any of our measurements, as focusing can be achieved through methods like sequential optimization [32; 33] or phase conjugation [6]. We then rotate the input wavefront. We show in Fig. 2(a) the sum of the resulting output amplitude patterns for 10 values of the rotation angle. We can see that rotating the input
Figure 1: **The rotational memory effect in MMF.** When the fiber is illuminated by a coherent wavefront \(|\psi_{in}\rangle\), a seemingly random transmitted field \(\mathbf{T}|\psi_{in}\rangle\) is observed at the output. In an ideal MMF with cylindrical symmetry, rotating the input wavefront (_i.e._, sending \(\mathbf{R}(\theta)|\psi_{in}\rangle\)) and measuring the transmitted field \(\mathbf{TR}(\theta)|\psi_{in}\rangle\), is equivalent to rotating the output field resulting from the propagation of \(|\psi_{in}\rangle\) and measuring \(\mathbf{R}(\theta)\mathbf{T}|\psi_{in}\rangle\). A local perturbation is then added by moving a tip in contact with the fiber over a distance \(\Delta x\) transverse to the fiber axis.
masks allows the focusing spot to be rotated along the optical axis of the fiber, with limited degradation of focusing quality.
To further characterize the RME, we seek to quantify the similarity between a transmitted field \(|\psi\rangle=\mathbf{T}|\psi_{\mathrm{in}}\rangle\) for a normalized input field \(|\psi_{\mathrm{in}}\rangle\), and the output field \(|\psi_{\theta}\rangle=\mathbf{T}_{\theta}|\psi_{\mathrm{in}}\rangle\), where \(\mathbf{T}_{\theta}=\mathbf{R}(-\theta)\mathbf{T}\mathbf{R}(\theta)\). The second field corresponds to a rotation of the input and output profiles by an angle of \(\theta\) and \(-\theta\) respectively. We define a correlation function for this purpose as
\[C(\theta)=\frac{|\langle\psi|\psi_{\theta}\rangle|}{\sqrt{\langle\psi|\psi \rangle}\sqrt{\langle\psi_{\theta}|\psi_{\theta}\rangle}}. \tag{1}\]
In practice, we send a set of 100 random input wavefronts, rotate them in the plane of the input facet, and measure the output field. We then compute the average correlation function \(\langle C(\theta)\rangle\). Figure 2(b) shows the experimentally measured \(\langle C(\theta)\rangle\) for the unperturbed fiber (solid red line). We emphasize that the TM measurement is not used to characterize the memory effect; only knowledge of input aberration and misalignment effects is exploited to accurately rotate the input field. Although the results shown here correspond to field correlations, we demonstrate in Appendix A.2 that very similar results are obtained for intensity correlation measurements. The latter can be expressed as \(C_{I}(\theta)\simeq C(\theta)^{2}\). As a result, the behavior of the angular correlation shown in Fig. 2(b) is qualitatively analogous to the result reported in Ref. [26], where the intensity RME correlation was measured. Furthermore, we show in Appendix A.3 that when the TM is known, the RME correlation can be accurately computed without the need for additional measurements.
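For reference, Eq. (1) is straightforward to evaluate from sampled complex fields; a minimal sketch (array sizes are illustrative, not the experimental ones):

```python
import numpy as np

def field_correlation(psi, psi_theta):
    """RME field correlation of Eq. (1) between two sampled complex fields."""
    num = np.abs(np.vdot(psi, psi_theta))        # |<psi|psi_theta>|
    den = np.sqrt(np.vdot(psi, psi).real * np.vdot(psi_theta, psi_theta).real)
    return num / den

# sanity check: a global phase leaves the correlation equal to 1
rng = np.random.default_rng(0)
psi = rng.normal(size=4096) + 1j * rng.normal(size=4096)
print(field_correlation(psi, np.exp(1j * 0.3) * psi))  # -> 1.0
```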
To study the effect of perturbations on the RME, we gradually apply a controlled deformation to the fiber along an axis orthogonal to the propagation direction. The fiber is maintained on a V-groove and we press locally on the fiber from the top with a spherical metallic tip using a motorized translation stage. The correlation \(\langle C(\theta)\rangle\) as a function of the rotation angle \(\theta\) is presented in Fig. 2(b), for different values of the displacement \(\Delta x\) of the tip (red to brown curves).
We first observe that, even without applying a local perturbation to the fiber, the correlation decreases to approximately 60%. When the fiber is held straight, this effect can be attributed to the presence of defects in the fiber. This correlation curve exhibits a second maximum, close to 95%, at \(\theta=\pi\), along with small local maxima at \(\theta=\pi\pm\pi/2\). These features are indicative of the geometrical defects within the fiber. Upon applying the deformation, we observe that the correlation as a function of \(\theta\) decreases globally, and the local maxima vanish.
To further understand these effects, we develop in Sec. III a theoretical model whose predictions are compared with experimental observations. This model proves capable of predicting all RME behaviors in the presence of the disturbances just described.
## III Effect of disorder on the RME
### Model of disorder
In an ideal MMF, due to the axisymmetry of the system, perfect RME should be expected, _i.e._ the rotation of a given input wavefront should result in a corresponding rotation of the output wavefront. This corresponds to \(C(\theta)=1\) for all \(\theta\). However, real fibers are rarely perfect, resulting in mode coupling that is mainly influenced by the geometrical defects of the fibers [34]. Two
Figure 2: **Experimental measurement of the RME.** (a) Rotation of a focal spot. Light is focused at a given output position and the input phase mask is rotated along the axis of the fiber for 10 values of the rotation angle \(\theta\). We sum all the resulting output amplitude patterns to reveal conservation of the focal spot, albeit with a variation in intensity, the latter being maximal for angle \(\theta=0\) and \(\theta=\pi\). (b) Experimental measurement of the RME angular correlation (1) as a function of the level of perturbation \(\Delta x\). The bright red curve shows results for the unperturbed fiber (\(\Delta x=0\,\mu\)m). The red to brown curves correspond to progressively increased disturbances, obtained by applying a local deformation using a translation stage. Results are obtained by averaging over 100 random inputs. Shaded areas correspond to the error estimated by the standard deviation of the experimental data.
main contributions can be identified: large radius bends, attributable to the geometrical conformation of the fiber, and minor distortions at the core-cladding interface, primarily due to fabrication inaccuracies [35; 36; 37].
We propose to model fiber disturbances by a deviation of the refractive index profile from a perfect axisymmetric function of the following form:
\[\delta n(r,\phi,z)=g(z,r)\sum_{q}\Gamma_{q}\cos(q\phi+\varphi_{q})\,, \tag{2}\]
where \(z\), \(r\), and \(\phi\) are the cylindrical coordinates corresponding respectively to the longitudinal (axis of the fiber), radial, and azimuthal directions. The longitudinal variations of \(g(z,r)\) are characterized by random fluctuations with a correlation length \(l_{z}\), which is typically of the order of \(100\) μm [37], while the radial variations of \(g(z,r)\) are discussed in detail below. On the other hand, disorder in the azimuthal direction is decomposed into harmonics with orbital momentum \(q\) and weight \(\Gamma_{q}\) [22].
Different fabrication techniques are employed based on the fiber type and manufacturer. These methods include modified chemical vapor deposition (MCVD), vapor axial deposition (VAD), outside vapor deposition (OVD), and plasma-activated chemical vapor deposition (PCVD). One well-recognized challenge across these techniques, attributed to their inherent limitations in precision, lies in achieving a highly accurate radial index profile, particularly when dealing with sharp index changes.
We attribute the radial fluctuations to variations between neighboring radial layers, stemming from inaccuracies in the deposition technique or interlayer diffusion of the doping elements. Specifically, in the case of the fiber under investigation, these inaccuracies are associated with the PCVD process. This leads us to approximate the fiber of length \(L\) by a succession of \(N_{z}=L/l_{z}\) segments, each of length \(l_{z}\), in which the perturbation term \(\delta n\) is invariant along \(z\). Specifically, for the \(p^{\text{th}}\) segment in the interval \(z\in[p\,l_{z},(p+1)l_{z}]\), we write \(\delta n_{p}(r,\phi)=g_{p}(r)\sum_{q}\Gamma_{q}\cos(q\phi+\varphi_{q})\). We model \(g_{p}(r)\) as a Gaussian random variable with zero mean, characterized by a standard deviation \(\sigma_{g}(r)=d_{\text{layer}}|dn_{0}(r)/dr|\)[38], where \(n_{0}(r)\) is the radial profile of the unperturbed fiber, and \(d_{\text{layer}}\simeq 10\) nm is the typical length of each layer formed in the PCVD process [39]. For a gradient index fiber with a parabolic index profile (see Appendix B and Ref. [40]), the standard deviation can be put in the form
\[\sigma_{g}(r)\simeq\frac{r}{a^{2}}\frac{\text{NA}^{2}}{n_{\text{max}}}d_{ \text{layer}}\mathcal{H}(r/a)\,, \tag{3}\]
where NA and \(a\) are respectively the numerical aperture and radius of the MMF, \(n_{\text{max}}\) is the value of refractive index \(n_{0}(r)\) at the center of the core, and \(\mathcal{H}\) is the Heaviside function.
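A single realization of this perturbation model can be generated as follows; this is a sketch under our reading of Eqs. (2)-(3), with the Heaviside factor taken to restrict the perturbation to the core, and with arbitrary illustrative values for \(\Gamma_{q}\) and the fiber parameters:

```python
import numpy as np

def delta_n_segment(r, phi, a, NA, n_max, d_layer, Gamma, rng):
    """One realization of the index perturbation of Eq. (2) on an (r, phi) grid.

    g_p(r) is drawn as a zero-mean Gaussian with std sigma_g(r) of Eq. (3);
    Gamma is a dict {q: Gamma_q} of azimuthal harmonic weights.
    """
    sigma_g = (r / a**2) * (NA**2 / n_max) * d_layer * (r <= a)
    g = rng.normal(scale=sigma_g)                        # radial disorder g_p(r)
    azim = sum(G * np.cos(q * phi + rng.uniform(0, 2 * np.pi))
               for q, G in Gamma.items())                # azimuthal harmonics
    return g[:, None] * azim[None, :]                    # (Nr, Nphi) index map

rng = np.random.default_rng(1)
r = np.linspace(0, 40e-6, 200)
phi = np.linspace(0, 2 * np.pi, 256, endpoint=False)
dn = delta_n_segment(r, phi, a=25e-6, NA=0.2, n_max=1.45, d_layer=10e-9,
                     Gamma={1: 1.0, 2: 1.0, 3: 1.0, 4: 1.0}, rng=rng)
```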
As detailed in Appendix B, the TM of the \(p^{\text{th}}\) segment of length \(l_{z}\) expressed in the unperturbed fiber mode basis can be written as
\[\mathbf{T}_{p}=e^{-i(\mathbf{H}_{0}+\mathbf{V})l_{z}}. \tag{4}\]
Here, \(\mathbf{H}_{0}\) is the propagation operator in the absence of perturbation; it is a diagonal matrix containing the propagation constants \(\beta_{\mu}\) of the modes of the unperturbed fiber, indexed by \(\mu\). On the other hand, \(\mathbf{V}\) represents the perturbation due to the index fluctuations, \(\delta n_{p}(r,\phi)\), projected onto the mode basis (see Appendix B for further details). The complete TM is obtained by multiplying the TMs of all the segments.
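The construction of the complete TM can be sketched numerically as follows (toy sizes and magnitudes of ours, with `scipy` used for the matrix exponential):

```python
import numpy as np
from scipy.linalg import expm

def total_tm(beta, V_segments, l_z):
    """Total TM as the product of the segment TMs of Eq. (4).

    beta: (N,) propagation constants (H0 is diagonal in the mode basis);
    V_segments: iterable of (N, N) Hermitian perturbation matrices.
    """
    H0 = np.diag(beta).astype(complex)
    T = np.eye(len(beta), dtype=complex)
    for V in V_segments:
        T = expm(-1j * (H0 + V) * l_z) @ T   # later segments act on the left
    return T

# toy example: 10 modes, 5 weakly disordered segments (all numbers illustrative)
rng = np.random.default_rng(2)
beta = 5.8e6 + rng.uniform(-1e3, 1e3, size=10)
segments = []
for _ in range(5):
    A = rng.normal(size=(10, 10)) + 1j * rng.normal(size=(10, 10))
    segments.append(1e2 * (A + A.conj().T) / 2)          # small Hermitian V
T = total_tm(beta, segments, l_z=100e-6)
print(np.allclose(T.conj().T @ T, np.eye(10)))           # unitary, as expected
```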
### Theoretical predictions for \(\langle C(\theta)\rangle\)
In the limit of moderate disorder, we are able to find an analytical expression of the mean correlation function \(\langle C(\theta)\rangle\), which involves the geometrical parameters of the fiber as well as the disorder strength. In Appendix C, we show that it can be put in the form \(\langle C(\theta)\rangle=\tilde{C}(\theta)/\tilde{C}(0)\), with
\[\tilde{C}(\theta)=1+A\sum_{q,\nu,\mu}\Gamma_{q}^{2}\cos(q\theta)B_{\nu\mu}^{q }\,. \tag{5}\]
The prefactor \(A=N_{z}(kl_{z})^{2}/4N_{\text{modes}}\) is a coefficient that combines properties of the radial disorder with the number of propagating modes supported by the fiber. In addition, the coefficient \(B_{\nu\mu}^{q}\) characterizes the energy coupling between eigenstates \(\psi_{\nu}\) and \(\psi_{\mu}\) of the unperturbed propagation operator \(\mathbf{H}_{0}\). It is expressed as
\[B_{\nu\mu}^{q}=\delta_{m_{\mu\nu},q}\operatorname{sinc}\left(\frac{\beta_{\mu }-\beta_{\nu}}{2}l_{z}\right)^{2}I_{\nu\mu}\,, \tag{6}\]
where \(m_{\mu\nu}=|m_{\mu}-m_{\nu}|\) is the difference between orbital angular momentum of the eigenstates coupled by the azimuthal disorder, and
\[I_{\nu\mu}=d_{\text{layer}}\int_{0}^{\infty}dr\left|\psi_{\nu}(r)\right|^{2} \left|\psi_{\mu}(r)\right|^{2}\sigma_{g}^{2}(r)r^{2} \tag{7}\]
is the coupling term induced by disorder along the radial direction.
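Given the mode data, the normalized first-order prediction \(\langle C(\theta)\rangle\) of Eqs. (5)-(7) can be evaluated directly; a sketch (our implementation, first order only):

```python
import numpy as np

def rme_first_order(theta, beta, m, I, Gamma, k, l_z, N_z):
    """Normalized first-order prediction <C(theta)> = C~(theta)/C~(0).

    beta, m: (N,) propagation constants and angular momenta of the modes;
    I: (N, N) radial coupling integrals I_{nu,mu} of Eq. (7);
    Gamma: dict {q: Gamma_q} of azimuthal disorder weights.
    """
    theta = np.asarray(theta, dtype=float)
    A = N_z * (k * l_z) ** 2 / (4 * len(beta))
    dbeta = beta[:, None] - beta[None, :]
    # the text's sinc(x) = sin(x)/x, while np.sinc(x) = sin(pi x)/(pi x)
    sinc2 = np.sinc(dbeta * l_z / (2 * np.pi)) ** 2
    dm = np.abs(m[:, None] - m[None, :])
    Ct, C0 = np.ones_like(theta), 1.0
    for q, G in Gamma.items():
        w = A * G**2 * ((dm == q) * sinc2 * I).sum()  # lumped weight of cos(q*theta)
        Ct = Ct + w * np.cos(q * theta)
        C0 += w
    return Ct / C0
```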
The expression (5) is a perturbative result, valid when photons scatter on average once over the disordered potential \(\mathbf{V}\). As the extent of disorder increases, it becomes necessary to take higher-order perturbations into account. This means taking into account multiple interactions between photons and disorder. For all the results presented in this work, the single scattering contribution (5) is dominant, but we have also calculated the second-order perturbation contribution to obtain quantitative agreement with experimental results and simulations. The second-order contribution takes the following form
\[\tilde{C}^{(2)}(\theta)=\tilde{A}\sum_{\begin{subarray}{c}q,q^{\prime}\\ \nu,\kappa,\mu\end{subarray}}\Gamma_{q}^{2}\Gamma_{q^{\prime}}^{2}\cos[(q+q^{\prime})\theta]C_{\nu\kappa\mu}^{qq^{\prime}}, \tag{8}\]
where \(\tilde{A}=N_{z}(kl_{z})^{4}/16N_{\text{modes}}\). Energy coupling is provided by the term
\[C^{qq^{\prime}}_{\nu\kappa\mu}=\frac{N_{z}-1}{2}B^{q}_{\nu\kappa}B^{q^{\prime}}_ {\kappa\mu}+\delta_{m_{\mu\kappa},q}\delta_{m_{\kappa\nu},q^{\prime}}Q_{\mu \kappa\nu}I_{\nu\kappa}I_{\kappa\mu}, \tag{9}\]
where the explicit but lengthy expression of the coefficient \(Q_{\mu\kappa\nu}\) in terms of the propagation constants \(\beta_{\mu}\), \(\beta_{\kappa}\), and \(\beta_{\nu}\) is given in Appendix C.
### Validation of the model of disorder and theory
To validate the model of disorder as well as our theoretical predictions based on Eqs. (5) and (8), we first find the values of the coefficients \(\Gamma_{q}\) that best match the experimental profile of the mean correlation function \(\langle C(\theta)\rangle\). For the graded-index fibers used in our experiments, we find that it is sufficient to use only four non-zero coefficients, corresponding to \(q\in\{1,2,3,4\}\). The experimental results shown in Fig. 3 (blue solid lines) are virtually indistinguishable from the analytical curves (black solid lines). To assess the physical relevance of these coefficients, we then perform simulations using the same values of \(\Gamma_{q}\), without adding any fitting parameter. The simulation consists of dividing the fiber into segments of length \(l_{z}\). For each segment, we add a perturbation of the form given by Eq. (2) to the index profile matching the specifications of the fiber. We then estimate its TM using a custom fiber solver [41]. Finally, the complete TM of the fiber is obtained by multiplying the TMs of all segments, each segment corresponding to a different realization of the radial disorder (see Appendix B for more details). We calculate the RME correlation as a function of angle and average over 20 realizations of the fiber. The results, presented in Fig. 3, show good agreement between simulations and theory, with no adjustment parameters required.
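Since, at first order, Eq. (5) collapses to a cosine series \(1+\sum_{q}w_{q}\cos(q\theta)\) with lumped weights \(w_{q}\propto\Gamma_{q}^{2}\), the fitting step can be sketched as a simple least-squares problem; the data below are synthetic and for illustration only:

```python
import numpy as np
from scipy.optimize import curve_fit

def cos_series(theta, w1, w2, w3, w4):
    """Normalized first-order form of Eq. (5): C(theta) = C~(theta)/C~(0)."""
    num = (1 + w1 * np.cos(theta) + w2 * np.cos(2 * theta)
             + w3 * np.cos(3 * theta) + w4 * np.cos(4 * theta))
    return num / (1 + w1 + w2 + w3 + w4)

theta = np.linspace(0, 2 * np.pi, 50)
rng = np.random.default_rng(3)
C_meas = cos_series(theta, 0.05, 0.3, 0.02, 0.1) + 0.01 * rng.normal(size=50)
w, _ = curve_fit(cos_series, theta, C_meas, p0=[0.1, 0.1, 0.1, 0.1])
print(np.round(w, 3))  # lumped weights, from which the Gamma_q can be backed out
```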
### Discussion and interpretation
The different values of the angular momenta \(q\) of the deformation have different origins and impacts on the RME. Global radial index variations, corresponding to \(q=0\), alter the shape of the modes but do not break the axisymmetry of the system. This is equivalent to changing the length of the fiber, which impacts only the relative phase between the modes [42]. The TM remains diagonal in the mode basis and commutes with the rotation operator \(\mathbf{R}(\theta)\) for any angle \(\theta\). As a result, the \(q=0\) component does not impact the RME.
In the absence of external perturbation (\(\Delta x=0\)), thanks to prior compensation for aberrations, the deviation of the correlation curve from a perfect RME (\(C(\theta)=1\)) is due to intrinsic fiber defects caused by the fabrication process, which give rise to non-zero \(\Gamma_{q}\), for \(q>0\). In this regime, we find that the correlation function is dominated by the contributions of even values of \(q\). The contribution \(q=2\) is responsible for the valleys found at \(\theta=\pi\pm\pi/2\), and the contribution \(q=4\) for the valleys observed at \(\theta=\pi/2\pm\pi/4\) and \(\theta=3\pi/2\pm\pi/4\). Even contributions have no impact on the value of the correlation at \(\theta=\pi\), simply because they correspond to \(\pi\)-symmetric deformations (see inset of Fig. 4). Consequently, the slight decrease in correlation at \(\theta=\pi\) is entirely controlled by the odd deformations.
Although all \(\Gamma_{q}\) terms are of the same order of magnitude,
Figure 3: **Comparison between experiment, simulations, and theory**. Mean angular correlation function of the RME, as defined in Eq. (1), for various levels of deformation. The fiber used is a typical graded index fiber (Prysmian BendBright OM4 [29]), with radius \(a=50\) μm, NA\(=0.2\), and \(N_{\text{modes}}=55\). The correlation length in the model and simulations is set at \(l_{z}=100\) μm. Experimental data (blue lines) are compared with theoretical predictions based on Eqs. (5) and (8) (black lines), and simulation results for wave propagation inside disordered MMFs (red lines). The parameters \(\Gamma_{q}\) of the model are found by fitting to the experimental results, and simulations are obtained with the same parameters. Error bars represent the standard deviation computed over 100 random input wavefronts for the simulations and experiments, as well as 20 disorder realizations for the simulations.
Figure 4: **Influence of deformation on the perturbation contributions.** Values of the normalized deformation parameters \(\tilde{\Gamma}_{q}=kl_{z}\sigma_{g}(r=a)\Gamma_{q}\). The values of \(\Gamma_{q}\) are found by fitting the theoretical model [Eqs. (5) and (8)] to the experimental data as a function of the deformation. In the inset, we show the symmetry corresponding to the perturbation associated with each value of \(q\).
we observe that the effect of odd contributions, which couple modes whose orbital angular momenta differ in parity, is much less pronounced than that of even contributions, which couple modes of the same parity. This is explained by the modal properties of the fiber. Indeed, in ideal graded-index fibers, modes in quasi-degenerate groups, i.e. with similar propagation constants \(\beta_{\mu}\), share the same parity of the orbital angular momentum. This property is inherited from the modes of the two-dimensional isotropic harmonic oscillator, which represents the idealized parabolic graded-index fiber with no boundary [43, 44, 45, 46]. Consequently, for pairs of modes for which \(m_{\mu\nu}\) is odd, the difference of propagation constants \(\beta_{\mu}-\beta_{\nu}\) is non-negligible, leading to weak contributions of \(B_{\nu\mu}^{q}\) appearing in Eq. (5). This effect has the same origin as the observation that disorder preferentially induces coupling between degenerate modes [28].
The previous analysis fully explains the robustness of the correlation observed at \(\theta=\pi\) for small deformations. We note that this correlation revival can equivalently be interpreted in terms of commutation between the matrix \(\mathbf{T}\) and the rotation matrix \(\mathbf{R}(\theta)\). At small deformations, \(\mathbf{T}\) is block-diagonal, with blocks corresponding to quasi-degenerate modes. Since, within each block, the different angular momenta \(m_{\mu}\) share the same parity (corresponding to constant values of \(|m_{\mu}|+2p_{\mu}\), where \(p_{\mu}\) is the radial index [47]), the expression of \(\mathbf{R}(\theta)\) restricted to each block necessarily satisfies \(\mathbf{R}(\theta=\pi)=\pm\mathbb{1}\), where \(\mathbb{1}\) is the identity matrix. As a result, \(\mathbf{R}(\theta=\pi)\) and \(\mathbf{T}\) commute, regardless of the coupling complexity within each group of modes.
When the external mechanical deformation is introduced, we find that the value of \(\Gamma_{1}\), corresponding to a flattening of the fiber, gradually increases (see Fig. 4). This means that modes of different propagation constant become more and more coupled, and that the TM progressively loses its block-diagonal structure in the mode basis. This explains the disappearance of the dominant revival effect at \(\theta=\pi\), as well as the loss of the local maxima at \(\theta=\pi/2\) and \(\theta=3\pi/2\). As \(\Delta x\) is further increased, we also observe that higher harmonics start to play a role (see \(\Gamma_{3}\) in Fig. 4), and the width of the correlation function \(\langle C(\theta)\rangle\) starts to decrease. This indicates that the width of the correlation function at large deformation depends on the disorder strength and is intimately connected to the loss of the block-diagonal structure of the TM.
In Appendix D, we present measurements of \(\langle C(\theta)\rangle\) obtained for various graded-index fibers, which exhibit advertised properties similar to those of the fiber used in Figs. 3 and 4. Although we obtain qualitatively similar results, we do find some quantitative reproducible differences, expressed in terms of different values for the \(\Gamma_{q}\) weights. This demonstrates that RME is a very good indicator for probing the small variations in disturbances that occur during the MMF manufacturing process.
## IV Tailoring the RME
In previous sections, we studied and measured the mean correlation, _i.e._ that obtained by averaging over random input wavefronts. We now ask whether it is possible to find specific input wavefronts for which the correlation is significantly higher than the mean value, either for one given angle or over a wide range of angles. As with the approaches to tailoring the angular memory effect in scattering media [48], we can build operators to optimize the memory effect at a given angle. Since losses are low in the fiber, the TM is close to unitary and the denominator of Eq. (1) is approximately constant. An interesting operator is then the one involved in the numerator of the correlation function,
\[\mathbf{O}(\theta_{t})=\mathbf{T}^{\dagger}\mathbf{R}(\theta_{t})^{\dagger} \mathbf{T}\mathbf{R}(\theta_{t}). \tag{10}\]
This operator can be used to improve the RME for a specific value \(\theta_{t}\) of \(\theta\), as shown in Appendix E.1. In order to improve the correlation over the entire \(2\pi\)-range, we can also study the operator built using the sum of operators describing the correlation at different angles,
\[\mathbf{O}_{\text{sum}}=\sum_{t}\mathbf{T}^{\dagger}\mathbf{R}(\theta_{t})^{ \dagger}\mathbf{T}\mathbf{R}(\theta_{t}). \tag{11}\]
To optimize the RME correlation, we construct this operator using the experimentally measured TMs with \(\theta_{t}=t\cdot\pi/4\), where \(t\in[0,7]\). We then compute the singular vectors of this operator corresponding to the singular
Figure 5: **Tailoring the rotational memory effect.** The angular correlation function \(C(\theta)\) is constructed using experimentally measured input channels with improved RME range, for two values of the deformation (\(\Delta x=0\,\mu\)m and \(\Delta x=60\,\mu\)m). The results for the first two singular vectors of the operator defined in Eq. (11) are compared with the average results for random input profiles (dashed line). The insets show the output spatial transverse profiles of the corresponding singular vectors.
values with the largest modulus. We present in Fig. 5 the resulting correlation \(C(\theta)\) for the first two singular vectors, in the cases of no deformation and strong deformation (\(\Delta x=60\,\mu\)m), together with the corresponding output field profiles. Our results demonstrate that it is possible to find input wavefronts for which the output profiles remain highly correlated across the entire \(2\pi\)-range.
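A minimal numerical sketch of this construction, in the mode basis where \(\mathbf{R}(\theta)\) is diagonal (toy TM and angular momenta of ours):

```python
import numpy as np

def rme_sum_operator(T, m, thetas):
    """Operator of Eq. (11): sum over angles of T^dag R(th)^dag T R(th),
    with R(theta) = diag(exp(1j * m * theta)) in the mode basis."""
    O = np.zeros_like(T, dtype=complex)
    for th in thetas:
        d = np.exp(1j * m * th)
        O += T.conj().T @ (d.conj()[:, None] * (T * d[None, :]))
    return O

# toy demo with a random unitary TM (values are illustrative)
rng = np.random.default_rng(4)
Q, _ = np.linalg.qr(rng.normal(size=(12, 12)) + 1j * rng.normal(size=(12, 12)))
m = rng.integers(-3, 4, size=12)
O = rme_sum_operator(Q, m, np.arange(8) * np.pi / 4)
U, s, Vh = np.linalg.svd(O)
psi_candidate = Vh[0].conj()  # leading singular vector as candidate input
```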
An interesting application of memory effects is the ability to retrieve information from the distal side, where the field for a given input wavefront is _a priori_ unknown. For imaging applications, the range of the memory effect must be wide enough to cover the size of the object to be imaged, and the output excitation must have a pronounced peak autocorrelation function [12]. This last condition is guaranteed in multiple scattering media by the presence of strong disorder that randomizes the field for any given input wavefront. However, this is not the case in MMFs, where the disorder does not affect all modes in the same way [31]. A trivial solution for maximizing the RME range would be to use the fundamental mode, which is less affected by external perturbations [28]. But, due to its rotational symmetry, the autocorrelation of this mode with respect to angular rotation is close to one. So, even though the field profile remains correlated at the fiber's output when the input profile is rotated, this mode cannot be used to provide information about the distal end of the fiber. As shown in Fig. 5, the first singular vector of the operator (11) is very close to the fundamental mode for any \(\Delta x\) and is therefore not useful for imaging. However, the second combines the properties of a large-range RME and an output pattern with a peaked autocorrelation function (see Appendix E.2 for details). It is therefore a good candidate for recovering information about the fiber distal end.
## V Conclusion
In this article, we first present an approach based on the TM measurement that enables us to accurately measure and study RME in MMFs. Importantly, this method allows us to mitigate the effects of aberrations and misalignments that can significantly disrupt RME analysis.
We then propose a model of disorder and provide a theoretical calculation of the RME correlation function, which is shown to be in good agreement with both experimental data and simulations, for the different \(\Delta x\) positions of the moving tip used to control the degree of disorder. In particular, our model makes it possible to estimate geometric perturbations in the fiber, whether due to fabrication imperfections or mechanical deformations. This approach can serve as a powerful tool for the study of MMF defects resulting from the breaking of fiber symmetry.
Finally, we demonstrate the possibility of generating channels that exhibit a drastic improvement in the RME. In particular, we can create channels that are more robust to deformations compared to random inputs or standard fiber modes, and that also exhibit a random profile with high spatial frequencies. This latter feature is essential for using the memory effect for blind image recovery [12].
Our study paves the way for the use of RME in endoscopy applications, where fiber disturbance changes over time and access to the distal end is impossible. The use of RME is also very promising for classical or quantum optical telecommunications, as RME could be used to encode data through the fiber.
###### Acknowledgements.
The authors warmly thank Yaron Bromberg for his invaluable contributions. R.G.C, A.G, J.R. and S.M.P acknowledge the French _Agence Nationale pour la Recherche_ grant No. ANR-23-CE42-0010-01 MUFFIN and the Labex WIFI grant No. ANR-10-LABX-24, ANR-10-IDEX-0001-02 PSL*. R.G.C, A.G., E.R.A., L.B., Y.Q., M.B., P.S., M.B., J.R., and S.M.P acknowledge the French _Agence Nationale pour la Recherche_ grant No. ANR-20-CE24-0016 MUPHTA.
## Data and code availability
Raw and processed data, sources to regenerate all the figures, and sample code for pre- and post-processing are available in the dedicated repository [49].
## Appendix A TM and RME measurements
### Aberration compensation and TM measurement
To decouple the effects of the RME and measurement inaccuracies, we use the approach we developed to learn and compensate for aberrations and misalignments in Ref. [28]. The idea is to first measure the TM on a pixel basis and then project it onto the mode basis. Without aberrations, this projection into the mode basis should conserve energy, since all the energy must be conveyed by those modes. Using a model-based algorithm, constructed with the deep-learning framework PyTorch [50], we identify the aberrations and misalignments of the system that minimize the loss when projecting onto the mode basis. First, this process enables us to accurately recover the TM in the mode basis, which contains all the information about light propagation in the MMF. Secondly, it facilitates the identification of the aberrations that need to be corrected in order to obtain a desired pattern in the input facet plane of the fiber. The correction can then be implemented onto the SLM.
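The following PyTorch fragment conveys the spirit of this model-based optimization; the aberration basis, shapes, and loss below are simplified placeholders of ours, not the actual implementation of Ref. [28]:

```python
import torch

def learn_aberrations(T_pix, modes_in, modes_out, basis, steps=200, lr=1e-2):
    """Fit input/output correction phases so that the corrected pixel-basis TM
    concentrates its energy in the fiber-mode basis (simplified sketch).

    T_pix: (Np, Np) complex measured TM; modes_in/out: (Np, Nm) complex mode
    profiles; basis: (Nab, Np) real aberration basis (e.g. Zernike-like).
    """
    c_in = torch.zeros(basis.shape[0], requires_grad=True)
    c_out = torch.zeros(basis.shape[0], requires_grad=True)
    opt = torch.optim.Adam([c_in, c_out], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        phi_in = torch.exp(1j * (basis * c_in[:, None]).sum(0))
        phi_out = torch.exp(1j * (basis * c_out[:, None]).sum(0))
        Tc = phi_out[:, None].conj() * T_pix * phi_in[None, :]  # corrected TM
        T_modes = modes_out.conj().T @ Tc @ modes_in            # mode-basis TM
        loss = 1 - (T_modes.abs() ** 2).sum() / (Tc.abs() ** 2).sum()
        loss.backward()
        opt.step()
    return c_in.detach(), c_out.detach()

# toy invocation with random placeholders for the measured quantities
Np, Nm, Nab = 64, 8, 5
T_pix = torch.randn(Np, Np, dtype=torch.cfloat)
modes, _ = torch.linalg.qr(torch.randn(Np, Nm, dtype=torch.cfloat))
c_in, c_out = learn_aberrations(T_pix, modes, modes, torch.randn(Nab, Np))
```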
### Intensity vs field correlation
In the main text, we studied the RME using the field correlation function (1). Another way to characterize the RME amounts to estimating the correlation between the output intensity patterns associated with the fields \(\psi(\mathbf{r})\) and \(\psi_{\theta}(\mathbf{r})\) involved in Eq. (1) [26]. It compares the output intensity pattern \(I(\mathbf{r})=|\psi(\mathbf{r})|^{2}\) for a given input with the intensity \(I_{\theta}(\mathbf{r})=|\psi_{\theta}(\mathbf{r})|^{2}\) obtained when rotating the input and output profiles by an angle \(\theta\). The intensity correlation reads
\[C_{I}(\theta)=\frac{\int d\mathbf{r}I(\mathbf{r})I_{\theta}(\mathbf{r})}{ \sqrt{\int d\mathbf{r}I(\mathbf{r})^{2}\int d\mathbf{r}I_{\theta}(\mathbf{r}) ^{2}}}. \tag{10}\]
Such a correlation function was originally used in Ref. [26]. To estimate it, we use the same procedure and apparatus as the ones discussed in Sec. II.2, where intensities can be evaluated from the field measurements. We then compare the mean field correlation (1) with the square root of the mean intensity correlation (10). As illustrated in Fig. 6, the mean values of the two correlation functions are in excellent agreement. This demonstrates that the observables \(\langle C(\theta)\rangle\) and \(\langle C_{I}(\theta)\rangle\) are equivalent.
### Estimation of the correlation using the measured TM
In the present study, we use the compensation of aberrations, facilitated by the transmission matrix (TM) measurement, but we do not directly employ the knowledge of the TM itself. However, the TM provides access to the output field \(|\psi_{\mathrm{out}}\rangle\) for any given input field \(|\psi_{\mathrm{in}}\rangle\). We can thus use the TM to estimate the output of a rotated wavefront and compute the correlation function defined in Eq. (1). For each deformation, we compute the mean correlation for 100 random input wavefronts. We show in Fig. 7 a good agreement between the estimation based on the TM and the one based on the explicit measurement of the output field. This demonstrates that the measurement of the TM can drastically reduce the time needed for characterizing the RME, as it does not require any additional measurement. In comparison, the explicit measurements presented in the main text require, after the initial calibration, averaging over 100 random input wavefronts for 50 different angles.
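The TM-based estimation can be sketched as follows, in the mode basis where rotations are diagonal (a minimal illustration of ours):

```python
import numpy as np

def rme_from_tm(T, m, thetas, n_inputs=100, seed=0):
    """Estimate <C(theta)> of Eq. (1) from a mode-basis TM alone, using
    R(theta) = diag(exp(1j * m * theta)); no extra measurements needed."""
    rng = np.random.default_rng(seed)
    N = T.shape[0]
    psi_in = rng.normal(size=(N, n_inputs)) + 1j * rng.normal(size=(N, n_inputs))
    psi_in /= np.linalg.norm(psi_in, axis=0)
    out = T @ psi_in
    C = []
    for th in thetas:
        d = np.exp(1j * m * th)
        out_th = d.conj()[:, None] * (T @ (d[:, None] * psi_in))  # R(-th) T R(th)
        num = np.abs(np.sum(out.conj() * out_th, axis=0))
        den = np.linalg.norm(out, axis=0) * np.linalg.norm(out_th, axis=0)
        C.append(np.mean(num / den))
    return np.array(C)
```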
## Appendix B Effective Hamiltonian and transmission matrix
In the situation where the coupling between different polarization channels can be neglected, and in the weakly guiding approximation [40] (_i.e._ for variations of the index of refraction small compared to the average index value), each polarization of the transverse part of the field at frequency \(\omega\) satisfies the scalar wave equation
\[\left[\nabla_{\perp}^{2}+\partial_{z}^{2}+k^{2}n(\mathbf{r},z)^{2}\right]\psi (\mathbf{r},z)=0, \tag{11}\]
where \(k=\omega/c\) and \(\mathbf{r}=(r,\phi)\) labels the position in the transverse plane. The refractive index is further decomposed into an unperturbed axisymmetric component \(n_{0}\) and a perturbation \(\delta n\ll n_{0}\),
\[n(\mathbf{r},z)=n_{0}(r)+\delta n(\mathbf{r},z). \tag{12}\]
Figure 6: **Comparison between the field and the intensity correlations as a function of the rotation angle \(\theta\).** For different values of the deformation \(\Delta x\), we show the field correlation as defined in Eq. (1) (blue curve), as well as the square root of the intensity correlation defined in Eq. (10) (red curve).
Figure 7: **Comparison between the correlation \(C(\theta)\) based on the measurement of the output fields, and the one estimated using the TM.** For different values of the deformation \(\Delta x\), we show the field correlation as defined in Eq. (1) (blue curve), and the one obtained using the TM (red curve).
To identify the effective Hamiltonian that controls the dynamics in the presence of disorder, it is convenient to rewrite the wave equation in the operator form
\[\partial_{z}^{2}|\psi(z)\rangle=-\hat{H}(z)^{2}|\psi(z)\rangle, \tag{10}\]
where
\[\hat{H}(z) =\left[\hat{\nabla}_{\perp}^{2}+k^{2}\hat{n}(\mathbf{r},z)^{2} \right]^{1/2}\] \[\simeq\left[\hat{H}_{0}^{2}+2k^{2}\hat{n}_{0}(r)\delta\hat{n}( \mathbf{r},z)\right]^{1/2}\] \[\simeq\hat{H}_{0}+k^{2}\hat{H}_{0}^{-1}\hat{n}_{0}(r)\delta\hat{n }(\mathbf{r},z). \tag{11}\]
The Hamiltonian of the unperturbed problem reads \(\hat{H}_{0}=\left[\hat{\nabla}_{\perp}^{2}+k^{2}\hat{n}_{0}(r)^{2}\right]^{1/2}\). Since in a realistic MMF, the relative variations of \(n_{0}(r)\) in the radial direction are small, the eigenvalues \(\beta_{\mu}\) of \(\hat{H}_{0}\) are close to \(kn_{0}\), where \(n_{0}\) is the typical refractive index of the core. Therefore, a good approximation of \(\hat{H}\) is
\[\hat{H}(z)\simeq\hat{H}_{0}+k\delta\hat{n}(\mathbf{r},z). \tag{12}\]
In the present work, back reflections can be neglected, and Eq. (10) is equivalent to
\[\partial_{z}|\psi(z)\rangle=-i\hat{H}(z)|\psi(z)\rangle. \tag{13}\]
This shows that the field transmitted through the fiber of length \(L\) can be expressed in terms of a unitary transmission matrix \(\mathbf{T}\) as \(|\psi(L)\rangle=\mathbf{T}|\psi(0)\rangle\). The matrix \(\mathbf{T}\) reads
\[\mathbf{T}=\mathcal{T}e^{-i\int_{0}^{L}dz^{\prime}\hat{H}(z^{\prime})}, \tag{14}\]
where \(\mathcal{T}\) is the time-ordering operator (\(z\) plays the role of time here). In the following, we model the disorder along the propagation direction \(z\) as a succession of \(N_{z}=L/l_{z}\) independent segments of length \(l_{z}\), where the refractive index depends only on the transverse coordinate \(\mathbf{r}\). In that case, the transmission matrix takes the form
\[\mathbf{T}=\prod_{p=1}^{N_{z}}\mathbf{T}^{(p)}, \tag{15}\]
with
\[\mathbf{T}^{(p)}=e^{-i[\hat{H}_{0}+k\delta\hat{n}_{p}(\mathbf{r})]l_{z}}. \tag{16}\]
The index fluctuations of each sector \(p\) are expressed as the product of a random function along the radial direction and a random function decomposed on the azimuthal harmonics,
\[\delta n_{p}(r,\phi)=g_{p}(r)\sum_{q}\Gamma_{q}\mathrm{cos}(q\phi+\varphi_{q}). \tag{17}\]
Here, \(g_{p}(r)\) is a Gaussian random variable with zero mean and variance \(\left\langle g_{p}(r)g_{p}(r^{\prime})\right\rangle=\sigma_{g}(r)^{2}d_{ \mathrm{layer}}\delta(r-r^{\prime})\), where \(d_{\mathrm{layer}}\) is the thickness of each layer obtained in the chemical vapor deposition process. In addition, the phases \(\varphi_{q}\) are random independent variables with uniform distribution, added to mitigate the effect of the orientation of the perturbation.
In this work, we focus on the properties of graded index fibers, where the refractive index \(n_{0}(r)\) takes the form
\[n_{0}(r)^{2}=n_{\mathrm{max}}^{2}\left(1-2\Delta\frac{r^{2}}{a^{2}}\right) \tag{18}\]
in the core of the fiber of radius \(a\). Here \(\Delta=\left(n_{\mathrm{max}}-n_{\mathrm{cl}}\right)/n_{\mathrm{max}}\), where \(n_{\mathrm{cl}}\) is the refractive index in the cladding, _i.e._ for \(r>a\). In the weakly guiding approximation (NA \(\ll 1\)), \(\Delta\simeq\mathrm{NA}^{2}/2n_{\mathrm{max}}^{2}\) and the refractive index profile in the core is well approximated by a parabolic function, \(n_{0}(r)\simeq n_{\mathrm{max}}(1-\Delta r^{2}/a^{2})\). This yields the explicit expression (3) for the amplitude of the radial disorder \(\sigma_{g}(r)\).
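For reference, the unperturbed profile of Eq. (18) and its weak-guidance approximation can be written compactly (parameter values below are illustrative only):

```python
import numpy as np

def n0(r, a=25e-6, n_max=1.45, NA=0.2):
    """Graded-index profile of Eq. (18), with Delta ~ NA^2 / (2 n_max^2)
    in the weak-guidance limit; constant cladding index for r > a."""
    Delta = NA ** 2 / (2 * n_max ** 2)
    core = n_max * np.sqrt(np.clip(1 - 2 * Delta * (r / a) ** 2, 0.0, None))
    return np.where(r <= a, core, n_max * (1 - Delta))
```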
The expressions (15), (16), and (17) are used both in the theoretical treatment developed in Appendix C and in the numerical simulations. For simulation purposes, the mode profiles \(\psi_{\mu}\) and propagation constants \(\beta_{\mu}\) of the unperturbed fiber (which are the eigenstates and eigenvalues of \(\hat{H}_{0}\)) are computed using the pyMMF package [28; 41]. The Hamiltonian (12) and transmission matrix (16) of each sector \(p\) are then computed in the basis \(\{\psi_{\mu}\}\). Finally, the total TM is found by multiplying the TMs of all the segments, as in Eq. (15). Details of the simulations, performed in Python, are available in the dedicated repository [49].
## Appendix C Analytical predictions for the RME
In this appendix, we evaluate the mean correlator \(\langle C(\theta)\rangle=\tilde{C}(\theta)/\tilde{C}(0)\), where
\[\tilde{C}(\theta)=\overline{\langle\psi|\psi_{\theta}\rangle}=\overline{ \langle\psi_{\mathrm{in}}|\mathbf{T}^{\dagger}\mathbf{T}_{\theta}|\psi_{ \mathrm{in}}\rangle}, \tag{19}\]
and \(\overline{\dots}=\langle\dots\rangle\) stands for the average over different configurations of the disorder. We first decompose the input field in the mode basis \(\{\psi_{\mu}\}\) of the unperturbed Hamiltonian \(\hat{H}_{0}\),
\[|\psi_{\mathrm{in}}\rangle=\sum_{\mu=1}^{N}c_{\mu}|\psi_{\mu}\rangle, \tag{20}\]
where \(\sum_{\mu=1}^{N}|c_{\mu}|^{2}=1\). In the following, we write the unperturbed eigenmodes in the form
\[\psi_{\mu}(r,\phi)=\frac{1}{\sqrt{2\pi}}\varphi_{\mu}(r)e^{im_{\mu}\phi}, \tag{21}\]
so that the normalization condition \(\langle\psi_{\mu}|\psi_{\mu}\rangle=1\) reads
\[\int_{0}^{\infty}drr|\varphi_{\mu}(r)|^{2}=1. \tag{22}\]
In addition, we consider random input wavefronts, uniformly distributed over the \(N_{\rm modes}\) modes of the MMF. Using \(\langle c_{\mu}c_{\mu^{\prime}}^{*}\rangle=\delta_{\mu,\mu^{\prime}}/N_{\rm modes}\), we express the correlator (14) as
\[\tilde{C}(\theta)=\frac{1}{N_{\rm modes}}\sum_{\nu,\mu}e^{-i(m_{\nu}-m_{\mu}) \theta}\left\langle|T_{\nu\mu}|^{2}\right\rangle. \tag{15}\]
We then use the decomposition (12), where TMs \(\mathbf{T}^{(p)}\) are independent of each other, and satisfy \(\langle\mathbf{T}^{(p)}\rangle=0\). This gives
\[\langle|T_{\nu\mu}|^{2}\rangle=\left(\prod_{p=1}^{N_{z}}\langle\mathbf{T}^{(p )}\otimes\mathbf{T}^{(p)\dagger}\rangle\right)_{\nu\mu}. \tag{16}\]
In the case of weak disorder, we can evaluate the previous correlator using a perturbative expansion of each matrix \(\mathbf{T}^{(p)}\). To obtain an explicit form of the latter, it is more convenient to work with the interaction representation \(\mathbf{T}_{I}(z)=e^{i\hat{H}_{0z}}\mathbf{T}(z)\) than directly using the expansion of Eq. (12). As the matrix \(\mathbf{T}_{I}(z)\) obeys the equation \(\partial_{z}\mathbf{T}_{I}(z)=-i\hat{V}_{I}(z)\mathbf{T}_{I}(z)\), where \(\hat{V}_{I}(z)=e^{i\hat{H}_{0z}}\hat{V}e^{-i\hat{H}_{0z}}\) and \(\hat{V}(z)=k\delta\hat{n}(z)\), it can be expanded, up to the second order in \(\hat{V}_{I}\), in the form
\[\mathbf{T}_{I}(z) =\mathbb{1}-i\int_{0}^{z}dz^{\prime}\hat{V}_{I}(z^{\prime}) \mathbf{T}_{I}(z^{\prime})\] \[\simeq\mathbb{1}-i\int_{0}^{z}\!\!dz^{\prime}\hat{V}_{I}(z^{ \prime})-\int_{0}^{z}\!\!dz^{\prime}\!\!\int_{0}^{z^{\prime}}\!\!dz^{\prime \prime}\hat{V}_{I}(z^{\prime})\hat{V}_{I}(z^{\prime\prime}). \tag{17}\]
Physically, this expansion corresponds to a situation where photons interact at most twice with the disordered potential located in a section of the fiber of length \(z\). Inside each sector of length \(l_{z}\), the potential \(\hat{V}(z)\) is invariant along \(z\), so that integrals in Eq. (17) can be evaluated explicitly. This allows us to find the expression of \(\mathbf{T}^{(p)}=e^{-i\hat{H}_{0z}}\mathbf{T}_{I}(l_{z})\), up to second order in \(\hat{V}=k\delta\hat{n}_{p}\),
\[T_{\nu\mu}^{(p)}\simeq e^{-i\beta_{\nu}l_{z}}\left(\delta_{\nu\mu}+T_{\nu\mu}^{(p,1)}+T_{\nu\mu}^{(p,2)}\right), \tag{18}\]
where
\[T_{\nu\mu}^{(p,1)} =-il_{z}e^{i\beta_{\nu\mu}l_{z}/2}\text{sinc}\left(\beta_{\nu\mu}l_{z}/2\right)V_{\nu\mu}, \tag{19}\] \[T_{\nu\mu}^{(p,2)} =-il_{z}\sum_{\kappa}\frac{e^{i\beta_{\nu\kappa}l_{z}}}{\beta_{\nu\mu}}\left[e^{i\beta_{\kappa\mu}l_{z}/2}\text{sinc}\left(\beta_{\kappa\mu}l_{z}/2\right)\right.\] \[\quad+\left.e^{-i\beta_{\nu\kappa}l_{z}/2}\text{sinc}\left(\beta_{\nu\kappa}l_{z}/2\right)\right]V_{\nu\kappa}V_{\kappa\mu}, \tag{20}\]
with \(\beta_{\nu\mu}=\beta_{\nu}-\beta_{\mu}\). Inserting the expansion (18) into Eq. (16) and keeping terms up to second order in \(V\), we obtain
\[\langle|T_{\nu\mu}|^{2}\rangle \simeq\delta_{\nu\mu}+N_{z}\langle|T_{\nu\mu}^{(p,1)}|^{2}\rangle +N_{z}\langle|T_{\nu\mu}^{(p,2)}|^{2}\rangle\] \[\quad+\frac{N_{z}(N_{z}-1)}{2}\sum_{\kappa}\langle|T_{\nu\kappa}^{ (p,1)}|^{2}\rangle\langle|T_{\kappa\mu}^{(p,1)}|^{2}\rangle. \tag{21}\]
First-order contributions are of the form
\[\langle|T_{\nu\mu}^{(p,1)}|^{2}\rangle=l_{z}^{2}\,\text{sinc}\left(\beta_{\nu \mu}l_{z}/2\right)^{2}\langle|V_{\nu\mu}|^{2}\rangle, \tag{22}\]
where \(V_{\nu\mu}=k\langle\psi_{\nu}|\delta\hat{n}_{p}|\psi_{\mu}\rangle\). For the model of disorder given by Eq. (19), we find
\[\langle|V_{\nu\mu}|^{2}\rangle=\frac{k^{2}}{4}I_{\nu\mu}\sum_{q}\Gamma_{q}^{2}\delta_{q,|m_{\nu}-m_{\mu}|}, \tag{23}\]
where
\[I_{\nu\mu}=d_{\rm layer}\int dr|\psi_{\nu}(r)|^{2}|\psi_{\mu}(r)|^{2}\sigma_{g }(r)^{2}r^{2}. \tag{24}\]
Second order contributions \(\langle|T_{\nu\mu}^{(p,2)}|^{2}\rangle\) involve averages of the form \(\mathcal{C}_{\nu\kappa\mu}^{\nu\kappa^{\prime}\mu}=\langle V_{\nu\kappa}V_{\kappa\mu}V_{\nu\kappa^{\prime}}^{*}V_{\kappa^{\prime}\mu}^{*}\rangle\), which we can contract as
\[\mathcal{C}_{\nu\kappa\mu}^{\nu\kappa^{\prime}\mu} =\langle V_{\nu\kappa}V_{\nu\kappa^{\prime}}^{*}\rangle\langle V_{\kappa\mu}V_{\kappa^{\prime}\mu}^{*}\rangle+\langle V_{\nu\kappa}V_{\kappa^{\prime}\mu}^{*}\rangle\langle V_{\kappa\mu}V_{\nu\kappa^{\prime}}^{*}\rangle\] \[\simeq\langle|V_{\nu\kappa}|^{2}\rangle\langle|V_{\kappa\mu}|^{2}\rangle\delta_{\kappa\kappa^{\prime}}+\langle|V_{\nu\nu}|^{2}\rangle^{2}\delta_{\kappa\kappa^{\prime}}\delta_{\nu\mu}\delta_{\nu\kappa}. \tag{25}\]
Combining the expression (19) with the previous result, we obtain
\[\langle|T_{\nu\mu}^{(p,2)}|^{2}\rangle\simeq l_{z}^{4}\sum_{\kappa}Q_{\nu \kappa\mu}\langle|V_{\nu\kappa}|^{2}\rangle\langle|V_{\kappa\mu}|^{2}\rangle, \tag{26}\]
where \(Q_{\nu\kappa\mu}\) is a coupling weight between different energy subspaces,
\[Q_{\nu\kappa\mu}=\frac{1}{\beta_{\nu\mu}^{2}l_{z}^{2}}\left[\text{sinc}\left( \beta_{\nu\kappa}l_{z}/2\right)^{2}+\text{sinc}\left(\beta_{\kappa\mu}l_{z}/2 \right)^{2}\right.\] \[\left.-2\text{sinc}\left(\beta_{\nu\kappa}l_{z}/2\right)\text{ sinc}\left(\beta_{\kappa\mu}l_{z}/2\right)\text{cos}\left(\beta_{\nu\mu}l_{z}/2 \right)\right]+\frac{1}{4}\delta_{\nu\kappa}\delta_{\kappa\mu}. \tag{27}\]
Finally, we insert the result (21) into the expression (15) of the correlator, to get an expansion of the form
\[\tilde{C}(\theta)=1+\tilde{C}^{(1)}(\theta)+\tilde{C}^{(2)}(\theta). \tag{28}\]
The first order in \(V^{2}\) reads
\[\tilde{C}^{(1)}(\theta)=A\sum_{q,\nu,\mu}\Gamma_{q}^{2}\cos(q\theta)B_{\nu\mu}^{q}\,,\]
with \(A=N_{z}(kl_{z})^{2}/4N_{\text{modes}}\), which coincides with the first-order result (5) of the main text.
## Appendix D RME characterization of different graded-index fibers
In this Appendix, we report the measurements for different fiber segments of the same length (\(L=24.5\) cm), and with advertised properties similar to those of the fiber used in the main text. Specifically, we use samples from a Thorlabs 50-micron core OM2 graded-index fiber (GIF50C, NA = 0.2).
Results for different fiber segments of the same spool are reproducible. We present typical results for one sample in Fig. 8. We observe different contributions of the \(\Gamma_{q}\) terms from those reported in Fig. 3, where a Prysmian BendBright OM4 fiber was used [29]. In particular, \(\Gamma_{4}\) is much smaller, leading to the absence of observed local maxima of the correlation at \(\pi/2\) and \(3\pi/2\).
## Appendix E Properties of the RME channels
### RME Operator for one angle value
We consider here the operator \(\mathbf{O}(\theta_{t})\) defined in Eq. (10), which represents the numerator of the correlation function (1). Computing the singular values of this operator enables the identification of input wavefronts that maximize the angular correlation for a specific value \(\theta_{t}\) of \(\theta\). We present in Fig. 9 the resulting correlation \(C(\theta)\) of the first two singular vectors for \(\theta_{t}=\pi/2\), in the cases of no deformation and strong deformation (\(\Delta x=60\,\mu\)m), along with the corresponding output field profiles. As with the results presented for the sum operator in Fig. 5, the first singular vector of the operator (10) closely resembles the fundamental mode for any \(\Delta x\). The second singular vector exhibits higher spatial frequencies and achieves a maximum in the correlation at the target angular value \(\theta_{t}\). However, although the correlation curve is consistently higher than the average correlation, it displays significant fluctuations over the \(2\pi\) range.
### Autocorrelation of the RME channels
As stated in the main text, for efficient information retrieval from the hidden side of a complex medium, the output pattern used should possess both a substantial memory effect range and a narrow autocorrelation function. This implies that the correlator \(C(\theta)\) defined in Eq. (1) displays a broad width, and that the angular autocorrelation of the output pattern is sharply peaked.
Figure 8: **RME correlation results for a batch of GIF50C.** (a) Angular correlation function of the RME, as defined in Eq. (1), for various levels of deformation \(\Delta x\). Experimental data (blue) are compared to the theoretical prediction based on Eqs. (5) and (8) (black), and to simulation results obtained with the same parameters as those used in the theoretical model (red). (b) Values of the normalized deformation parameters \(\tilde{\Gamma}_{q}=kl_{z}\sigma_{g}(r=a)\Gamma_{q}\). The values of \(\Gamma_{q}\) are found by fitting the theoretical model [Eq. (5)] to the experimental data as a function of the deformation. In the inset, we show the symmetry corresponding to the perturbation associated with each value of \(q\).
Figure 9: **Tailoring the rotational memory effect at a single angle.** The angular correlation function \(C(\theta)\) for the first two singular vectors of the operator \(\mathbf{O}(\theta_{t})\) defined in Eq. (10), with \(\theta_{t}=\pi/2\), for two values of the deformation (\(\Delta x=0\,\mu\)m and \(\Delta x=60\,\mu\)m), compared with the average results for random input profiles (dashed line). The insets show the output spatial transverse profiles of the corresponding singular vectors.
2302.09335 | Knowledge Graph Completion based on Tensor Decomposition for Disease
Gene Prediction | Accurate identification of disease genes has consistently been one of the
keys to decoding a disease's molecular mechanism. Most current approaches focus
on constructing biological networks and utilizing machine learning, especially,
deep learning to identify disease genes, but ignore the complex relations
between entities in the biological knowledge graph. In this paper, we construct
a biological knowledge graph centered on diseases and genes, and develop an
end-to-end Knowledge graph completion model for Disease Gene Prediction using
interactional tensor decomposition (called KDGene). KDGene introduces an
interaction module between the embeddings of entities and relations to tensor
decomposition, which can effectively enhance the information interaction in
biological knowledge. Experimental results show that KDGene significantly
outperforms state-of-the-art algorithms. Furthermore, the comprehensive
biological analysis of the case of diabetes mellitus confirms KDGene's ability
for identifying new and accurate candidate genes. This work proposes a scalable
knowledge graph completion framework to identify disease candidate genes, from
which the results are promising to provide valuable references for further wet
experiments. | Xinyan Wang, Ting Jia, Chongyu Wang, Kuan Xu, Zixin Shu, Jian Yu, Kuo Yang, Xuezhong Zhou | 2023-02-18T13:57:44Z | http://arxiv.org/abs/2302.09335v2 | # Knowledge Graph Completion based on Tensor Decomposition for Disease Gene Prediction
###### Abstract
Accurate identification of disease genes has consistently been one of the keys to decoding a disease's molecular mechanism. Most current approaches focus on constructing biological networks and utilizing machine learning, especially, deep learning to identify disease genes, but ignore the complex relations between entities in the biological knowledge graph. In this paper, we construct a biological knowledge graph centered on diseases and genes, and develop an end-to-end **K**nowledge graph completion model for **D**isease **G**ene Prediction using interactional tensor decomposition (called KDGene). KDGene introduces an interaction module between the embeddings of entities and relations to tensor decomposition, which can effectively enhance the information interaction in biological knowledge. Experimental results show that KDGene significantly outperforms state-of-the-art algorithms. Furthermore, the comprehensive biological analysis of the case of diabetes mellitus confirms KDGene's ability for identifying new and accurate candidate genes. This work proposes a scalable knowledge graph completion framework to identify disease candidate genes, from which the results are promising to provide valuable references for further wet experiments.
## 1 Introduction
Unraveling the molecular mechanism of disease is essential for realizing precision medicine [1]. One of the main goals is to identify the causative genes of a disease. Traditional methods of identifying disease-causing genes (e.g., Genome-wide association studies [17]) rely mainly on wet-lab experiments, which are extremely time-consuming and labor-intensive [14].
With the completion of the Human Genome Project and the maturity of high-throughput sequencing technology [1], a growing body of computing-based disease gene prediction methods has been developed, and these have proven effective [12]. Compared with traditional experiments, computing-based methods can significantly save resources and reduce experimental errors [15, 16]. Typical studies include network propagation methods ([3]), clustering or classification methods ([18]), methods based on network features ([17, 1]), and network embedding methods ([19, 20]). Neural network methods have also been applied to disease gene prediction and have obtained high performance [21, 16, 22].
In recent years, Knowledge Graphs (KGs) have been successfully applied to life science research [1]. A KG is a semantic network that reveals the relations between entities, and can formally describe things and their relations in the real world. In KGs, nodes represent entities or concepts, and edges are composed of attributes or relations. Knowledge exists in the form of triples [10]. Inferring unknown facts from those already in a KG is called KG Completion (KGC). Among existing KGC models, KG Embedding (KGE) based methods, which learn the latent representations of entities and relations in a continuous vector space, perform particularly well [21]. Network Embedding (NE) assigns nodes in a network to low-dimensional representations and preserves the network structure effectively [14]. The main difference between KGE and NE is that the latter focuses on the topology of the network, while KGE focuses on the internal information of different relations and the semantic connotation of facts.
A few studies have explored KGE-based methods individually for disease gene prediction. They tend to adopt existing KGE models from the general domain [23, 24] or use external information, such as the textual description of biological entities [1]. Although the conventional KGE models have been proven to be useful for inferring new biological relations, their performance with biological data is not as satisfactory as that of general-domain KGs [1]. One of the key points is how to model KGE in the process of disease gene prediction to accurately capture the interaction between biological entities (such as Protein-Protein Interactions) [17, 20], so that diseases and genes can be learned with more comprehensive biological features.
To address these issues, we first integrated multiple relations centered on diseases and genes from biomedical knowledge
bases to construct a large-scale biological KG, and develop an end-to-end **K**nowledge graph completion model using an interactional tensor decomposition to identify **D**isease-**G**ene associations, called KDGene. KDGene introduces a gating mechanism-based interaction module between the embeddings of entities and relations to tensor decomposition, which can effectively enhance the information interaction in biological knowledge. Perceiving related knowledge, the model is capable of learning the connotation of different relations and endows biological entities and relations with more comprehensive and precise representations, which is beneficial to disease gene prediction. Experimental results show that KDGene performs optimally among existing disease gene prediction methods. In particular, compared with conventional KGE methods, KDGene realizes an average improvement of over 20% on HR and MAP metrics. Meanwhile, we evaluate the impacts of KGs composed of knowledge with different relation types and degrees of confidence on KDGene's performance. In summary, the main contributions of our work are three-fold:
1. We construct a biological knowledge graph centered on diseases and genes, then adopt a scalable end-to-end KGC framework to predict disease genes.
2. We propose a novel KGC model, called KDGene, specifically for disease gene prediction. The model introduces an interaction module to tensor decomposition, which effectively enhances the information interaction between biological knowledge.
3. Our KDGene achieves state-of-the-art on disease gene prediction. The biological analysis of diabetes mellitus also confirms KDGene's ability to identify new and accurate candidate genes.
## 2 Related Work
### Disease Gene Prediction Models
Researchers have proposed various computing-based Disease Gene Prediction (DGP) methods, mainly divided into four categories: (1) Network Propagation methods. The network propagation models are based on the classic random walk algorithm for the most part, and they are common in early disease gene prediction tasks [23, 17]. (2) Methods based on Network Features. These methods usually use the constructed network to obtain the topological feature information of nodes, then calculate the correlation between a query disease and candidate genes, completing the prediction by sorting the gene list [14]. (3) Supervised Learning methods such as classification [13].
And (4) Network Embedding and Deep Learning methods. These methods have gained wide attention in recent years [1, 15]. HerGePred [13] is a heterogeneous disease-gene-related network embedding representation framework for disease gene prediction which can realize similarity prediction and a random walk with restart on a reconstructed heterogeneous disease-gene network. GLIM [15] can systematically mine the potential relationships between multilevel elements by embedding the features of the human multilevel network through contrastive learning. With the development of deep learning technology, researchers have tried to build specific neural network models to predict disease genes [13].
### KGE Models
Recently, growing amounts of work have been proposed to learn distributed representations for entities and relations in KGs, which fall into three major categories [12]: (1) Geometric Models. They utilize distance-based scoring functions to measure the plausibility of facts, interpreting the relation as a geometric transformation in the latent space [26]. The most representative model, TransE [1], regards the relation \(r\) in each triple _(h, r, t)_ as a translation from the head entity \(h\) to the tail entity \(t\), by requiring that the tail entity embedding lie close to the sum of the head entity and relation embeddings. (2) Deep Learning Models. These methods learn the embedding vectors of entities and relations using multi-layer neural networks [1].
And (3) Tensor Decomposition Models. In Canonical Polyadic (CP) decomposition [10], a tensor can be decomposed into a set of matrices, where each row in the matrix represents an embedding vector of entity or relation. DistMult [13], a special case of CP decomposition, forces all relation embeddings as diagonal matrices, which reduces the space of parameters and makes the model easier to train. ComplEx [11] introduces asymmetry into tensor decomposition by adding complex-valued embeddings so that it can simulate asymmetric relations. Since current implementations of CP are lagging behind their competitors, CP-N3 [11] uses a tensor nuclear p-norm as a regularizer to break through the limitations of CP and obtain good performance.
### DGP models combined with KGE
At present, DGP methods combined with KGE have not been fully exploited. KGED [14] is a convolutional neural network-based KGE model, built on a biological KG with entity descriptions, that infers relationships between biological entities. Since KGED is used to predict gene-gene relations to generate gene interaction networks for diseases, it is not an end-to-end model for DGP. Moreover, it requires textual descriptions of entities, which may introduce noise and are not easy to obtain. In [11], KGE is adopted to predict disease genes directly. However, this work applies conventional methods to the KGC task without comparing against other DGP models.
## 3 Preliminaries
_Knowledge Graph_\(\mathcal{G}\). A knowledge graph can be denoted as \(\mathcal{G}=\{\mathcal{E},\mathcal{R},\mathcal{T}\}\) where \(\mathcal{E}\) and \(\mathcal{R}\) are the entity set and relation set, respectively. And \(\mathcal{T}=\{(h,r,t)\in\mathcal{E}\times\mathcal{R}\times\mathcal{E}\}\) denotes the triple set which consists of all the triple facts in \(\mathcal{G}\). When constructing a biomedical KG, we integrate knowledge from different biological databases in the form of triples and add them to KG. Correspondingly, the associated entity set and relation set are generated.
Knowledge Graph Completion.The task of KGC, also known as Link Prediction, is to either predict unseen relations \(r\) between two existing entities: (_h_,?, _t_), or predict entities when the triple's relation and another entity are given: (_h_, \(r\),?) or (_?, \(r\), _t_). For Disease Gene Prediction, since triples of this kind of facts are in the form (disease, disease_gene, gene), we focus on the second mode to predict the tail entity (gene) given the head entity (disease) and relation (disease_gene).
In this paper, we adopt an improved tensor decomposition-based model under the framework of KGC, in which a triple (_h_, \(r\), _t_) can be represented as an element of the binary third-order entity-relation tensor \(\mathcal{X}\in\{0,1\}^{N\times M\times N}\), where \(N=|\mathcal{E}|\) is the total number of entities and \(M=|\mathcal{R}|\) the number of relations. In the entity-relation tensor \(\mathcal{X}\), \(\mathcal{X}_{ikj}\) indicates whether the \(k\)-th relation holds between the \(i\)-th entity and the \(j\)-th one, which is:
\[\mathcal{X}_{ikj}=\begin{cases}1,&\text{ if }(h_{i},r_{k},t_{j})\in\mathcal{G} \\ 0,&\text{ if }(h_{i},r_{k},t_{j})\notin\mathcal{G}\end{cases} \tag{1}\]
Therefore, tensor decomposition-based algorithms can infer a predicted tensor \(\widehat{\mathcal{X}}\) that approximates \(\mathcal{X}\). To predict the candidate genes of a disease, queries like (_i_, \(k\),?) are answered by ordering gene entities \(j^{\prime}\) by decreasing scoring values of \(\widehat{\mathcal{X}}_{ikj^{\prime}}\). Note that we propose a scalable KGC framework for disease gene prediction which means the KGE model can be replaced by others.
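Concretely, answering such a query amounts to one vector-matrix product followed by a sort. The sketch below illustrates this for a CP-style model; the embedding tables `E` and `R` and all sizes are hypothetical placeholders, not the paper's actual implementation.

```python
import numpy as np

# A minimal sketch of answering a (disease, disease_gene, ?) query with a
# trained tensor-decomposition model: score every candidate tail entity and
# rank by descending score. E and R are hypothetical embedding tables.
rng = np.random.default_rng(0)
n_entities, n_relations, d = 1000, 5, 64
E = rng.normal(size=(n_entities, d))   # entity embeddings
R = rng.normal(size=(n_relations, d))  # relation embeddings

def rank_tails(head_id, rel_id, known_tails=()):
    # CP-style score: sum_i e_h[i] * e_r[i] * e_t[i] for every candidate t
    scores = (E[head_id] * R[rel_id]) @ E.T   # shape: (n_entities,)
    scores[list(known_tails)] = -np.inf       # filter facts seen in training
    return np.argsort(-scores)                # best candidates first

print(rank_tails(head_id=42, rel_id=0, known_tails=[7, 99])[:10])
```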
## 4 Methodology
We present KDGene, an end-to-end knowledge graph completion model based on interactional tensor decomposition, and formulate the problem of disease gene prediction as an LP task in KG. Figure 1 shows an overview of KDGene, and the whole framework consists of the following three parts:
1. Biological KG Construction. We collected and integrated multiplex relations, e.g., disease-gene, disease-symptom, protein-protein, protein-GO, and protein-pathway from well-known biomedical databases and construct a biological KG centered on diseases and genes.
2. Representation learning of entities and relations in the biological KG. We do not directly utilize the existing KGE methods but improve the tensor decomposition model by introducing an interaction module, which can enhance the information interaction in biological knowledge and learn the more dedicated representation of relations.
3. Scoring and ranking of disease candidate genes. Based on the representation features learned in the second step, we score and sort all the candidate genes of a disease according to Equation 6, and obtain the predicted results of candidate genes.

Table 1: The scale of our constructed biological knowledge graph related to diseases and genes.

| Entity Type | Quantity | Relation Type | Quantity |
| --- | --- | --- | --- |
| Disease | 22,697 | Disease-Protein | 117,738 |
| Protein | 21,616 | Protein-Protein | 841,068 |
| Symptom | 2,504 | Disease-Symptom | 184,831 |
| GO | 1,207 | GO-Protein | 61,634 |
| Pathway | 316 | Pathway-Protein | 25,813 |
| Total | 48,340 | Total | 1,231,084 |

Figure 1: Visualization of the KDGene architecture. After constructing the biological KG, a triple is represented as \((h,r,t)\), with two entities \(h,t\) and a relation \(r\). We use \(\mathbf{e_{h}},\mathbf{e_{t}}\in\mathbb{R}^{d_{e}}\) to denote the embeddings of head and tail entities and \(\mathbf{e_{r}}\in\mathbb{R}^{d_{r}}\) to represent the relation embeddings. When the embeddings \(\mathbf{e_{h}},\mathbf{e_{r}},\mathbf{e_{t}}\) are trained, taking the relation embedding \(\mathbf{e_{r}}\) as the input, the head entity embedding \(\mathbf{e_{h}}\) as the hidden layer, we use the interaction module to obtain the updated relation embedding \(\mathbf{e_{r}^{\prime}}\in\mathbb{R}^{d_{e}}\). Then the scoring function of triple \((h,r,t)\) is calculated by \(\mathbf{e_{h}},\mathbf{e_{r}^{\prime}}\) and \(\mathbf{e_{t}}\). After training, for a query disease, score all candidate genes and rank by descending as the prediction results.
### Biological KG Construction
To learn more comprehensive representations of diseases and genes, we introduce knowledge of different relation types to construct a biological KG. Regarding diseases, the disease-symptom relations from SymMap [21] are introduced into the KG. Regarding genes, we introduce the Protein-Protein Interactions (PPI) from STRING [22], the Protein-GO relations from [1], and the Protein-Pathway relations from KEGG [11]. Table 1 shows the scale of the biological KG.
In our framework, there are no restrictions on the entity type and relation type which means the construction of the KG is flexible. When others use it, the disease-gene relation facts can be added or subtracted from the KG to complete the training according to the demand. As the amount of knowledge in biomedical databases grows, abundant facts about new types of relations can be continuously added to the KG.
### Cp-N3
The KGC task can be regarded as a 3D binary tensor completion problem, where each slice is the adjacency matrix of one relation type in the KG. It is a natural solution to apply tensor decomposition to the KGC task, which is simple, expressive, and can achieve state-of-the-art results in general-domain KGs. Here, we take the typical CP-N3 model as an example and further introduce the interaction module on this basis to predict disease-gene associations.
**CP-N3**[12] is based on CP decomposition [13], which decomposes a high-order tensor \(\mathcal{X}\in\mathbb{R}^{n_{1}\times n_{2}\times n_{3}}\) into a sum of \(r\) rank-one tensors built from \(u_{i}\in\mathbb{R}^{n_{1}},v_{i}\in\mathbb{R}^{n_{2}},w_{i}\in\mathbb{R}^{n_{3}}\) (\(\otimes\) denotes the tensor product):
\[\mathcal{X}\approx\sum_{i=1}^{r}u_{i}\otimes v_{i}\otimes w_{i}. \tag{2}\]
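As a concrete illustration of this decomposition, the following toy sketch assembles a tensor from \(r\) rank-one factors with `numpy` (sizes and random factors are arbitrary):

```python
import numpy as np

# Toy CP decomposition: assemble X[a,b,c] = sum_i U[i,a] * V[i,b] * W[i,c]
# from r rank-one tensors, mirroring the equation above.
rng = np.random.default_rng(1)
n1, n2, n3, r = 4, 3, 5, 2
U = rng.normal(size=(r, n1))
V = rng.normal(size=(r, n2))
W = rng.normal(size=(r, n3))

X = np.einsum('ia,ib,ic->abc', U, V, W)
print(X.shape)  # (4, 3, 5)
```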
### Interaction Module
Introducing the interaction module aims to equip KGE models, TD-based methods in particular, with better biomedical knowledge perception; that is, the model should learn more precise representations of entities and relations. To deal with the problem of long-term dependencies, Hochreiter and Schmidhuber proposed long short-term memory (LSTM) [15]. They improved the remembering capacity of the standard recurrent cell by introducing gates into the cell, which choose what information is passed on to the next cell [21]. We adopt the vanilla LSTM cell [11] consisting of an input gate, an output gate, and a forget gate. The activation process of LSTM is as follows:
First, the forget gate \(f\) and the input gate \(i\) at the time step \(t\) are computed by
\[\begin{split} f_{t}&=\sigma\left(W_{fh}h_{t-1}+W_{ fx}x_{t}+b_{f}\right),\\ i_{t}&=\sigma\left(W_{ih}h_{t-1}+W_{ix}x_{t}+b_{i }\right),\\ \tilde{c}_{t}&=\tanh\left(W_{\bar{c}h}h_{t-1}+W_{ \bar{c}x}x_{t}+b_{\bar{c}}\right),\end{split} \tag{3}\]
where \(\sigma\) is the logistic sigmoid function, and \(x_{t}\) is the current input. Through the forget gate, the LSTM unit determines which information should be discarded from the previous state \(h_{t-1}\). The candidate memory cell \(\tilde{c}_{t}\), computed through a tanh layer, is later added to the cell state. All the \(W\) are weights that need to be learned, while each \(b\) represents the bias vector associated with its component. Then, the cell state is updated by
\[\begin{split} c_{t}&=f_{t}\circ c_{t-1}+i_{t}\circ \tilde{c}_{t},\\ o_{t}&=\sigma\left(W_{oh}h_{t-1}+W_{ox}x_{t}+b_{o} \right),\\ h_{t}&=o_{t}\circ\tanh\left(c_{t}\right).\end{split} \tag{4}\]
\(o_{t},h_{t}\) are the outputs at the current time step, and \(\circ\) is the Hadamard product. In this intuitive structure, the forget gate preserves useful previous information, and the input gate prevents irrelevant current information from being added to the cell. The information in each part interacts sufficiently with the others, so we utilize this simple and effective structure as our interaction module.
### KDGene
We present KDGene, a knowledge graph completion model that introduces the interaction module into CP-N3 and applies it to disease gene prediction. In the following, a triple is represented as \((h,r,t)\), with two entities \(h,t\in E\) (the set of entities) and a relation \(r\in R\) (the set of relations). We use \(\mathbf{e_{h}},\mathbf{e_{t}}\in\mathbb{R}^{d_{e}}\) to denote the embeddings of head and tail entities and \(\mathbf{e_{r}}\in\mathbb{R}^{d_{r}}\) to represent the relation embeddings.
Instead of adopting the translation-based principle \(\mathbf{h}+\mathbf{r}=\mathbf{t}\) in TransE [1], we use the gating mechanism as the entity-to-relation translation. When the embeddings \(\mathbf{e_{h}},\mathbf{e_{r}},\mathbf{e_{t}}\) are trained, taking the relation embedding \(\mathbf{e_{r}}\) as the input, and the head entity embedding \(\mathbf{e_{h}}\) as the hidden layer, we use an LSTM cell to obtain the updated relation embedding \(\mathbf{e^{\prime}_{r}}\in\mathbb{R}^{d_{e}}\). The calculation process is as follows:
\[\begin{split} f&=\sigma\left(W_{fh}\mathbf{e_{h}}+W_{ fx}\mathbf{e_{r}}+b_{f}\right),\\ i&=\sigma\left(W_{ih}\mathbf{e_{h}}+W_{ix}\mathbf{e_{r}}+b_{i }\right),\\ \tilde{c}&=\tanh\left(W_{\bar{c}h}\mathbf{e_{h}}+W_{ \bar{c}x}\mathbf{e_{r}}+b_{\bar{c}}\right),\\ c&=f\circ c_{0}+i\circ\tilde{c},\\ \mathbf{e^{\prime}_{r}}&=\sigma\left(W_{oh}\mathbf{e_{h}}+W_{ ox}\mathbf{e_{r}}+b_{o}\right),\end{split} \tag{5}\]
where all \(W\) are weight matrices and \(b\) are bias vectors learned in the training process. The initial input of the cell state is set to 0.
After getting the updated relation embedding \(\mathbf{e^{\prime}_{r}}\), we define the scoring function of a triple \((h,r,t)\) for KDGene as:
\[\phi(h,r,t)=\sum_{i=1}^{d_{e}}e_{hi}\,e^{\prime}_{ri}\,e_{ti}. \tag{6}\]
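A minimal PyTorch sketch of this interaction-plus-scoring step is given below, using the standard `nn.LSTMCell` (whose output is \(o\circ\tanh(c)\), a slight variant of Eq. (5), which keeps the output gate alone); the batch size and dimensions are illustrative, not the authors' actual configuration.

```python
import torch
import torch.nn as nn

# Sketch of KDGene's interaction module (Eq. (5)) and scoring (Eq. (6)):
# the relation embedding is the LSTM input, the head-entity embedding is
# the hidden state, and the cell state starts at zero.
d_e, d_r, batch = 2000, 1000, 8
cell = nn.LSTMCell(input_size=d_r, hidden_size=d_e)

e_h = torch.randn(batch, d_e)  # head-entity embeddings (hidden state)
e_r = torch.randn(batch, d_r)  # relation embeddings (input)
e_t = torch.randn(batch, d_e)  # tail-entity embeddings
c0 = torch.zeros(batch, d_e)   # initial cell state, set to 0 as in the paper

e_r_new, _ = cell(e_r, (e_h, c0))          # updated relation embedding e'_r
score = (e_h * e_r_new * e_t).sum(dim=-1)  # Eq. (6): one score per triple
print(score.shape)  # torch.Size([8])
```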
In CP-N3, the embedding dimensions of entities and relations must be the same, resulting in a lot of parameter redundancy for those datasets with very different numbers of entities and relations. After introducing the interaction module, the dimensions of entities and relations can be different, which significantly improves the operability and flexibility of KDGene. More importantly, through the gating mechanism
of LSTM, entities and relations are learned with more precise representations, which benefits disease gene prediction.
### Training and Prediction
We use the standard data augmentation technique [11] of adding reciprocal predicates to the original training set \(S\) to obtain \(S^{\prime}\), i.e., adding \((t,r^{-1},h)\) for every \((h,r,t)\). Besides, we follow the 1-N scoring introduced by [13]; that is, we take one \((h,r)\) pair and score it against all entities \(t^{\prime}\in E\) simultaneously. We train our model with the full multiclass log-loss:
\[\mathcal{L}=\sum_{(h,r,t)\in S^{\prime}}(-\phi(h,r,t)+log(\sum_{t^{\prime}\in E }exp(\phi(h,r,t^{\prime})))). \tag{7}\]
where \(\mathcal{L}\) is the loss function that should be minimized. For KDGene, we follow the N3 regularization used in CP-N3 [11], and the loss function for KDGene is as follows:
\[\mathcal{L}= \sum_{(h,r,t)\in S^{\prime}}(-\phi(h,r,t)+log(\sum_{t^{\prime} \in E}exp(\phi(h,r,t^{\prime})))\] \[+ \lambda\sum_{i}^{d_{e}}(|e_{hi}|^{3}+|e_{ri}^{\prime}|^{3}+|e_{ti} |^{3})). \tag{8}\]
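In code, the bracketed term in Eq. (8) is exactly a cross-entropy over all candidate tails plus a cubic (N3) penalty. The sketch below is one way to write it; the function and variable names are illustrative, and the regularization weight is a placeholder.

```python
import torch
import torch.nn.functional as F

# Sketch of the KDGene training objective, Eqs. (7)-(8): 1-N scoring gives
# one row of scores per (h, r) query over all entities; the multiclass
# log-loss is cross-entropy, and N3 regularizes the embedding factors.
def kdgene_loss(scores, target_tails, e_h, e_r_new, e_t, lam=0.01):
    ce = F.cross_entropy(scores, target_tails)  # -phi + log sum exp(phi)
    n3 = (e_h.abs() ** 3).sum(-1) + (e_r_new.abs() ** 3).sum(-1) \
         + (e_t.abs() ** 3).sum(-1)
    return ce + lam * n3.mean()
```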
After training, for disease gene prediction, we take \((h,r)\) pairs, where the head entity is the query disease, and the relation is disease-gene, and then score all candidate genes that are not in the training set. The list of genes with scores from high to low is the prediction result of the candidate genes.
## 5 Experiments
### Experimental Setting
**Dataset.** We select curated disease-gene associations from the DisGeNet database [15] as a benchmark dataset and apply the conventional 10-fold cross validation to evaluate the disease gene prediction algorithms. For each fold, there are 117,738 disease-gene associations in the training set and 13,082 in the testing set.
**Baselines.** For baselines, comparisons with existing disease gene prediction algorithms are essential. Typical models, including DADA [1], GUILD [12], RWRH [11], PDGNet [23], PRINCE [24], and GLIM [10], are our baselines. In addition, since we formulate disease gene prediction as the KGC task and propose a novel KGC method, KDGene should also be compared with existing KGC models. We experiment with six popular KGE baselines: TransE [2], RotatE [25], DistMult [21], ComplEx [21], TuckER [19] and CP-N3 [11].
**Evaluation Metrics.** Following GLIM [10], we select the hit ratio (HR@N) and mean average precision (MAP@N) as evaluation metrics (where N=1,3,10,50). For both HR and MAP, higher values indicate higher predictive performance.
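For reference, one common way to compute these per-disease quantities is sketched below (the paper follows GLIM, whose exact conventions, e.g. filtering, may differ):

```python
import numpy as np

# Sketch of HR@N and the per-query term of MAP@N: `ranked` is the predicted
# gene list for one disease, `truth` its held-out gene set.
def hr_at_n(ranked, truth, n):
    # fraction of held-out genes recovered among the top-n predictions
    return len(set(ranked[:n]) & set(truth)) / len(truth)

def ap_at_n(ranked, truth, n):
    hits, precisions = 0, []
    for i, g in enumerate(ranked[:n], start=1):
        if g in truth:
            hits += 1
            precisions.append(hits / i)
    return float(np.mean(precisions)) if precisions else 0.0

# HR@N and MAP@N then average hr_at_n / ap_at_n over all test diseases.
```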
**Implementation Details.** We implement KDGene with PyTorch and have made our source code available on GitHub.1 In our experiments, we carried out an extensive grid search over the following ranges of hyperparameter values: batch size in {128, 256, 512, 1024}, learning rate in {0.01, 0.03, 0.05, 0.1}, regularization coefficient in {0.001, 0.01, 0.05, 0.1, 0.2, 0.5}, the entity dimension in {1000, 1500, 2000, 2500} and the relation dimension in {500, 1000, 1500, 2000}. The optimal parameters used in KDGene can be seen on GitHub.1 The Adagrad algorithm [1] is adopted to optimize all trainable parameters. For all the baselines, we follow the best hyperparameters they provide and obtain disease gene prediction results on the DisGeNet dataset.
Footnote 1: [https://github.com/sienna-wxy/KDGene](https://github.com/sienna-wxy/KDGene)
### Results and Analysis
Table 2 reports the evaluation results of disease gene prediction on the DisGeNet dataset. It can be seen that KDGene outperforms all the baselines consistently and significantly. Specifically, compared with DGP baselines, KDGene realizes an average improvement of 16.59%, and in particular over 25% on the HR metrics. Compared with KGC baselines, in terms of HR@1, HR@3, HR@10, HR@50, MAP@1, MAP@3, MAP@10, MAP@50, the performance gains achieved by our model are 17.24%, 21.44%, 31.15%, 20.35%, 17.23%, 21.12%, 23.52%, 23.59%, with an average improvement of 21.96%. These results illustrate the effectiveness of KDGene for disease gene prediction.
Our results also suggest that the success of KDGene does not mean that KGC-based models are the best among all DGP models (e.g., the MAP metric of GLIM_DG is better than that of all existing KGC methods), but they can still outperform most DGP models. KGC-based methods are particularly strong on the HR metric, showing powerful recall ability, which indicates that combining complex relations in biological knowledge is beneficial for predicting candidate genes comprehensively.
Existing DGP models combined with KGE [23] usually adopt conventional KGC methods. Our experiments confirm that existing KGC-based models are effective but not necessarily optimal. Among these typical KGC models, methods based on tensor decomposition perform better. Adopting tensor decomposition, we further introduce an interaction module based on the gating mechanism. It is worth noting that the comparison with CP-N3 can be regarded as an ablation study. Compared with CP-N3, KDGene realizes an increase of 39.86%, 47.35%, 52.12%, 31.81%, 39.85%, 46.30%, 47.52%, and 45.39% on HR@1, HR@3, HR@10, HR@50, MAP@1, MAP@3, MAP@10, and MAP@50, respectively, with an average improvement of 43.77%. The impressive improvement over CP-N3 demonstrates the significance of our proposed interaction module, which equips KDGene with more precise representations of entities and relations to predict disease genes.
### Comparison of Different KGs
To evaluate the impact of KGs composed of different relations on KDGene, we use different combinations of relations to construct biological KGs from \(KG_{1}\) to \(KG_{6}\) and evaluate the performance of KDGene. The results are shown in Figure 2. \(KG_{1}\) consists only of the disease-gene facts in the training set, with no external relations introduced. Based on \(KG_{1}\), \(KG_{2}\) and \(KG_{3}\) introduce the facts about Disease-Symptom associations and Protein-Protein Interactions, respectively. \(KG_{2}\) achieves the best performance, indicating that disease-symptom associations are beneficial for candidate gene prediction, while PPI has little effect. \(KG_{4}\), which introduces the two relations jointly, fails to achieve a cumulative effect. Following [22], \(KG_{5}\) introduces GO and Pathway associations of genes. \(KG_{6}\) reintroduces the disease-symptom relation on top of \(KG_{5}\). From the results, the former yields no obvious improvement, and the latter improves but still does not reach the performance of \(KG_{2}\). A possible reason for the performance degradation of \(KG_{3}\) and \(KG_{5}\), which further introduce Protein-Protein or GO/Pathway-Protein associations, is that the protein-related relations add many similar entities to the KG, which may act as noise.
### Comparison of Different PPI Scores
To analyze the impact of knowledge with different confidence levels on KDGene, we consider the scores of Protein-Protein Interaction facts. This combined score is often higher than the individual sub-scores, expressing increased confidence when an association is supported by several types of evidence [23]. We select three grades of scores for evaluation, that is, interaction scores \(\geq 700\), \(\geq 850\), and \(\geq 950\), respectively. For a fair comparison, all three evaluations are performed on KGs with the same disease-gene facts, with no differences other than the PPI facts. In Figure 2(a), as the score threshold increases, the performance of KDGene gradually improves, indicating that introducing reliable biological knowledge into the KG is more beneficial for KDGene to learn the representations of entities and relations.
### Comparison of Different Interaction Modules
To evaluate the impact of different interaction modules on the performance of KDGene, we conduct experiments with similar structures such as RNNCell and GRUCell; the results are shown in Figure 2(b). For all three gating mechanisms, the relation embedding is used as the input and the head entity embedding as the hidden layer. The results of introducing different interaction structures are all better than the model without the gating mechanism (here we compare with CP-N3), illustrating the significance of the interaction module for tensor decomposition models. Among the three, LSTMCell performs slightly better than the remaining two; a possible reason is that the forget gate gives the cell more parameters with which to learn finer details.

Table 2: Disease gene prediction results on DisGeNet. The table consists of two parts: the upper part covers the baselines of typical Disease Gene Prediction (DGP) models, and the lower the baselines of KGC models. **Bold** numbers are the best results of all, and underlined numbers are the best results of the baseline models. The improvement figures in the last row compare the performance of KDGene with the best of the baseline models.

| DGP Models | HR@1 | HR@3 | HR@10 | HR@50 | MAP@1 | MAP@3 | MAP@10 | MAP@50 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| DADA | 0.012 | 0.025 | 0.047 | 0.107 | 0.045 | 0.044 | 0.049 | 0.053 |
| GUILD | 0.023 | 0.032 | 0.049 | 0.107 | 0.073 | 0.076 | 0.080 | 0.084 |
| RWRH | 0.082 | 0.153 | 0.269 | 0.486 | 0.297 | 0.268 | 0.272 | 0.286 |
| PDGNet | 0.020 | 0.031 | 0.045 | 0.068 | 0.094 | 0.056 | 0.044 | 0.043 |
| PRINCE | 0.006 | 0.011 | 0.024 | 0.074 | 0.025 | 0.026 | 0.028 | 0.031 |
| RWR\_PPI | 0.070 | 0.148 | 0.271 | 0.474 | 0.257 | 0.241 | 0.255 | 0.270 |
| RWR\_HMLN | 0.094 | 0.180 | 0.304 | 0.502 | 0.342 | 0.303 | 0.306 | 0.320 |
| GLIM\_DG | 0.105 | 0.194 | 0.312 | 0.508 | 0.383 | 0.335 | 0.329 | 0.342 |
| KDGene (ours) | **0.126** | **0.243** | **0.416** | **0.620** | **0.406** | **0.365** | **0.361** | **0.370** |
| Improvement | +19.96% | +25.16% | +33.25% | +22.04% | +5.82% | +8.91% | +9.48% | +8.12% |

| KGC Models | HR@1 | HR@3 | HR@10 | HR@50 | MAP@1 | MAP@3 | MAP@10 | MAP@50 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| TransE | 0.086 | 0.160 | 0.272 | 0.472 | 0.278 | 0.243 | 0.241 | 0.252 |
| RotatE | 0.085 | 0.159 | 0.272 | 0.477 | 0.275 | 0.241 | 0.241 | 0.252 |
| DistMult | 0.107 | 0.200 | 0.309 | 0.406 | 0.346 | 0.301 | 0.292 | 0.299 |
| ComplEx | 0.103 | 0.193 | 0.317 | 0.515 | 0.331 | 0.288 | 0.281 | 0.291 |
| TuckER | 0.096 | 0.182 | 0.288 | 0.394 | 0.308 | 0.269 | 0.261 | 0.269 |
| CP-N3 | 0.090 | 0.165 | 0.273 | 0.471 | 0.290 | 0.249 | 0.244 | 0.254 |
| KDGene (ours) | **0.126** | **0.243** | **0.416** | **0.620** | **0.406** | **0.365** | **0.361** | **0.370** |
| Improvement | +17.24% | +21.44% | +31.15% | +20.35% | +17.23% | +21.12% | +23.52% | +23.59% |

Figure 2: Results of KDGene with different biological KGs. We use five relations associated with disease and gene to evaluate the effect of six combinations on KDGene performance. Except for the different KGs used, other experimental settings are kept the same.
### Case Study
We take Diabetes Mellitus as a case example to illustrate the high network closeness and functional relevance between genes in the training set and the candidate genes predicted by KDGene. For this disease, we keep all 42 genes in the training set and 6 genes in the testing set of the DisGeNet dataset and take the top 50 candidate genes predicted by KDGene. The dense links in the PPI network (221 real links vs. 41.23 expected links, P = 2.05E-88, binomial test) indicate that these two groups of genes tend to interact more closely than expected and rely on the same functional module. The top ten gene predictions of KDGene hit three diabetes mellitus-associated genes in the testing set. Meanwhile, we find that more than half of the predicted genes have corresponding literature evidence for an association with diabetes. For example, interleukin-6 (IL-6), the top predicted gene, is not in the testing set, but [20] indicates that pro-inflammatory cytokines such as IL-6 have been considered key factors in type 1 diabetes mellitus (T1DM) and diabetic nephropathy. These results illustrate the accuracy and reliability of KDGene's predictions, which are promising to provide valuable references for further wet experiments.
## 6 Conclusion
In this paper, we first utilize the biological knowledge bases to create KGs and develop a scalable end-to-end knowledge graph completion model using an interactional tensor decomposition to identify disease-gene associations. KDGene introduces a gating mechanism-based interaction module between the embeddings of entities and relations to tensor decomposition, which can effectively enhance the information interaction in biological knowledge. Experimental results show that KDGene performs optimally among existing disease gene prediction methods. We also evaluate the impacts of KGs composed of knowledge with different relation types and degrees of confidence on KDGene's performance. Furthermore, the comprehensive biological analysis of the case of diabetes mellitus confirms KDGene's ability to identify new and accurate candidate genes. Future work will explore the capability of attention-based interaction modules in disease gene prediction. We will also extend this kind of module to other KGC models.
|
2308.13336 | An Explicit Expression of Generating Function for One-Loop Tensor Reduction | This work introduces an explicit expression for the generation function for the reduction of an $n$-gon to an $(n-k)$-gon. A novel recursive relation for the generation function is formulated based on Feynman parametrization in projective space, involving a single ordinary differential equation. The explicit formulation of generation functions provides crucial insights into the complex analytic structure inherent in loop amplitudes. | Chang Hu, Tingfei Li, Jiyuan Shen, Yongqun Xu | 2023-08-25T12:13:09Z | http://arxiv.org/abs/2308.13336v6 | # An Explicit Expression of Generation Function for One-Loop Tensor Reduction
###### Abstract
This work introduces an explicit expression for the generation function for the reduction of an \(n\)-gon to an \((n-k)\)-gon. A novel recursive relation for the generation function is formulated based on Feynman parametrization in projective space, involving a single ordinary differential equation. The explicit formulation of generation functions provides crucial insights into the complex analytic structure inherent in loop amplitudes.
## 1 Background and Motivation
Scattering amplitudes occupy a crucial position in quantum field theory as they form the bridge between theoretical frameworks and experimental observations, particularly in Large Hadron Collider (LHC) experiments[1]. Therefore, it is essential to explore efficient methods for their computation.
On-shell techniques, which have seen significant advancements over the past few decades, stand out as a promising approach[2; 3; 4; 5; 6; 7; 8; 9; 10; 11]. Among these methods, the unitarity cut technique
simplifies the computation of one-loop amplitudes by using on-shell tree-level amplitudes as inputs, enabling their direct calculation[4; 5; 7; 11; 12]. This method has gained recognition for its straightforward approach and effectiveness.
Loop computation in quantum field theory generally involves two steps: constructing the integrand and then carrying out the integration. While integrand construction using Feynman diagrams is well understood, it is not always the most efficient approach; seeking optimized ways to construct integrands is therefore a current research direction.
The integration process is greatly aided by the method of reduction. It is established that any integral can be expressed as a linear combination of a set of basis integrals, referred to as master integrals, with coefficients being rational functions of the external momenta, masses, and spacetime dimension. By employing reduction, the process of loop integration can be divided into two simultaneous tasks: calculating the master integrals and designing an algorithm to efficiently derive the reduction coefficients. Advances in either task will facilitate the handling of more complex integrations[13]. There are mature software packages available for higher-loop calculations[14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36; 37; 38; 39].
The process of reduction can be divided into two categories: integrand-level reduction and integral-level reduction. Integrand-level reduction can be systematically resolved using computational algebraic geometry[40; 41; 42; 43; 44]. As for integral-level reduction, the initial proposal was the notable Passarino-Veltman reduction (PV-reduction) method[45]. Other propositions include the Integration-by-Parts (IBP) method[46; 47; 48; 49; 50; 51], the unitarity cut method[52; 53; 54; 55; 56; 57; 12; 54; 57; 58; 59; 60; 61; 62; 63; 64], and the Intersection number method[58; 59; 60; 61; 62; 63; 64]. Despite numerous advancements in integral-level reduction, the growing complexity of computations warrants further improvements.
In recent publications[65; 66; 67; 68; 69], we introduced an auxiliary vector \(R\) to enhance the conventional PV-reduction method. Using \(R\), we can formulate differential operators and establish an algebraic recurrence relation to determine reduction coefficients analytically. This approach has also been generalized to the two-loop sunset diagram[68], where the traditional PV-reduction method falls short. When the auxiliary vector \(R\) is incorporated into the IBP method, the efficiency of the reduction improves, as shown in[70; 71; 72].
Another notable advancement is the introduction of the "generation function" [73; 74; 75; 76]. In fact, the concept of a generation function is well-established in both physics and mathematics. The application of generation functions in higher-order QFT calculations was introduced earlier in [73] and has been utilized up to the level of 3- and 4-loop calculations. More recently, these functions have also been discussed in [74], detailing how to directly proceed from these representations to the physical result.
In the reduction process, the reduction coefficients for high tensor rank still present a considerable challenge. However, when we sum up the reduction coefficients of different tensor ranks, we might arrive at a simpler solution. For instance, in our method, the numerator of the integral is \((2l\cdot R)^{r}\) with rank \(r\). We can sum them up in two typical ways as shown in the equations below:
\[\psi_{1}(\mathpzc{c})=\sum_{r=0}^{\infty}\mathpzc{c}^{r}\cdot(2l\cdot R)^{r}= \frac{1}{1-\mathpzc{c}(2l\cdot R)},\ \ \psi_{2}(\mathpzc{c})=\sum_{r=0}^{\infty}\frac{(2l\cdot R)^{r}\cdot\mathpzc{c}^{r }}{r!}=e^{\mathpzc{c}\cdot(2l\cdot R)}. \tag{1}\]
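Both resummations are elementary closed forms; a quick symbolic check with `sympy` (writing \(x\) for \(2l\cdot R\) and \(t\) for the auxiliary variable) is sketched below.

```python
import sympy as sp

# Check the two resummations in Eq. (1): a geometric series and an
# exponential series in the auxiliary variable t, with x = 2 l.R.
t, x = sp.symbols('t x')
r = sp.symbols('r', integer=True, nonnegative=True)

geom = sp.summation((t * x) ** r, (r, 0, sp.oo))  # 1/(1 - t*x) for |t*x| < 1
expo = sp.summation((t * x) ** r / sp.factorial(r), (r, 0, sp.oo))  # exp(t*x)
print(geom)
print(expo)
```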
In recent research[75; 76], Bo Feng proposed a recursive method to calculate the generation function, establishing several partial differential equations based on the auxiliary \(\mathbf{R}\) method. Subsequent work[76] has not only improved upon this at the one-loop level but also extended it to the 2-loop level by setting up and solving differential equations for these generating functions, utilizing the Integration-by-Parts (IBP) method. Both pieces of work underscore the considerable potential of generation functions. However, both studies provide only an iterative approach to compute the generation functions, and it remains difficult to write generation functions explicitly, even at the one-loop level. Fortunately, we discovered a new recursive relation for generation functions based on investigations into Feynman parametrization in projective space[69; 77; 78]. This new relation consists of a single ordinary differential equation in \(\mathpzc{c}\) instead of a complex set of partial differential equations, allowing us to directly write an **explicit expression** of the generation function for the reduction of an \(n\)-gon to an \((n-k)\)-gon for **general** \(k\) without recursion.
The organization of this paper is as follows. Section 2 introduces essential notations used throughout the paper and establishes our new recursive relation. In Section 3, we compute the generation function of \(n\)-gon to \(n\)-gon as a warm-up. Section 4 and Section 5 detail the derivation of the generation function of \(n\)-gon to \((n-1)\)-gon and \(n\)-gon to \((n-2)\)-gon, respectively, as two non-trivial examples. After summarizing the previous results in Section 6, we provide an explicit expression of the generation function for \(n\)-gon to \((n-k)\)-gon for general \(k\). Section 7 offers an inductive proof of our results. Finally, Section 8 provides a brief summary and discussion. In addition, we present the solution of the typical differential equation in Appendix A. We also provide numerical verification in Appendix B.
## 2 Preparation
In this paper, our goal is to provide an explicit expression for the generating functions of one-loop tensor reduction. As we know, by introducing an auxiliary vector \(R\), the general one-loop integral in \(D\) dimensions can be written as
\[I_{n}^{(r)}=\int\frac{d^{D}l}{i\pi^{D/2}}\frac{(2R\cdot l)^{r}}{\prod_{j=1}^{n}\left[(l-q_{j})^{2}-M_{j}^{2}\right]}. \tag{2}\]
In this formula, \(n\) represents the number of propagators, and \(r\) denotes the tensor rank.
### Notations
Let us start with the introduction of some notations that we'll be using subsequently.
* Some \(n\)-dimensional vectors: \[\begin{split}&\mathbf{L}:\ L_{i}=1,\\ &\mathbf{V}:\ V_{i}=R\cdot q_{i},\\ &\mathbf{H}_{b}=\{0,\cdots,0,1,0,\cdots,0\},\end{split}\] (2) where the \(1\) is in the \(b\)-th position.
* \(Q\) is an \(n\times n\) matrix defined as \[Q_{ij}=\frac{M_{i}^{2}+M_{j}^{2}-(q_{i}-q_{j})^{2}}{2}.\] (3)
* The notations \((\overline{AB})\) and \((\overline{AB})_{\bf b}\), with a label list \({\bf b}=\{b_{1},b_{2},...,b_{k}\}\), for two vectors \(A\) and \(B\), are defined as follows: \[\begin{split}(\overline{AB})=& A\cdot Q^{-1}\cdot B,\\ (\overline{AB})_{\bf b}=& A_{\widehat{\bf b}}\cdot(Q_{\widehat{\bf b}\widehat{\bf b}})^{-1}\cdot B_{\widehat{\bf b}}.\end{split}\] (4) In this context, \(A_{\widehat{\bf b}}\) and \(B_{\widehat{\bf b}}\) are the vectors obtained by removing all the \(b_{i}\)-th components from vectors \(A\) and \(B\), and \(Q_{\widehat{\bf b}\widehat{\bf b}}\) is the matrix obtained by eliminating all the \(b_{i}\)-th rows and \(b_{i}\)-th columns from the matrix \(Q\). For example, with \(n=4\) and \({\bf b}=\{2,3\}\), we have \[(\overline{VL})_{2,3}=\begin{pmatrix}R\cdot q_{1},\,R\cdot q_{4}\end{pmatrix}\begin{pmatrix}M_{1}^{2}&\frac{M_{1}^{2}+M_{4}^{2}-(q_{1}-q_{4})^{2}}{2}\\ \frac{M_{4}^{2}+M_{1}^{2}-(q_{4}-q_{1})^{2}}{2}&M_{4}^{2}\end{pmatrix}^{-1}\begin{pmatrix}1\\ 1\end{pmatrix}.\] (5)
* If \(\Omega\) is an analytic expression composed of \((\overline{AB})\) or \((\overline{AB})_{\bf b}\), then \([\Omega]_{\bf a}\) with a label set \({\bf a}\) outside the square brackets represents the analytic expression obtained by appending the subscript \({\bf a}\) to each term of the form \((\overline{AB})\) or \((\overline{AB})_{\bf b}\) present in \(\Omega\). For example, \[[(\overline{VL})]_{1,2}=(\overline{VL})_{1,2},\quad[(\overline{LL})_{2}]_{3}=(\overline{LL})_{2,3}.\] (6) If \[P=\frac{(\overline{H_{1}L})(\overline{VV})_{3}+(\overline{LL})_{2}R^{2}}{(\overline{VL})_{2}(\overline{H_{3}V})_{1}},\] (7) then \[[P]_{4,5}=\frac{(\overline{H_{1}L})_{4,5}(\overline{VV})_{3,4,5}+(\overline{LL})_{2,4,5}R^{2}}{(\overline{VL})_{2,4,5}(\overline{H_{3}V})_{1,4,5}}.\] (8)
* \(I_{n,\widehat{\bf b}}^{(r)}\) represents the one-loop integral obtained by removing all the \(b_{i}\)-th propagators from \(I_{n}^{(r)}\): \[I_{n,\widehat{\bf b}}^{(r)}=\int\frac{d^{D}l}{i\pi^{D/2}}\frac{(2R\cdot l)^{r}}{\prod_{j=1,j\notin{\bf b}}^{n}\left[(l-q_{j})^{2}-M_{j}^{2}\right]}.\] (9)
### Recursive relation
After introducing the above notations, we begin to derive the recursive relations for the generation function of the reduction coefficients. As pointed out in [78] and [69], there exists a non-trivial recursion relation for one-loop tensor integrals for \(r\geq 1\)1
Footnote 1: Where \(b\) represents a single label \(b\).
\[\begin{split} I_{n}^{(r)}&=\frac{2(D+2r-n-2)}{D+r-n- 1}\cdot\frac{(\overline{VL})}{(\overline{LL})}\cdot I_{n}^{(r-1)}+\frac{4(r-1 )}{D+r-n-1}\cdot\frac{R^{2}-(\overline{VV})}{(\overline{LL})}\cdot I_{n}^{(r-2 )}\\ &+\sum_{b=1}^{n}(X^{(b)}\cdot I_{n,\widehat{b}}^{(r-1)}+\frac{2(r- 1)\cdot Y^{(b)}}{D+r-n-1}\cdot I_{n,\widehat{b}}^{(r-2)}),\end{split} \tag{10}\]
where the coefficients are
\[\begin{split} X^{(b)}=&\left((\overline{H_{b}L})( \overline{VL})_{b}-(\overline{H_{b}V})(\overline{LL})_{b}\right)/(\overline{ LL}),\\ Y^{(b)}=&\left((\overline{H_{b}L})R^{2}+(\overline {H_{b}V})(\overline{VL})_{b}-(\overline{H_{b}L})(\overline{VV})_{b}\right)/( \overline{LL}).\end{split} \tag{11}\]
We focus on the summation of the tensor integrals of the form
\[\begin{split}\phi_{n}(\mathpzc{t})=&\sum_{r=0}^{\infty}\mathpzc{t}^{r}\cdot I_{n}^{(r)}=\int\frac{d^{D}l}{i\pi^{D/2}}\frac{1}{1-\mathpzc{t}(2R\cdot l)}\frac{1}{\prod_{j=1}^{n}\left[(l-q_{j})^{2}-M_{j}^{2}\right]},\\ \phi_{n,\widehat{\mathbf{b}}}(\mathpzc{t})=&\sum_{r=0}^{\infty}\mathpzc{t}^{r}\cdot I_{n,\widehat{\mathbf{b}}}^{(r)}=\int\frac{d^{D}l}{i\pi^{D/2}}\frac{1}{1-\mathpzc{t}(2R\cdot l)}\frac{1}{\prod_{j=1,j\notin\mathbf{b}}^{n}\left[(l-q_{j})^{2}-M_{j}^{2}\right]}.\end{split} \tag{12}\]
Then, we multiply both sides of equation (10) by \(\mathpzc{t}^{r}\) and sum over \(r\) from \(1\) to \(\infty\). The following relations can be used:
\[\begin{split}&\sum_{r=1}^{\infty}\mathpzc{t}^{r}\cdot I_{n}^{(r)}= \phi_{n}(\mathpzc{t})-I_{n}^{(0)},\\ &\sum_{r=1}^{\infty}r\cdot\mathpzc{t}^{r}\cdot I_{n}^{(r)}= \mathpzc{t}\cdot\sum_{r=0}^{\infty}r\cdot\mathpzc{t}^{(r-1)}\cdot I_{n}^{(r) }=\mathpzc{t}\cdot\phi_{n}^{\prime}(\mathpzc{t}),\\ &\sum_{r=1}^{\infty}\mathpzc{t}^{r}\cdot I_{n}^{(r-1)}=\sum_{r= 0}^{\infty}\mathpzc{t}\cdot\mathpzc{t}^{r}\cdot I_{n}^{(r)}=\mathpzc{t}\cdot \phi_{n}(\mathpzc{t}),\\ &\sum_{r=1}^{\infty}r\cdot\mathpzc{t}^{r}\cdot I_{n}^{(r-1)}= \sum_{r=0}^{\infty}(r+1)\cdot\mathpzc{t}\cdot\mathpzc{t}^{r}\cdot I_{n}^{(r) }=\mathpzc{t}^{2}\cdot\phi_{n}^{\prime}(\mathpzc{t})+\mathpzc{t}\cdot\phi_{n }(\mathpzc{t}),\\ &\sum_{r=2}^{\infty}\mathpzc{t}^{r}\cdot I_{n}^{(r-2)}=\sum_{r= 0}^{\infty}\mathpzc{t}^{r+2}\cdot I_{n}^{(r)}=\mathpzc{t}^{2}\cdot\phi_{n}( \mathpzc{t}),\\ &\sum_{r=2}^{\infty}r\cdot\mathpzc{t}^{r}\cdot I_{n}^{(r-2)}= \sum_{r=0}^{\infty}(r+2)\cdot\mathpzc{t}^{(r+2)}\cdot I_{n}^{(r)}=\mathpzc{t}^ {3}\phi_{n}^{\prime}(\mathpzc{t})+2\mathpzc{t}^{2}\phi_{n}(\mathpzc{t}). \end{split} \tag{13}\]
We obtain a differential equation for the summation \(\phi_{n}(\boldsymbol{\ell})\) (from here on, we write the auxiliary variable as \(\boldsymbol{\ell}\)):
\[\bigg{(}(D-n-1)-2(D-n)\cdot\frac{(\overline{VL})}{(\overline{LL})} \boldsymbol{\ell}-4\cdot\frac{R^{2}-(\overline{VV})}{(\overline{LL})}\boldsymbol {\ell}^{2}\bigg{)}\phi_{n}(\boldsymbol{\ell}) \tag{14}\] \[+\bigg{(}\boldsymbol{\ell}-4\cdot\frac{(\overline{VL})}{( \overline{LL})}\boldsymbol{\ell}^{2}-4\cdot\frac{R^{2}-(\overline{VV})}{( \overline{LL})}\boldsymbol{\ell}^{3}\bigg{)}\phi^{\prime}_{n}(\boldsymbol{ \ell})-(D-n-1)I^{(0)}_{n}\] \[= \sum_{b=1}^{n}\Bigg{\{}X^{(b)}\bigg{(}(D-n)\boldsymbol{\ell}\cdot \phi_{n;\widehat{b}}(\boldsymbol{\ell})+\boldsymbol{\ell}^{2}\cdot\phi^{\prime }_{n;\widehat{b}}(\boldsymbol{\ell})\bigg{)}+2Y^{(b)}\bigg{(}\boldsymbol{ \ell}^{3}\cdot\phi^{\prime}_{n;\widehat{b}}(\boldsymbol{\ell})+\boldsymbol{ \ell}^{2}\cdot\phi_{n;\widehat{b}}(\boldsymbol{\ell})\bigg{)}\Bigg{\}}.\]
We know that any integral can be written as a linear combination of certain irreducible scalar integrals (which are the master integrals in arbitrary spacetime dimension) with coefficients that are rational functions of the external momenta, masses, and space-time dimension \(D\):
\[I^{(r)}_{n}=\sum_{\mathbf{b}\subseteq\{1,2,\ldots,n\}}C^{(r)}_{ n\to n;\widehat{\mathbf{b}}}\cdot I^{(0)}_{n;\widehat{\mathbf{b}}}\, \tag{15}\] \[\phi_{n}(\boldsymbol{\ell})=\sum_{\mathbf{b}\subseteq\{1,2,\ldots,n\}}\Big{\{}\sum_{r=0}^{\infty}\boldsymbol{\ell}^{r}\cdot C^{(r)}_{n\to n; \widehat{\mathbf{b}}}\Big{\}}\cdot I^{(0)}_{n;\widehat{\mathbf{b}}}=\sum_{ \mathbf{b}\subseteq\{1,2,\ldots,n\}}\mathbf{GF}_{n\to n;\widehat{\mathbf{b}}} (\boldsymbol{\ell})\cdot I^{(0)}_{n;\widehat{\mathbf{b}}}\.\]
Thus, equation (14) is transformed into a recursive formula for the generation functions of the reduction coefficients.
\[\bigg{(}(D-n-1)-2(D-n)\cdot\frac{(\overline{VL})}{(\overline{ LL})}\boldsymbol{\ell}-4\cdot\frac{R^{2}-(\overline{VV})}{(\overline{ LL})}\boldsymbol{\ell}^{2}\bigg{)}\mathbf{GF}_{n\to n;\widehat{\mathbf{b}}}( \boldsymbol{\ell}) \tag{16}\] \[+\bigg{(}\boldsymbol{\ell}-4\cdot\frac{(\overline{VL})}{( \overline{LL})}\boldsymbol{\ell}^{2}-4\cdot\frac{R^{2}-(\overline{VV})}{( \overline{LL})}\boldsymbol{\ell}^{3}\bigg{)}\mathbf{GF}^{\prime}_{n\to n; \widehat{\mathbf{b}}}(\boldsymbol{\ell})\] \[= \sum_{b_{i}\in\mathbf{b}}\Bigg{\{}X^{(b_{i})}\bigg{(}(D-n) \boldsymbol{\ell}\cdot\mathbf{GF}_{n;\widehat{b_{i}}\to n;\widehat{\mathbf{b} }}(\boldsymbol{\ell})+\boldsymbol{\ell}^{2}\cdot\mathbf{GF}^{\prime}_{n; \widehat{b_{i}}\to n;\widehat{\mathbf{b}}}(\boldsymbol{\ell})\bigg{)}\] \[+2Y^{(b_{i})}\bigg{(}\boldsymbol{\ell}^{3}\cdot\mathbf{GF}^{ \prime}_{n;\widehat{b_{i}}\to n;\widehat{\mathbf{b}}}(\boldsymbol{\ell})+ \boldsymbol{\ell}^{2}\cdot\mathbf{GF}_{n;\widehat{b_{i}}\to n;\widehat{ \mathbf{b}}}(\boldsymbol{\ell})\bigg{)}\Bigg{\}}+(D-n-1)C^{(0)}_{n\to n; \widehat{\mathbf{b}}}\,\]
where \(C^{(0)}_{n\to n;\widehat{\mathbf{b}}}\) is the reduction coefficient of the scalar integral \(I^{(0)}_{n}\) to the Master Integral \(I^{(0)}_{n;\widehat{\mathbf{b}}}\). Because \(I^{(0)}_{n}\) is an irreducible scalar integral, the reduction coefficient \(C^{(0)}_{n\to n;\widehat{\mathbf{b}}}\) is 1 if \(\mathbf{b}\) is the empty set, i.e., \(\mathbf{b}=\emptyset\), and \(C^{(0)}_{n\to n;\widehat{\mathbf{b}}}=0\) otherwise. \(\mathbf{GF}_{n;\widehat{b_{i}}\to n;\widehat{\mathbf{b}}}(\boldsymbol{\ell})\) on the right-hand side of (16) is the generation function for the reduction coefficient of \(\phi_{n;\widehat{b_{i}}}(\boldsymbol{\ell})\) to the Master Integral \(I^{(0)}_{n;\widehat{\mathbf{b}}}\). Equation (16) serves as the cornerstone of this paper, drawing a connection between the generation functions in the reduction of an \(n\)-gon to an \((n-k)\)-gon and of an \((n-1)\)-gon to an \(((n-1)-(k-1))\)-gon. First, we set \(\mathbf{b}=\emptyset\); then the terms inside the curly braces on the right-hand side of (16) vanish. The remaining part is a first-order non-homogeneous ordinary differential equation for the generation function \(\mathbf{GF}_{n\to n}(\boldsymbol{\ell})\). By solving this differential equation, we obtain the analytic expression of the generation function for an \(n\)-gon to an \(n\)-gon. Once we have obtained the analytic expression for the generation function for the reduction of an \(n\)-gon to an \((n-(k-1))\)-gon, due to the arbitrariness of \(n\), we also know the analytic expression for an \((n-1)\)-gon to an \(((n-1)-(k-1))\)-gon (which is what appears on the right-hand side of the equation). Then, equation (16) becomes a first-order non-homogeneous differential equation for the generation function of an \(n\)-gon to an \((n-k)\)-gon. Repeating this process, we can calculate all the generation functions.
## 3 \(n\)-gon to \(n\)-gon
For the case of reducing an \(n\)-gon to an \(n\)-gon, we have \(\mathbf{b}=\emptyset\). In this case, \(C_{n\to n}^{(0)}=1\) and the terms inside the curly braces on the right-hand side of equation (16) vanish. Consequently, the differential equation for the generation function \(\mathbf{GF}_{n\to n}(\ell)\) simplifies to:
\[\begin{split}&\left((D-n-1)-2(D-n)\cdot\frac{(\overline{VL})}{( \overline{LL})}\ell-4\cdot\frac{R^{2}-(\overline{VV})}{(\overline{LL})}\ell^ {2}\right)\mathbf{GF}_{n\to n}(\ell)\\ &+\left(\ell-4\cdot\frac{(\overline{VL})}{(\overline{LL})}\ell^{ 2}-4\cdot\frac{R^{2}-(\overline{VV})}{(\overline{LL})}\ell^{3}\right) \mathbf{GF}^{\prime}_{n\to n}(\ell)=(D-n-1).\end{split} \tag{17}\]
The general solution to this differential equation is2:
Footnote 2: See Appendix A.
\[\ell^{1-D+n}\left(1-x_{+}\cdot\ell\right)^{\frac{D-n-2}{2}}\left(1-x_{-}\cdot \ell\right)^{\frac{D-n-2}{2}}\cdot C_{1}+\frac{1}{1-x_{+}\cdot\ell}\cdot\ _{2}F_{1}\left(\begin{array}{c}1,\frac{D-n}{2}\\ D-n\end{array}\middle|\frac{(x_{-}-x_{+})\cdot\ell}{1-x_{+}\cdot\ell}\right), \tag{18}\]
where
\[x_{\pm}=\frac{2\bigg{(}(\overline{VL})\pm\sqrt{(\overline{LL})R^{2}+( \overline{VL})^{2}-(\overline{LL})(\overline{VV})}\bigg{)}}{(\overline{LL})}. \tag{19}\]
The Generalized Hypergeometric Function is defined as:
\[{}_{p}F_{q}\left(\begin{array}{c}a_{1},a_{2},\cdots,a_{p}\\ b_{1},b_{2},\cdots,b_{q}\end{array}\middle|z\right)=\sum_{n=0}^{\infty}\frac{( a_{1})_{n}\cdots(a_{p})_{n}}{(b_{1})_{n}\cdots(b_{q})_{n}}\cdot\frac{z^{n}}{n!}, \tag{20}\]
where we have used the **Pochhammer symbol**
\[(x)_{n}\equiv\frac{\Gamma(x+n)}{\Gamma(x)}=\prod_{i=1}^{n}(x+(i-1)). \tag{21}\]
At first glance, one might expect the undetermined constant to be resolved by the initial condition \(\mathbf{GF}_{n\to n}(0)=C_{n\to n}^{(0)}=1\). However, this does not work in our case. So, how should we determine \(C_{1}\)? The definition of \(\mathbf{GF}_{n\to n}(\ell)\) specifies that the generation function should be representable as a Taylor series in \(\ell\). Nevertheless, the term
\[\ell^{1-D+n}\left(1-x_{+}\cdot\ell\right)^{\frac{D-n-2}{2}}\left(1-x_{-}\cdot \ell\right)^{\frac{D-n-2}{2}}, \tag{22}\]
cannot be expanded into a Taylor series3 of \(\ell\). Thus, the constant \(C_{1}=0\). Ultimately, we obtain the generation function of the reduction from an \(n\)-gon to an \(n\)-gon as follows:
Footnote 3: Typically, \(D\) is not selected as an integer due to dimensional regularization
\[\mathbf{GF}_{n\to n}(\ell)=\frac{1}{1-x_{+}\cdot\ell}\cdot\ _{2}F_{1}\left( \begin{array}{c}1,\frac{D-n}{2}\\ D-n\end{array}\right|\frac{(x_{-}-x_{+})\ell}{1-x_{+}\cdot\ell}\right). \tag{10}\]
### Analytical Results
Having derived the formula for the generation function for the reduction from an \(n\)-gon to an \(n\)-gon above, we now elucidate the corresponding analytical formalism.
Substituting (11) into (10), we can obtain:
\[\mathbf{GF}_{n\to n}(\ell)=\frac{1}{1-\frac{2\left[(\overline{VL})+\sqrt{(\overline{LL})R^{2}+(\overline{VL})^{2}-(\overline{LL})(\overline{VV})}\right]}{(\overline{LL})}\cdot\ell}\cdot\ _{2}F_{1}\left(\begin{array}{c}1,\frac{D-n}{2}\\ D-n\end{array}\right|\boldsymbol{x}(\ell)\right), \tag{11}\]
where
\[\boldsymbol{x}(\ell)=-\frac{4\sqrt{(\overline{LL})R^{2}+(\overline{VL})^{2}-(\overline{LL})(\overline{VV})}\cdot\ell}{(\overline{LL})-2\left[(\overline{VL})+\sqrt{(\overline{LL})R^{2}+(\overline{VL})^{2}-(\overline{LL})(\overline{VV})}\right]\cdot\ell}, \tag{12}\]
and
\[(\overline{LL})=L\cdot Q^{-1}\cdot L,\qquad(\overline{VL})=V\cdot Q^{-1}\cdot L,\qquad(\overline{VV})=V\cdot Q^{-1}\cdot V,\]
with
\[Q=\begin{pmatrix}M_{1}^{2}&\frac{M_{1}^{2}+M_{2}^{2}-(q_{1}-q_{2})^{2}}{2}&\cdots&\frac{M_{1}^{2}+M_{n}^{2}-(q_{1}-q_{n})^{2}}{2}\\ \frac{M_{2}^{2}+M_{1}^{2}-(q_{2}-q_{1})^{2}}{2}&M_{2}^{2}&\cdots&\frac{M_{2}^{2}+M_{n}^{2}-(q_{2}-q_{n})^{2}}{2}\\ \vdots&\vdots&\ddots&\vdots\\ \frac{M_{n}^{2}+M_{1}^{2}-(q_{n}-q_{1})^{2}}{2}&\frac{M_{n}^{2}+M_{2}^{2}-(q_{n}-q_{2})^{2}}{2}&\cdots&M_{n}^{2}\end{pmatrix},\qquad L=\begin{pmatrix}1\\ 1\\ \vdots\\ 1\end{pmatrix},\qquad V=\begin{pmatrix}R\cdot q_{1}\\ R\cdot q_{2}\\ \vdots\\ R\cdot q_{n}\end{pmatrix}.\]
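These quantities are straightforward to evaluate numerically. The sketch below does so with `numpy`, assuming a mostly-minus Minkowski metric; the helper names and any sample inputs are illustrative only.

```python
import numpy as np

# Compute (LL), (VL), (VV) and x± from momenta q_i, masses M_i and the
# auxiliary vector R, following the definitions above.
def mdot(a, b):  # Minkowski product with signature (+, -, -, -)
    return a[0] * b[0] - a[1:] @ b[1:]

def gram_quantities(qs, Ms, R):
    n = len(qs)
    Q = np.array([[(Ms[i]**2 + Ms[j]**2 - mdot(qs[i] - qs[j], qs[i] - qs[j])) / 2
                   for j in range(n)] for i in range(n)])
    Qinv = np.linalg.inv(Q)
    L, V = np.ones(n), np.array([mdot(R, q) for q in qs])
    LL, VL, VV = L @ Qinv @ L, V @ Qinv @ L, V @ Qinv @ V
    disc = np.sqrt(LL * mdot(R, R) + VL**2 - LL * VV + 0j)  # may be complex
    return LL, VL, VV, 2 * (VL + disc) / LL, 2 * (VL - disc) / LL
```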
By expanding the generation function as a series in terms of \(\ell\), we obtain:
\[{\bf GF}_{n\to n}(\ell)=\sum_{r=0}^{\infty}\ell^{r}\cdot C_{n\to n}^{(r)}, \tag{3.11}\]
where \(C_{n\to n}^{(r)}\) are the reduction coefficients of tensor integral
\[I_{n}^{(r)}=\int\frac{d^{D}l}{i\pi^{D/2}}\frac{(2R\cdot l)^{r}}{\prod_{j=1}^{n} (l-q_{j})^{2}-M_{j}^{2}}, \tag{3.12}\]
to the irreducible scalar integral (Master Integral) \(I_{n}^{(0)}\). Furthermore, using the Taylor series expansion,
\[{}_{p}F_{q}\left(\begin{array}{c}a_{1},a_{2},\cdots,a_{p}\\ b_{1},b_{2},\cdots,b_{q}\end{array}\bigg{|}\frac{m\cdot z}{1-n\cdot z}\right)=1+\sum_{k=1}^{\infty}\Bigg{(}\sum_{i=1}^{k}\frac{{\bf C}_{k-1}^{i-1}(a_{1})_{i}(a_{2})_{i}\cdots(a_{p})_{i}m^{i}n^{k-i}}{i!(b_{1})_{i}(b_{2})_{i}\cdots(b_{q})_{i}}\Bigg{)}z^{k}, \tag{3.13}\]
where \({\bf C}_{k-1}^{i}\) are binomial coefficients defined as \({\bf C}_{k-1}^{i}=\frac{(k-1)!}{i!(k-1-i)!}\), we can obtain the coefficients for each order in the series expansion:
\[C_{n\to n}^{(r)}=x_{+}^{r}+\sum_{j=0}^{r-1}\sum_{i=0}^{j}\Bigg{(}\frac{(\frac{D-n}{2})_{i+1}}{(D-n)_{i+1}}\big{(}x_{-}-x_{+}\big{)}^{i+1}x_{+}^{r-1-i}{\bf C}_{j}^{i}\Bigg{)}. \tag{3.14}\]
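These coefficients can be cross-checked numerically against the Taylor expansion of the closed form (a sketch with `mpmath`; the kinematic values below are arbitrary sample numbers, not physical data):

```python
import mpmath as mp

# Compare the series coefficients C^{(r)} of Eq. (3.14) with the Taylor
# expansion of GF(t) = 1/(1-x+ t) * 2F1(1,(D-n)/2; D-n; (x- - x+)t/(1-x+ t)).
mp.mp.dps = 30
D, n = mp.mpf('4.5'), 3                 # generic non-integer dimension
xp, xm = mp.mpf('0.3'), mp.mpf('-0.2')  # sample values of x+ and x-
a = (D - n) / 2

gf = lambda t: 1 / (1 - xp * t) * mp.hyp2f1(1, a, 2 * a,
                                            (xm - xp) * t / (1 - xp * t))
taylor = mp.taylor(gf, 0, 6)            # coefficients of t^0 .. t^6

def C(r):
    total = xp ** r
    for j in range(r):
        for i in range(j + 1):
            total += (mp.rf(a, i + 1) / mp.rf(2 * a, i + 1)
                      * (xm - xp) ** (i + 1) * xp ** (r - 1 - i)
                      * mp.binomial(j, i))
    return total

print([mp.chop(taylor[r] - C(r)) for r in range(7)])  # all entries ~ 0
```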
## 4 \(n\)-gon to \((n-1)\)-gon
Next, let us consider the case of the \((n-1)\)-gon. Without loss of generality, we select the Master Integral as follows:
\[I_{n,\widehat{a_{1}}}^{(0)}=\int\frac{d^{D}l}{i\pi^{D/2}}\frac{1}{\prod_{j=1,j\neq a_{1}}^{n}\left[(l-q_{j})^{2}-M_{j}^{2}\right]}. \tag{4.1}\]
In this case, \(C_{n\to n;\widehat{a_{1}}}^{(0)}=0\). Then equation (2.16) transforms into:
\[\begin{split}&\Bigg{(}(D-n-1)-2(D-n)\cdot\frac{(\overline{VL})}{(\overline{LL})}\ell-4\cdot\frac{R^{2}-(\overline{VV})}{(\overline{LL})}\ell^{2}\Bigg{)}\,{\bf GF}_{n\to n;\widehat{a_{1}}}(\ell)\\ +&\Bigg{(}\ell-4\cdot\frac{(\overline{VL})}{(\overline{LL})}\ell^{2}-4\cdot\frac{R^{2}-(\overline{VV})}{(\overline{LL})}\ell^{3}\Bigg{)}\,{\bf GF}_{n\to n;\widehat{a_{1}}}^{\prime}(\ell)\\ =& X^{(a_{1})}\bigg{(}(D-n)\ell\cdot{\bf GF}_{n;\widehat{a_{1}}\to n;\widehat{a_{1}}}(\ell)+\ell^{2}\cdot{\bf GF}_{n;\widehat{a_{1}}\to n;\widehat{a_{1}}}^{\prime}(\ell)\bigg{)}\\ +&2Y^{(a_{1})}\bigg{(}\ell^{3}\cdot{\bf GF}_{n;\widehat{a_{1}}\to n;\widehat{a_{1}}}^{\prime}(\ell)+\ell^{2}\cdot{\bf GF}_{n;\widehat{a_{1}}\to n;\widehat{a_{1}}}(\ell)\bigg{)},\end{split} \tag{4.2}\]
where \({\bf GF}_{n;\widehat{a_{1}}\to n;\widehat{a_{1}}}(\ell)\) can be obtained from (3.7). All we need to do is replace all instances of \(n\) with \((n-1)\) in expression (3.7), and replace all instances of \((\overline{LL}),(\overline{VL}),(\overline{VV})\) with \((\overline{LL})_{a_{1}},(\overline{VL})_{a_{1}},(\overline{VV})_{a_{1}}\) respectively. That is, \({\bf GF}_{n;\widehat{a_{1}}\to n;\widehat{a_{1}}}(\ell)=[{\bf GF}_{n-1\to n -1}(\ell)]_{a_{1}}\),
\[{\bf GF}_{n,\widehat{a_{1}}\to n,\widehat{a_{1}}}(\ell)= \frac{1}{1-[x_{+}]_{a_{1}}\cdot\ell}\cdot\,\,_{2}F_{1}\left(\begin{array} []{c}1,\frac{D-n+1}{2}\\ D-n+1\end{array}\bigg{|}\frac{[x_{-}-x_{+}]_{a_{1}}\cdot\ell}{1-[x_{+}]_{a_{1}} \cdot\ell}\right), \tag{4.3}\]
where the notation
\[[x_{\pm}]_{a_{1}}=\frac{2\Big{(}(\overline{VL})_{a_{1}}\pm\sqrt{(\overline{LL})_{a _{1}}R^{2}+(\overline{VL})_{a_{1}}^{2}-(\overline{LL})_{a_{1}}(\overline{VV})_{a _{1}}}\Big{)}}{(\overline{LL})_{a_{1}}}, \tag{4.4}\]
as defined in Section 2. The general solution to equation (4.2) is4
Footnote 4: See Appendix A.
\[\begin{split}&\ell^{1-D+n}(1-x_{+}\cdot\ell)^{\frac{-2+D-n}{2}}(1-x_{-}\cdot\ell)^{\frac{-2+D-n}{2}}\\ \times&\Bigg{\{}C_{1}+\int_{0}^{\ell}\mathbf{\iota}^{-1+D-n}(1-x_{+}\cdot\mathbf{\iota})^{\frac{n-D}{2}}(1-x_{-}\cdot\mathbf{\iota})^{\frac{n-D}{2}}\\ &\times\Bigg{(}2Y^{(a_{1})}\big{(}\mathbf{\iota}^{2}\cdot\mathbf{GF}^{\prime}_{n;\widehat{a_{1}}\to n;\widehat{a_{1}}}(\mathbf{\iota})+\mathbf{\iota}\cdot\mathbf{GF}_{n;\widehat{a_{1}}\to n;\widehat{a_{1}}}(\mathbf{\iota})\big{)}\\ &+X^{(a_{1})}\big{(}(D-n)\mathbf{GF}_{n;\widehat{a_{1}}\to n;\widehat{a_{1}}}(\mathbf{\iota})+\mathbf{\iota}\cdot\mathbf{GF}^{\prime}_{n;\widehat{a_{1}}\to n;\widehat{a_{1}}}(\mathbf{\iota})\big{)}\Bigg{)}d\mathbf{\iota}\Bigg{\}}.\end{split} \tag{4.5}\]
Clearly, the initial condition \(\mathbf{GF}_{n\to n,\widehat{a_{1}}}(0)=0\) is insufficient to determine the undetermined coefficient \(C_{1}\). However, since we require the generation function to be a Taylor series of \(\ell\), we must have \(C_{1}=0\).5
Footnote 5: \((1-x_{+}\cdot\mathbf{\iota})^{\frac{n-D}{2}}(1-x_{-}\cdot\mathbf{\iota})^{\frac{n-D}{2}}\) and the terms in the large square brackets can be expanded as a Taylor series in \(\mathbf{\iota}\). The factor \(\mathbf{\iota}^{-1+D-n}\) in the integrand will cancel the factor \(\ell^{1-D+n}\) outside the integral. This makes the result a Taylor series in \(\ell\) after integrating.
The integral in (4.5) cannot be evaluated directly. An intuitive idea would be to first expand the integrand into a series in terms of \(\mathbf{\iota}\), and then integrate. However, fully expanding a hypergeometric function into a series would render the expression overly complex. Therefore, we decide to select a suitable integral formula and perform a series expansion only on part of the integrand. The integral formula we select is
\[\int dz\ _{p}F_{q}\left(\begin{matrix}a_{1},...,a_{p}\\ b_{1},...,b_{q}\end{matrix}\Big{|}k\cdot z\right)\cdot z^{m}=\frac{z^{m+1}}{m +1}\cdot\ _{(p+1)}F_{(q+1)}\left(\begin{matrix}a_{1},...,a_{p},1+m\\ b_{1},...,b_{q},2+m\end{matrix}\Big{|}k\cdot z\right)+Const. \tag{4.6}\]
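This formula follows from term-by-term integration of the hypergeometric series; a quick numeric sanity check (a sketch with arbitrary sample parameters) is:

```python
import mpmath as mp

# Verify Eq. (4.6) numerically for a 2F1 example: integrate the left-hand
# side by quadrature and compare with the 3F2 on the right-hand side.
mp.mp.dps = 25
a1, a2, b1 = mp.mpf('1'), mp.mpf('0.75'), mp.mpf('1.5')
k, m, z = mp.mpf('0.4'), 3, mp.mpf('0.7')

lhs = mp.quad(lambda t: mp.hyper([a1, a2], [b1], k * t) * t ** m, [0, z])
rhs = z ** (m + 1) / (m + 1) * mp.hyper([a1, a2, 1 + m], [b1, 2 + m], k * z)
print(mp.chop(lhs - rhs))  # ~ 0
```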
Then in order to evaluate the integral explicitly, we need to change the integration variable
\[\mathbf{\iota}\to\mathbf{x}_{a_{1}}(\mathbf{\iota})=\frac{\mathbf{\iota}}{1-[x_{+}]_{a_{1}} \cdot\mathbf{\iota}}. \tag{4.7}\]
Note that in (4.5) there are terms involving the derivative of the generating function \(\mathbf{GF}_{n;\widehat{a_{1}}\to n;\widehat{a_{1}}}\), which requires the use of the following derivative formula for the generalized hypergeometric function
\[\frac{d}{dz}\ _{p}F_{q}\left(\begin{matrix}a_{1},...,a_{p}\\ b_{1},...,b_{q}\end{matrix}\Big{|}z\right)=\frac{\prod_{i=1}^{p}a_{i}}{\prod_{j=1}^{q}b_{j}}\cdot\ _{p}F_{q}\left(\begin{matrix}1+a_{1},...,1+a_{p}\\ 1+b_{1},...,1+b_{q}\end{matrix}\Big{|}z\right). \tag{4.8}\]
Substituting (4.3) in (4.5), after altering the integration variable, the integral part becomes
\[\int_{0}^{x_{a_{1}}(\ell)}x^{D-n-1}\Bigg{\{}(1+{\bf A}_{a_{1}}x+{\bf B }_{a_{1}}x^{2})^{\frac{n-D}{2}}({\bf P}^{(0)}_{a_{1}}+{\bf P}^{(1)}_{a_{1}}x) \cdot\ _{2}F_{1}\left(\frac{1,\frac{D-n+1}{2}}{D-n+1}\left|[x_{-}-x_{+}]_{a_{1}} \cdot x\right.\right) \tag{4.9}\] \[+(1+{\bf A}_{a_{1}}x+{\bf B}_{a_{1}}x^{2})^{\frac{n-D}{2}}({\bf Q} ^{(1)}_{a_{1}}x+{\bf Q}^{(2)}_{a_{1}}x^{2})\cdot\ _{2}F_{1}\left(\frac{2,\frac{D-n+3}{2}}{D-n+2}\left|[x_{-}-x_{+}]_{a_{1}}\cdot x \right.\right)\Bigg{\}}dx,\]
where
\[{\bf A}_{a_{1}}=-(x_{+}+x_{-})+2[x_{+}]_{a_{1}}, \tag{4.10}\] \[{\bf B}_{a_{1}}=x_{+}\cdot x_{-}-(x_{+}+x_{-})\cdot[x_{+}]_{a_{1} }+[x_{+}^{2}]_{a_{1}},\] \[{\bf P}^{(0)}_{a_{1}}(n)=(D-n)\cdot X^{(a_{1})},\] \[{\bf P}^{(1)}_{a_{1}}=2Y^{(a_{1})}+[x_{+}]_{a_{1}}\cdot X^{(a_{1} )},\] \[{\bf Q}^{(1)}_{a_{1}}=-[x_{+}-x_{-}]_{a_{1}}\cdot X^{(a_{1})}/2,\] \[{\bf Q}^{(2)}_{a_{1}}=-\frac{1}{2}\cdot(2Y^{(a_{1})}+[x_{+}]_{a_{ 1}}\cdot X^{(a_{1})})\cdot[x_{+}-x_{-}]_{a_{1}}.\]
We expand
\[(1+{\bf A}_{a_{1}}x+{\bf B}_{a_{1}}x^{2})^{\frac{n-D}{2}}({\bf P}^ {(0)}_{a_{1}}+{\bf P}^{(1)}_{a_{1}}x)=\sum_{m=0}^{\infty}{\bf C}^{(1)}_{a_{1} }(n;m)\cdot x^{m}, \tag{4.11}\] \[(1+{\bf A}_{a_{1}}x+{\bf B}_{a_{1}}x^{2})^{\frac{n-D}{2}}({\bf Q} ^{(1)}_{a_{1}}x+{\bf Q}^{(2)}_{a_{1}}x^{2})=\sum_{m=0}^{\infty}{\bf C}^{(2)}_{ a_{1}}(n;m)\cdot x^{m},\]
as a Taylor series of \(x\). By formula
\[(1+{\bf A}x+{\bf B}x^{2})^{c}=\sum_{m=0}^{\infty}\Big{\{}\sum_{i=0}^{\lfloor \frac{m}{2}\rfloor}\frac{(c-m+i+1)_{(m-i)}}{(m-2i)!\cdot i!}{\bf A}^{m-2i}{ \bf B}^{i}\Big{\}}x^{m}, \tag{4.12}\]
we have the expansion coefficients as follows:
\[{\bf C}^{(1)}_{a_{1}}(n;m)= {\bf P}^{(0)}_{a_{1}}(n)\cdot\sum_{i=0}^{\lfloor\frac{m}{2}\rfloor }\frac{(\frac{n-D}{2}-m+i+1)_{(m-i)}}{(m-2i)!\cdot i!}{\bf A}^{m-2i}_{a_{1}}{ \bf B}^{i}_{a_{1}} \tag{4.13}\] \[+ {\bf P}^{(1)}_{a_{1}}\cdot\sum_{i=0}^{\lfloor\frac{m-1}{2}\rfloor }\frac{(\frac{n-D}{2}-m+i+2)_{(m-1-i)}}{(m-1-2i)!\cdot i!}{\bf A}^{m-1-2i}_{a_ {1}}{\bf B}^{i}_{a_{1}},\] \[{\bf C}^{(2)}_{a_{1}}(n;m)= {\bf Q}^{(1)}_{a_{1}}\cdot\sum_{i=0}^{\lfloor\frac{m-1}{2}\rfloor }\frac{(\frac{n-D}{2}-m+i+2)_{(m-1-i)}}{(m-1-2i)!\cdot i!}{\bf A}^{m-1-2i}_{a_ {1}}{\bf B}^{i}_{a_{1}}\] \[+ {\bf Q}^{(2)}_{a_{1}}\cdot\sum_{i=0}^{\lfloor\frac{m-2}{2}\rfloor }\frac{(\frac{n-D}{2}-m+i+3)_{(m-2-i)}}{(m-2-2i)!\cdot i!}{\bf A}^{m-2-2i}_{a_ {1}}{\bf B}^{i}_{a_{1}}.\]
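The expansion (4.12) underlying these coefficients can be spot-checked symbolically (a sketch with `sympy`; the truncation order is arbitrary):

```python
import sympy as sp

# Check the coefficients of (1 + A x + B x^2)^c given by Eq. (4.12)
# against a direct series expansion.
x, A, B, c = sp.symbols('x A B c')
ser = sp.series((1 + A * x + B * x ** 2) ** c, x, 0, 5).removeO()

def coeff(m):
    return sum(sp.rf(c - m + i + 1, m - i)
               / (sp.factorial(m - 2 * i) * sp.factorial(i))
               * A ** (m - 2 * i) * B ** i for i in range(m // 2 + 1))

for m in range(5):
    print(m, sp.simplify(sp.expand_func(ser.coeff(x, m) - coeff(m))))  # 0
```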
Then the integral part (excluding the constant) becomes:
\[\sum_{m=0}^{\infty}{\bf C}^{(1)}_{a_{1}}(n;m)\int_{0}^{x_{a_{1}}( \ell)}dx\ x^{D-n-1+m}\cdot\ _{2}F_{1}\left(\frac{1,\frac{D-n+1}{2}}{D-n+1}\left|[x_{-}-x_{+}]_{a_{1}}\cdot x\right)\right. \tag{4.14}\] \[+\sum_{m=0}^{\infty}{\bf C}^{(2)}_{a_{1}}(n;m)\int_{0}^{x_{a_{1}}( \ell)}dx\ x^{D-n-1+m}\cdot\ _{2}F_{1}\left(\frac{2,\frac{D-n+3}{2}}{D-n+2}\left|[x_{-}-x_{+}]_{a_{1}}\cdot x \right).\]
By using the integral formula (4.6) for the generalized hypergeometric function, we can explicitly solve for the generation function of the reduction from an \(n\)-gon to an \((n-1)\)-gon,
\[{\bf GF}_{n\to n;\widehat{a_{1}}}(\ell) \tag{4.15}\] \[= \left((1-x_{+}\cdot\ell)(1-x_{-}\cdot\ell)\right)^{\frac{-2+D-n}{ 2}}\times\sum_{m=0}^{\infty}\Bigg{\{}\frac{\ell^{m+1}}{(m+D-n)(1-[x_{+}]_{a_ {1}}\cdot\ell)^{m+D-n}}\] \[\times\Bigg{\{}{\bf C}^{(1)}_{a_{1}}(n;m)\cdot\ _{3}F_{2}\left( \begin{array}{c}1,\frac{D-n+1}{2},m+D-n\\ D-n+1,m+D-n+1\end{array}\left|\frac{[x_{-}-x_{+}]_{a_{1}}\cdot\ell}{1-[x_{+}]_{ a_{1}}\cdot\ell}\right.\right)\] \[+{\bf C}^{(2)}_{a_{1}}(n;m)\cdot\ _{3}F_{2}\left( \begin{array}{c}2,\frac{D-n+3}{2},m+D-n\\ D-n+2,m+D-n+1\end{array}\left|\frac{[x_{-}-x_{+}]_{a_{1}}\cdot\ell}{1-[x_{+}]_{ a_{1}}\cdot\ell}\right.\right)\Bigg{\}}\Bigg{\}}.\]
## 5 \(n\)-gon to \((n-2)\)-gon
Now, let us discuss the reduction from an \(n\)-gon to an \((n-2)\)-gon. Without loss of generality, we select the Master Integral:
\[I^{(0)}_{n,\widehat{a_{1}};\widehat{a_{2}}}=\int\frac{d^{D}l}{i\pi^{\frac{D}{2 }}}\frac{1}{\prod_{j=1,j\neq a_{1},a_{2}}^{n}(l-q_{j})^{2}-M_{j}^{2}}. \tag{5.1}\]
In this case, \(C^{(0)}_{n\to n;\widehat{a_{1}};\widehat{a_{2}}}=0\). Then recursive relation (2.16) transforms into
\[\left((D-n-1)-2(D-n)\cdot\frac{(\overline{VL})}{(\overline{LL})} \ell-4\cdot\frac{R^{2}-(\overline{VV})}{(\overline{LL})}\ell^{2}\right){\bf GF }_{n\to n;\widehat{a_{1}},\widehat{a_{2}}}(\ell) \tag{5.2}\] \[+\left(\ell-4\cdot\frac{(\overline{VL})}{(\overline{LL})}\ell^{2} -4\cdot\frac{R^{2}-(\overline{VV})}{(\overline{LL})}\ell^{3}\right){\bf GF}^ {\prime}_{n\to n;\widehat{a_{1}},\widehat{a_{2}}}(\ell)\] \[= \Bigg{\{}2Y^{(a_{1})}\left(\ell^{3}\cdot{\bf GF}^{\prime}_{n; \widehat{a_{1}}\to n;\widehat{a_{1}},\widehat{a_{2}}}(\ell)+\ell^{2}\cdot{\bf GF }_{n;\widehat{a_{1}}\to n;\widehat{a_{1}},\widehat{a_{2}}}(\ell)\right)\] \[+X^{(a_{1})}\left((D-n)\ell\cdot{\bf GF}_{n;\widehat{a_{1}}\to n; \widehat{a_{1}},\widehat{a_{2}}}(\ell)+\ell^{2}\cdot{\bf GF}^{\prime}_{n; \widehat{a_{1}}\to n;\widehat{a_{1}},\widehat{a_{2}}}(\ell)\right)\Bigg{\}}\] \[+\Bigg{\{}2Y^{(a_{2})}\left(\ell^{3}\cdot{\bf GF}^{\prime}_{n; \widehat{a_{2}}\to n;\widehat{a_{1}},\widehat{a_{2}}}(\ell)+\ell^{2}\cdot{\bf GF }_{n;\widehat{a_{2}}\to n;\widehat{a_{1}},\widehat{a_{2}}}(\ell)\right)\] \[+X^{(a_{2})}\left((D-n)\ell\cdot{\bf GF}_{n;\widehat{a_{2}}\to n; \widehat{a_{1}},\widehat{a_{2}}}(\ell)+\ell^{2}\cdot{\bf GF}^{\prime}_{n; \widehat{a_{2}}\to n;\widehat{a_{1}},\widehat{a_{2}}}(\ell)\right)\Bigg{\}}.\]
The general solution of (5.2) is
\[\begin{split}&\ell^{1-D+n}(1-x_{+}\cdot\ell)^{\frac{-2+D-n}{2}}(1-x_{-}\cdot\ell)^{\frac{-2+D-n}{2}}\\ &\times\Bigg\{C_{1}+\int_{0}^{\ell}\iota^{-1+D-n}(1-x_{+}\cdot\iota)^{\frac{n-D}{2}}(1-x_{-}\cdot\iota)^{\frac{n-D}{2}}\\ &\quad\times\Big\{2Y^{(a_{1})}\left(\iota^{2}\cdot\mathbf{GF}^{\prime}_{n;\widehat{a_{1}}\to n;\widehat{a_{1},a_{2}}}(\iota)+\iota\cdot\mathbf{GF}_{n;\widehat{a_{1}}\to n;\widehat{a_{1},a_{2}}}(\iota)\right)\\ &\quad+X^{(a_{1})}\left((D-n)\cdot\mathbf{GF}_{n;\widehat{a_{1}}\to n;\widehat{a_{1},a_{2}}}(\iota)+\iota\cdot\mathbf{GF}^{\prime}_{n;\widehat{a_{1}}\to n;\widehat{a_{1},a_{2}}}(\iota)\right)\\ &\quad+2Y^{(a_{2})}\left(\iota^{2}\cdot\mathbf{GF}^{\prime}_{n;\widehat{a_{2}}\to n;\widehat{a_{1},a_{2}}}(\iota)+\iota\cdot\mathbf{GF}_{n;\widehat{a_{2}}\to n;\widehat{a_{1},a_{2}}}(\iota)\right)\\ &\quad+X^{(a_{2})}\left((D-n)\cdot\mathbf{GF}_{n;\widehat{a_{2}}\to n;\widehat{a_{1},a_{2}}}(\iota)+\iota\cdot\mathbf{GF}^{\prime}_{n;\widehat{a_{2}}\to n;\widehat{a_{1},a_{2}}}(\iota)\right)\Big\}d\iota\Bigg\}, \tag{5.3}\end{split}\]
where \(\mathbf{GF}_{n;\widehat{a_{1}}\to n;\widehat{a_{1},a_{2}}}(\ell)\) and \(\mathbf{GF}_{n;\widehat{a_{2}}\to n;\widehat{a_{1},a_{2}}}(\ell)\) can be obtained from (4.15) via
\[\mathbf{GF}_{n;\widehat{a_{1}}\to n;\widehat{a_{1},a_{2}}}(\ell)=[\mathbf{GF}_{n-1\to n-1;\widehat{a_{2}}}(\ell)]_{a_{1}},\qquad\mathbf{GF}_{n;\widehat{a_{2}}\to n;\widehat{a_{1},a_{2}}}(\ell)=[\mathbf{GF}_{n-1\to n-1;\widehat{a_{1}}}(\ell)]_{a_{2}}. \tag{5.4}\]
We select the undetermined constant \(C_{1}=0\), since the generation function should be a Taylor series in \(\ell\). In the integrand of expression (5.3), \(a_{1}\) and \(a_{2}\) are completely symmetric, so we can calculate one half and obtain the other half by exploiting this symmetry. Moreover, since we are going to use the integral formula (4.6) to compute the integral, we split \(\mathbf{GF}_{n;\widehat{a_{1}}\to n;\widehat{a_{1},a_{2}}}(\ell)\) and \(\mathbf{GF}_{n;\widehat{a_{2}}\to n;\widehat{a_{1},a_{2}}}(\ell)\) according to the generalized hypergeometric functions as follows:
\[\begin{split}\mathbf{GF}_{n;\widehat{a_{1}}\to n;\widehat{a_{1},a_{2}}}(\ell)&=\sum_{m_{1}=0}^{\infty}\sum_{l=1,2}[\mathbf{C}_{a_{2}}^{(l)}(n-1;m_{1})]_{a_{1}}\,\mathbf{GF}^{(l)(m_{1})}_{n;\widehat{a_{1}}\to n;\widehat{a_{1},a_{2}}}(\ell),\\ \mathbf{GF}_{n;\widehat{a_{2}}\to n;\widehat{a_{1},a_{2}}}(\ell)&=\sum_{m_{1}=0}^{\infty}\sum_{l=1,2}[\mathbf{C}_{a_{1}}^{(l)}(n-1;m_{1})]_{a_{2}}\,\mathbf{GF}^{(l)(m_{1})}_{n;\widehat{a_{2}}\to n;\widehat{a_{1},a_{2}}}(\ell),\end{split} \tag{5.5}\]
where
\[\mathbf{GF}^{(1)(m_{1})}_{n;\widehat{a_{1}}\to n;\widehat{a_{1},a_{2}}}(\ell)=\frac{1}{m_{1}+D-n+1}(1-[x_{+}]_{a_{1}}\cdot\ell)^{\frac{-1+D-n}{2}}(1-[x_{-}]_{a_{1}}\cdot\ell)^{\frac{-1+D-n}{2}}\] \[\times\Bigg\{\frac{\ell^{m_{1}+1}}{(1-[x_{+}]_{a_{1},a_{2}}\cdot\ell)^{m_{1}+D-n+1}}\cdot\ _{3}F_{2}\left(\begin{matrix}1,\frac{D-n+2}{2},m_{1}+D-n+1\\ D-n+2,m_{1}+D-n+2\end{matrix}\,\Big{|}\,\frac{[x_{-}-x_{+}]_{a_{1},a_{2}}\cdot\ell}{1-[x_{+}]_{a_{1},a_{2}}\cdot\ell}\right)\Bigg\},\]
\[\mathbf{GF}^{(2)(m_{1})}_{n;\widehat{a_{1}}\to n;\widehat{a_{1},a_{2}}}(\ell)=\frac{1}{m_{1}+D-n+1}(1-[x_{+}]_{a_{1}}\cdot\ell)^{\frac{-1+D-n}{2}}(1-[x_{-}]_{a_{1}}\cdot\ell)^{\frac{-1+D-n}{2}}\] \[\times\Bigg\{\frac{\ell^{m_{1}+1}}{(1-[x_{+}]_{a_{1},a_{2}}\cdot\ell)^{m_{1}+D-n+1}}\cdot\ _{3}F_{2}\left(\begin{matrix}2,\frac{D-n+4}{2},m_{1}+D-n+1\\ D-n+3,m_{1}+D-n+2\end{matrix}\,\Big{|}\,\frac{[x_{-}-x_{+}]_{a_{1},a_{2}}\cdot\ell}{1-[x_{+}]_{a_{1},a_{2}}\cdot\ell}\right)\Bigg\},\]
\[\mathbf{GF}_{n;\widehat{a_{2}}\to n;\widehat{a_{1}},\overline{a_{2}}}^{(1)(m_ {1})}(\ell)=\mathbf{GF}_{n;\widehat{a_{1}}\to n;\widehat{a_{1}},\overline{a_{2}}}^{(1)(m_ {1})}(\ell)\Big{|}_{a_{1}\leftrightarrow a_{2}},\]
\[\mathbf{GF}^{(2)(m_{1})}_{n;\widehat{a_{2}}\to n;\widehat{a_{1},a_{2}}}( \epsilon)=\mathbf{GF}^{(2)(m_{1})}_{n;\widehat{a_{1}}\to n;\widehat{a_{1},a_{2}}} (\epsilon)\Big{|}_{a_{1}\leftrightarrow a_{2}}. \tag{5.6}\]
Each part involves only one generalized hypergeometric function. The constants \([\mathbf{C}^{(1)}_{a_{2}}(n-1;m_{1})]_{a_{1}}\), \([\mathbf{C}^{(1)}_{a_{1}}(n-1;m_{1})]_{a_{2}}\), \([\mathbf{C}^{(2)}_{a_{2}}(n-1;m_{1})]_{a_{1}}\), and \([\mathbf{C}^{(2)}_{a_{1}}(n-1;m_{1})]_{a_{2}}\) can be factored out of the integral sign. By this separation, the integral in equation (5.3) can be divided into several parts; we calculate each part separately and add them up. Next we use the following part as an example to illustrate the process:
\[\begin{split}&\int_{0}^{\ell}\iota^{-1+D-n}(1-x_{+}\cdot\iota)^{\frac{n-D}{2}}(1-x_{-}\cdot\iota)^{\frac{n-D}{2}}\\ &\times\Big\{2Y^{(a_{1})}\left(\iota^{2}\cdot\mathbf{GF}^{(1)(m_{1})\,\prime}_{n;\widehat{a_{1}}\to n;\widehat{a_{1},a_{2}}}(\iota)+\iota\cdot\mathbf{GF}^{(1)(m_{1})}_{n;\widehat{a_{1}}\to n;\widehat{a_{1},a_{2}}}(\iota)\right)\\ &+X^{(a_{1})}\left((D-n)\cdot\mathbf{GF}^{(1)(m_{1})}_{n;\widehat{a_{1}}\to n;\widehat{a_{1},a_{2}}}(\iota)+\iota\cdot\mathbf{GF}^{(1)(m_{1})\,\prime}_{n;\widehat{a_{1}}\to n;\widehat{a_{1},a_{2}}}(\iota)\right)\Big\}d\iota. \tag{5.7}\end{split}\]
**The main difference** from what we did in Section 4 is that we use another derivative formula for the generalized hypergeometric function
\[\begin{split}&\frac{d}{dz}\ _{p}F_{q}\left(\left.\begin{matrix}a_{1},a_{2}, \cdots,a_{p-1},\alpha\\ b_{1},b_{2},\cdots,b_{q-1},\alpha+1\end{matrix}\right|\!z\right)\\ =&\frac{\alpha}{z}\cdot\left({}_{(p-1)}F_{(q-1)} \left(\left.\begin{matrix}a_{1},a_{2},\cdots,a_{p-1}\\ b_{1},b_{2},\cdots,b_{q-1}\end{matrix}\right|\!z\right)-\ _{p}F_{q}\left(\left. \begin{matrix}a_{1},a_{2},\cdots,a_{p-1},\alpha\\ b_{1},b_{2},\cdots,b_{q-1},\alpha+1\end{matrix}\right|\!z\right)\right).\end{split} \tag{5.8}\]
After changing the integration variable from \(\iota\) to
\[\iota\to x_{a_{1},a_{2}}(\iota)=\frac{\iota}{1-[x_{+}]_{a_{1},a_{2}}\cdot\iota}, \tag{5.9}\]
the integral (5.7) becomes
\[\begin{split}&\int_{0}^{x_{a_{1},a_{2}}(\ell)}dx\ x^{D-n+m_{1}}\\ &\Bigg{\{}\Big{\{}(1+\mathbf{A}_{1}(a_{1};a_{2})x+\mathbf{A}_{2 }(a_{1};a_{2})x^{2})^{-\frac{D-n}{2}}(1+\mathbf{B}_{1}(a_{1};a_{2})x+\mathbf{ B}_{2}(a_{1};a_{2})x^{2})^{\frac{D-n-1}{2}}\\ &\times(\mathbf{Q}_{0}(a_{1};a_{2})+\mathbf{Q}_{1}(a_{1};a_{2})x) \cdot\ _{2}F_{1}\left(\left.\begin{matrix}1,\frac{D-n+2}{2}\\ D-n+2\end{matrix}\right|\![x_{-}-x_{+}]_{a_{1},a_{2}}\cdot x\right)\Big{\}}\\ +&\Big{\{}(1+\mathbf{A}_{1}(a_{1};a_{2})x+\mathbf{A}_{2}(a_{1};a_{2 })x^{2})^{-\frac{D-n}{2}}(1+\mathbf{B}_{1}(a_{1};a_{2})x+\mathbf{B}_{2}(a_{1}; a_{2})x^{2})^{\frac{D-n-3}{2}}\\ &\times\frac{(D-n-1)\Big{(}\mathbf{P}_{1}(a_{1};a_{2})x+\mathbf{ P}_{2}(a_{1};a_{2})x^{2}\Big{)}}{2(D-n+1+m_{1})}\cdot\ _{3}F_{2}\left(\left.\begin{matrix}1,\frac{D-n+2}{2},m_{1}+D-n+1\\ D-n+2,m_{1}+D-n+2\end{matrix}\right|\![x_{-}-x_{+}]_{a_{1},a_{2}}\cdot x\Big{)} \Big{\}}\Bigg{\}},\end{split} \tag{5.10}\]
where the coefficients are
\[\mathbf{A}_{1}(a_{1};a_{2}) =-[x_{+}+x_{-}]+2[x_{+}]_{a_{1},a_{2}},\] \[\mathbf{A}_{2}(a_{1};a_{2}) =[x_{+}^{2}]_{a_{1},a_{2}}+x_{+}\cdot x_{-}-2(x_{+}+x_{-})[x_{+}]_ {a_{1},a_{2}},\] \[\mathbf{B}_{1}(a_{1};a_{2}) =-[x_{+}+x_{-}]_{a_{2}}+2[x_{+}]_{a_{1},a_{2}},\] \[\mathbf{B}_{2}(a_{1};a_{2}) =[x_{+}^{2}]_{a_{1},a_{2}}+[x_{+}\cdot x_{-}]_{a_{2}}-2[x_{+}+x_{- }]_{a_{2}}\cdot[x_{+}]_{a_{1},a_{2}},\] \[\mathbf{P}_{1}(a_{1};a_{2}) =-\Big{(}[x_{+}+x_{-}]_{a_{2}}X^{(a_{2})}+4Y^{(a_{2})}\Big{)},\] \[\mathbf{P}_{2}(a_{1};a_{2}) =2Y^{(a_{2})}([x_{+}+x_{-}]_{a_{2}}-2[x_{+}]_{a_{1},a_{2}})+X^{(a _{2})}(2[x_{+}\cdot x_{-}]_{a_{2}}-[x_{+}+x_{-}]_{a_{2}}[x_{+}]_{a_{1},a_{2}}),\] \[\mathbf{Q}_{0}(a_{1};a_{2}) =X^{(a_{2})},\] \[\mathbf{Q}_{1}(a_{1};a_{2}) =2Y^{(a_{2})}+[x_{+}]_{a_{1},a_{2}}\cdot X^{(a_{2})}. \tag{5.11}\]
**Note that labels \(a_{1},a_{2}\) are no longer symmetric in the above coefficients**. By equation (4.12), we can expand the integral (5.10) as follows:
\[\sum_{m_{2}=0}^{\infty}\int_{0}^{x_{a_{1},a_{2}}(\ell)}x^{m_{1}+m _{2}+D-n}\Big{\{}(\mathbf{0})_{m_{2}}(n;a_{1};a_{2};m_{1})\cdot\ _{2}F_{1}\left(\begin{array}{c}1,\frac{D-n+2}{2}\\ D-n+2\end{array}\Big{|}[x_{-}-x_{+}]_{a_{1},a_{2}}\cdot x\right)\] \[+(\mathbf{1})_{m_{2}}(n;a_{1};a_{2};m_{1})\cdot\ _{3}F_{2}\left( \begin{array}{c}1,\frac{D-n+2}{2},m_{1}+D-n+1\\ D-n+2,m_{1}+D-n+2\end{array}\Big{|}[x_{-}-x_{+}]_{a_{1},a_{2}}\cdot x\right) \Big{\}}dx, \tag{5.12}\]
where the expansion coefficients are
\[(\mathbf{1})_{m_{2}}(n;a_{1};a_{2},m_{1})= \frac{(D-n-1)\cdot\Big{(}\mathbf{P}_{1}(a_{1};a_{2})\cdot \mathbf{N}_{m_{2}-1}^{(1)}(a_{1};a_{2})+\mathbf{P}_{2}(a_{1};a_{2})\cdot \mathbf{N}_{m_{2}-2}^{(1)}(a_{1};a_{2})\Big{)}}{2(D-n+1+m_{1})},\] \[(\mathbf{0})_{m_{2}}(n;a_{1};a_{2};m_{1})= \mathbf{Q}_{0}(a_{1};a_{2})\cdot\mathbf{N}_{m_{2}}^{(0)}(a_{1};a _{2})+\mathbf{Q}_{1}(a_{1};a_{2})\cdot\mathbf{N}_{m_{2}-1}^{(0)}(a_{1};a_{2}). \tag{5.13}\]
The \(\mathbf{N}_{m^{\prime}}^{(1)}(a_{1};a_{2})\) and \(\mathbf{N}_{m^{\prime}}^{(0)}(a_{1};a_{2})\) come from the Taylor expansion of terms of the form \((1+A_{1}x+A_{2}x^{2})^{c_{1}}(1+B_{1}x+B_{2}x^{2})^{c_{2}}\), which are
\[(1+\mathbf{A}_{1}(a_{1};a_{2})x+\mathbf{A}_{2}(a_{1};a_{2})x^{2}) ^{-\frac{D-n}{2}}(1+\mathbf{B}_{1}(a_{1};a_{2})x+\mathbf{B}_{2}(a_{1};a_{2})x ^{2})^{\frac{D-n-3}{2}}\] \[= \sum_{m^{\prime}=0}^{\infty}\mathbf{N}_{m^{\prime}}^{(1)}(a_{1}; a_{2})\cdot x^{m^{\prime}},\] \[(1+\mathbf{A}_{1}(a_{1};a_{2})x+\mathbf{A}_{2}(a_{1};a_{2})x^{2}) ^{-\frac{D-n}{2}}(1+\mathbf{B}_{1}(a_{1};a_{2})x+\mathbf{B}_{2}(a_{1};a_{2})x ^{2})^{\frac{D-n-1}{2}}\] \[= \sum_{m^{\prime}=0}^{\infty}\mathbf{N}_{m^{\prime}}^{(0)}(a_{1}; a_{2})\cdot x^{m^{\prime}}. \tag{5.14}\]
By formula (4.12), the Taylor expansion coefficients are
\[\begin{split}\mathbf{N}^{(1)}_{m^{\prime}}(a_{1};a_{2})=&\sum_{l=0}^{m^{\prime}}\sum_{i=0}^{\lfloor\frac{l}{2}\rfloor}\sum_{j=0}^{\lfloor\frac{m^{\prime}-l}{2}\rfloor}\frac{(-\frac{D-n}{2}-l+i+1)_{(l-i)}(\frac{D-n-3}{2}-m^{\prime}+l+j+1)_{(m^{\prime}-l-j)}}{(l-2i)!\cdot i!\cdot(m^{\prime}-l-2j)!\cdot j!}\\ &\times\mathbf{A}_{1}(a_{1};a_{2})^{l-2i}\mathbf{A}_{2}(a_{1};a_{2})^{i}\mathbf{B}_{1}(a_{1};a_{2})^{m^{\prime}-l-2j}\mathbf{B}_{2}(a_{1};a_{2})^{j},\\ \mathbf{N}^{(0)}_{m^{\prime}}(a_{1};a_{2})=&\sum_{l=0}^{m^{\prime}}\sum_{i=0}^{\lfloor\frac{l}{2}\rfloor}\sum_{j=0}^{\lfloor\frac{m^{\prime}-l}{2}\rfloor}\frac{(-\frac{D-n}{2}-l+i+1)_{(l-i)}(\frac{D-n-1}{2}-m^{\prime}+l+j+1)_{(m^{\prime}-l-j)}}{(l-2i)!\cdot i!\cdot(m^{\prime}-l-2j)!\cdot j!}\\ &\times\mathbf{A}_{1}(a_{1};a_{2})^{l-2i}\mathbf{A}_{2}(a_{1};a_{2})^{i}\mathbf{B}_{1}(a_{1};a_{2})^{m^{\prime}-l-2j}\mathbf{B}_{2}(a_{1};a_{2})^{j}. \tag{5.15}\end{split}\]
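As with (4.12), the coefficients (5.15) are easy to validate numerically. A minimal sketch (assuming mpmath, with generic stand-in values for \(\mathbf{A}_{1,2}\), \(\mathbf{B}_{1,2}\) and the two exponents) compares \(\mathbf{N}^{(0)}_{m^{\prime}}\) against a direct expansion of the product (5.14):

```python
# Hedged sketch: verify the triple sum (5.15) for N^(0)_{m'} against a direct
# Taylor expansion of (1 + A1 x + A2 x^2)^c1 (1 + B1 x + B2 x^2)^c2.
from mpmath import mp, mpf, rf, factorial, taylor

mp.dps = 25
A1, A2, B1, B2 = mpf('0.3'), mpf('-0.8'), mpf('1.1'), mpf('0.25')
c1, c2 = mpf('-1.7'), mpf('0.9')   # stand-ins for -(D-n)/2 and (D-n-1)/2

def N0(mprime):
    total = mpf(0)
    for l in range(mprime + 1):
        for i in range(l//2 + 1):
            for j in range((mprime - l)//2 + 1):
                total += (rf(c1 - l + i + 1, l - i)
                          * rf(c2 - (mprime - l) + j + 1, mprime - l - j)
                          / (factorial(l - 2*i)*factorial(i)
                             * factorial(mprime - l - 2*j)*factorial(j))
                          * A1**(l - 2*i)*A2**i*B1**(mprime - l - 2*j)*B2**j)
    return total

direct = taylor(lambda x: (1 + A1*x + A2*x**2)**c1*(1 + B1*x + B2*x**2)**c2, 0, 5)
print([N0(m) for m in range(6)])
print(direct)  # agrees term by term
```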
**Again, we emphasize that the labels \(a_{1},a_{2}\) are not symmetric** in \(\mathbf{N}^{(1)}_{m^{\prime}}(a_{1};a_{2})\), \(\mathbf{N}^{(0)}_{m^{\prime}}(a_{1};a_{2})\) and \((\mathbf{1})_{m_{2}}(n;a_{1};a_{2};m_{1})\), \((\mathbf{0})_{m_{2}}(n;a_{1};a_{2};m_{1})\). By using the integral formula (4.6), we can calculate the integral (5.7). Eventually, by summing everything up, we obtain the final generation function of the \(n\)-gon to the \((n-2)\)-gon as
\[\begin{split}&\mathbf{GF}_{n\to n;\widehat{a_{1},a_{2}}}(\ell)\\ =&(1-x_{+}\cdot\ell)^{\frac{-2+D-n}{2}}(1-x_{-}\cdot\ell)^{\frac{-2+D-n}{2}}\\ &\times\sum_{m_{1},m_{2}=0}^{\infty}\Bigg\{\frac{1}{m_{1}+m_{2}+D-n+1}\cdot\frac{\ell^{m_{1}+m_{2}+2}}{(1-[x_{+}]_{a_{1},a_{2}}\cdot\ell)^{m_{1}+m_{2}+D-n+1}}\\ &\times\Big\{\left((\mathbf{1})_{m_{2}}(n;a_{1};a_{2};m_{1})[\mathbf{C}^{(1)}_{a_{1}}(n-1;m_{1})]_{a_{2}}+(\mathbf{1})_{m_{2}}(n;a_{2};a_{1};m_{1})[\mathbf{C}^{(1)}_{a_{2}}(n-1;m_{1})]_{a_{1}}\right)\\ &\times\ _{4}F_{3}\left(\begin{matrix}1,\frac{D-n+2}{2},m_{1}+D-n+1,m_{1}+m_{2}+D-n+1\\ D-n+2,m_{1}+D-n+2,m_{1}+m_{2}+D-n+2\end{matrix}\,\Big{|}\,\frac{[x_{-}-x_{+}]_{a_{1},a_{2}}\cdot\ell}{1-[x_{+}]_{a_{1},a_{2}}\cdot\ell}\right)\\ &+\left((\mathbf{0})_{m_{2}}(n;a_{1};a_{2};m_{1})[\mathbf{C}^{(1)}_{a_{1}}(n-1;m_{1})]_{a_{2}}+(\mathbf{0})_{m_{2}}(n;a_{2};a_{1};m_{1})[\mathbf{C}^{(1)}_{a_{2}}(n-1;m_{1})]_{a_{1}}\right)\\ &\times\ _{3}F_{2}\left(\begin{matrix}1,\frac{D-n+2}{2},m_{1}+m_{2}+D-n+1\\ D-n+2,m_{1}+m_{2}+D-n+2\end{matrix}\,\Big{|}\,\frac{[x_{-}-x_{+}]_{a_{1},a_{2}}\cdot\ell}{1-[x_{+}]_{a_{1},a_{2}}\cdot\ell}\right)\\ &+\left((\mathbf{1})_{m_{2}}(n;a_{1};a_{2};m_{1})[\mathbf{C}^{(2)}_{a_{1}}(n-1;m_{1})]_{a_{2}}+(\mathbf{1})_{m_{2}}(n;a_{2};a_{1};m_{1})[\mathbf{C}^{(2)}_{a_{2}}(n-1;m_{1})]_{a_{1}}\right)\\ &\times\ _{4}F_{3}\left(\begin{matrix}2,\frac{D-n+4}{2},m_{1}+D-n+1,m_{1}+m_{2}+D-n+1\\ D-n+3,m_{1}+D-n+2,m_{1}+m_{2}+D-n+2\end{matrix}\,\Big{|}\,\frac{[x_{-}-x_{+}]_{a_{1},a_{2}}\cdot\ell}{1-[x_{+}]_{a_{1},a_{2}}\cdot\ell}\right)\\ &+\left((\mathbf{0})_{m_{2}}(n;a_{1};a_{2};m_{1})[\mathbf{C}^{(2)}_{a_{1}}(n-1;m_{1})]_{a_{2}}+(\mathbf{0})_{m_{2}}(n;a_{2};a_{1};m_{1})[\mathbf{C}^{(2)}_{a_{2}}(n-1;m_{1})]_{a_{1}}\right)\\ &\times\ _{3}F_{2}\left(\begin{matrix}2,\frac{D-n+4}{2},m_{1}+m_{2}+D-n+1\\ D-n+3,m_{1}+m_{2}+D-n+2\end{matrix}\,\Big{|}\,\frac{[x_{-}-x_{+}]_{a_{1},a_{2}}\cdot\ell}{1-[x_{+}]_{a_{1},a_{2}}\cdot\ell}\right)\Big\}\Bigg\}. \tag{5.16}\end{split}\]
## 6 Generation function of \(n\)-gon to \((n-k)\)-gon
From (4.15) and (5.16), we find that \(\mathbf{GF}_{n\to n;\widehat{a_{1}},\widehat{a_{2}}}(\ell)\) has a functional form analogous to that of \(\mathbf{GF}_{n\to n;\widehat{a_{1}}}(\ell)\). This suggests that the generation function \(\mathbf{GF}_{n\to n;\widehat{\mathbf{I}}_{k}}(\ell)\) (where the label list \(\mathbf{I}_{k}=\{a_{1},a_{2},...,a_{k}\}\)) should also have the same functional form. In this section, we provide the form of \(\mathbf{GF}_{n\to n;\widehat{\mathbf{I}}_{k}}(\ell)\) directly. A proof by induction will be given in the next section to show that it does satisfy the recursion relation (2.16).
Before that, we introduce some notation for the generalized hypergeometric functions. First, we construct an array of \(k\) elements \(\{b_{1},b_{2},...,b_{k}\}\), where each \(b_{i}\) is either 0 or 1. For example, for \(k=4\), the array can be one of
\[\begin{split}&\{0,0,0,0\},\{0,0,0,1\},\{0,0,1,0\},\{0,0,1,1\},\{0,1,0,0\},\{0,1,0,1\},\{0,1,1,0\},\{0,1,1,1\},\\ &\{1,0,0,0\},\{1,0,0,1\},\{1,0,1,0\},\{1,0,1,1\},\{1,1,0,0\},\{1,1,0,1\},\{1,1,1,0\},\{1,1,1,1\}.\end{split} \tag{6.1}\]
Now we can construct two types of generalized hypergeometric functions based on the array as follows:
\[\begin{split}\mathbf{HG}_{1}(n,k;\{b_{1},\cdots,b_{k}\};z)=&\ _{(2+k)}F_{(1+k)}\left(\begin{matrix}1,\frac{D-n+k}{2},S_{1}-b_{1},S_{2}-b_{2},\cdots,S_{k}-b_{k}\\ D-n+k,S_{1},S_{2},\cdots,S_{k}\end{matrix}\,\Big{|}\,z\right),\\ \mathbf{HG}_{2}(n,k;\{b_{1},\cdots,b_{k}\};z)=&\ _{(2+k)}F_{(1+k)}\left(\begin{matrix}2,\frac{D-n+k+2}{2},S_{1}-b_{1},S_{2}-b_{2},\cdots,S_{k}-b_{k}\\ D-n+k+1,S_{1},S_{2},\cdots,S_{k}\end{matrix}\,\Big{|}\,z\right),\end{split} \tag{6.2}\]
where
\[S_{i}=\sum_{j=1}^{i}m_{j}+D-n+k. \tag{6.3}\]
For example,
\[\begin{split}&\mathbf{HG}_{1}(n,3;\{0,0,1\};z)\\ =&_{5}F_{4}\left(\begin{array}{l}1,\frac{D-n+3}{2},m_{1}+D-n+3,m_{1} +m_{2}+D-n+3,m_{1}+m_{2}+m_{3}+D-n+2\\ D-n+3,m_{1}+D-n+3,m_{1}+m_{2}+D-n+3,m_{1}+m_{2}+m_{3}+D-n+3\end{array}\bigg{|} z\right)\\ =&_{3}F_{2}\left(\begin{array}{l}1,\frac{D-n+3}{2},m_{1}+m_{2}+m_{3}+D-n+2 \\ D-n+3,m_{1}+m_{2}+m_{3}+D-n+3\end{array}\bigg{|}z\right),\end{split}\]
\[\begin{split}&\mathbf{HG}_{2}(n,3;\{1,0,1\};z)\\ =&\ _{5}F_{4}\left(\begin{matrix}2,\frac{D-n+5}{2},m_{1}+D-n+2,m_{1}+m_{2}+D-n+3,m_{1}+m_{2}+m_{3}+D-n+2\\ D-n+4,m_{1}+D-n+3,m_{1}+m_{2}+D-n+3,m_{1}+m_{2}+m_{3}+D-n+3\end{matrix}\,\Big{|}\,z\right)\\ =&\ _{4}F_{3}\left(\begin{matrix}2,\frac{D-n+5}{2},m_{1}+D-n+2,m_{1}+m_{2}+m_{3}+D-n+2\\ D-n+4,m_{1}+D-n+3,m_{1}+m_{2}+m_{3}+D-n+3\end{matrix}\,\Big{|}\,z\right). \tag{6.4}\end{split}\]
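To make the bookkeeping in (6.2)–(6.4) concrete, here is a minimal numerical sketch (assuming mpmath; the function name `HG1` and the chosen values of \(n\), \(k\), \(D\), \(m_{j}\) are ours, not the paper's) that builds \(\mathbf{HG}_{1}\) from an array \(\{b_{1},...,b_{k}\}\) and confirms the cancellation of parameter pairs when \(b_{i}=0\):

```python
# Hedged sketch of the notation (6.2)-(6.3): build HG_1 from the binary array
# {b_1,...,b_k}; when b_i = 0 the pair (S_i, S_i) cancels, as in (6.4).
from mpmath import mp, mpf, hyper

mp.dps = 20

def HG1(n, k, b, m, D, z):
    S = [sum(m[:i+1]) + D - n + k for i in range(k)]   # partial sums (6.3)
    upper = [1, (D - n + k)/2] + [S[i] - b[i] for i in range(k)]
    lower = [D - n + k] + S
    return hyper(upper, lower, z)

D, z = mpf('6.3'), mpf('0.1')
n, k, m = 5, 3, [1, 2, 0]
full = HG1(n, k, [0, 0, 1], m, D, z)
# b = {0,0,1}: the pairs (S_1,S_1) and (S_2,S_2) cancel, leaving a 3F2
S3 = sum(m) + D - n + k
reduced = hyper([1, (D - n + k)/2, S3 - 1], [D - n + k, S3], z)
print(full, reduced)   # the two agree
```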
Then the generation function \(\mathbf{GF}_{n\to n;\widehat{\mathbf{I}}_{k}}\) is
\[\mathbf{GF}_{n\to n;\widehat{\mathbf{I}}_{k}}(\ell)=(1-x_{+}\cdot\ell)^{\frac{-2+D-n}{2}}\cdot(1-x_{-}\cdot\ell)^{\frac{-2+D-n}{2}}\]
\[\times\sum_{m_{1},...,m_{k}=0}^{\infty}\Bigg{\{}\frac{1}{\sum_{i=1}^{k}m_{i}+D -n+k-1}\cdot\frac{\ell^{\sum_{i=1}^{k}m_{i}+k}}{(1-[x_{+}]_{\mathbf{I}_{k}}\cdot \ell)^{\sum_{i=1}^{k}m_{i}+D-n+k-1}}\] \[\times\sum_{\{a^{\prime}_{1},...,a^{\prime}_{k}\}\in\sigma(\mathbf{ I}_{k}),\ \ b_{1},b_{2},...,b_{k-1}=0}^{1}\mathbf{C}_{\{b_{1},...,b_{k-1}\}}^{(a^{\prime}_{1 },...,a^{\prime}_{k})}(n)\] \[\times\Big{\{}[\mathbf{C}_{a^{\prime}_{1}}^{(1)}(n-k+1;m_{1})]_{a ^{\prime}_{2},...,a^{\prime}_{k}}\cdot\mathbf{HG}_{1}(n,k;\{b_{1},b_{2},...,b_ {k-1},1\};\mathscr{W}_{\mathbf{I}_{k}}(\ell))\] \[\quad+[\mathbf{C}_{a^{\prime}_{1}}^{(2)}(n-k+1;m_{1})]_{a^{ \prime}_{2},...,a^{\prime}_{k}}\cdot\mathbf{HG}_{2}(n,k;\{b_{1},b_{2},...,b_ {k-1},1\};\mathscr{W}_{\mathbf{I}_{k}}(\ell))\Big{\}}\Bigg{\}}, \tag{6.5}\]
where
\[\mathscr{W}_{\mathbf{I}_{k}}(\ell)=\frac{[x_{-}-x_{+}]_{\mathbf{I}_{k}}\cdot \ell}{1-[x_{+}]_{\mathbf{I}_{k}}\cdot\ell}. \tag{6.6}\]
The first summation in the penultimate line of (6.5) runs over all permutations of \(\mathbf{I}_{k}=\{a_{1},a_{2},...,a_{k}\}\). For example, for \(k=3\), this summation runs over
\[\{a^{\prime}_{1},a^{\prime}_{2},a^{\prime}_{3}\}=\Big{\{}\{a_{1},a_{2},a_{3} \},\ \{a_{1},a_{3},a_{2}\},\ \{a_{2},a_{1},a_{3}\},\ \{a_{2},a_{3},a_{1}\},\ \{a_{3},a_{1},a_{2}\},\ \{a_{3},a_{2},a_{1}\}\Big{\}}. \tag{6.7}\]
It is important to note that the second summation on that line does **not** include \(b_{k}\) (a small sketch enumerating these index sets follows the next equation). For example, for \(k=4\), the summation excludes \(b_{4}\) and runs over
\[\{b_{1},b_{2},b_{3}\}=\Big{\{}\{0,0,0\},\{0,0,1\},\{0,1,0\},\{0,1,1\},\{1,0,0 \},\{1,0,1\},\{1,1,0\},\{1,1,1\}\Big{\}}. \tag{6.8}\]
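A tiny sketch of the index set being summed over, using Python's itertools (the label names are placeholders of ours):

```python
# Hedged sketch: enumerate the index combinations in (6.5) -- all permutations
# of I_k and all binary arrays (b_1,...,b_{k-1}); b_k is fixed to 1, not summed.
from itertools import permutations, product

I_k = ('a1', 'a2', 'a3')                     # k = 3
k = len(I_k)
terms = [(perm, bs) for perm in permutations(I_k)
         for bs in product((0, 1), repeat=k - 1)]
print(len(terms))   # 3! * 2^(3-1) = 24 combinations
```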
For a given permutation \((a^{\prime}_{1},a^{\prime}_{2}...,a^{\prime}_{k})\) and a fixed \(\{b_{1},b_{2},...,b_{k-1}\}\), the \(\mathbf{C}_{\{b_{1},...,b_{k-1}\}}^{(a^{\prime}_{1},...,a^{\prime}_{k})}(n)\) in the penultimate line is constructed as
\[\mathbf{C}_{\{b_{1},...,b_{k-1}\}}^{(a^{\prime}_{1},...,a^{\prime }_{k})}(n) \tag{6.9}\] \[= [(\mathbf{b}_{1})_{m_{2}}(n-k+2;1;a^{\prime}_{1};a^{\prime}_{2};m _{1})]_{a^{\prime}_{3},...,a^{\prime}_{k}}\] \[\cdot [(\mathbf{b}_{2})_{m_{3}}(n-k+3;2;\{a^{\prime}_{1},a^{\prime}_{2} \};a^{\prime}_{3};m_{1}+m_{2})]_{a^{\prime}_{4},...,a^{\prime}_{k}}\] \[\ldots\] \[\cdot [(\mathbf{b}_{i})_{m_{i+1}}(n-k+i+1;i;\{a^{\prime}_{1},...,a^{ \prime}_{i}\};a^{\prime}_{i+1};m_{1}+\cdots+m_{i})]_{a^{\prime}_{i+2},...,a^{ \prime}_{k}}\] \[\ldots\] \[\cdot (\mathbf{b}_{k-1})_{m_{k}}(n;k-1;\{a^{\prime}_{1},...,a^{\prime }_{k-1}\};a^{\prime}_{k};m_{1}+...+m_{k-1}).\]
For \(k=4\) and \(\{b_{1},b_{2},b_{3}\}=\{0,1,0\}\) as an example, the \(\mathbf{C}_{\{0,1,0\}}^{(a^{\prime}_{1},...,a^{\prime}_{4})}(n)\) is6
Footnote 6: Recall that the notation \([\Delta]_{\mathbf{b}}\) means we add a subscript \(\mathbf{b}\) to each term in \(\Delta\) of the form \((\overline{AB})\) or \((\overline{AB})_{\mathbf{a}}\), as explained in Section 2.
\[\mathbf{C}_{\{0,1,0\}}^{(a^{\prime}_{1},...,a^{\prime}_{4})}(n)=[(\mathbf{0})_{m_{2}}(n-2;1;a^{\prime}_{1};a^{\prime}_{2};m_{1})]_{a^{\prime}_{3},a^{\prime}_{4}}\cdot[(\mathbf{1})_{m_{3}}(n-1;2;\{a^{\prime}_{1},a^{\prime}_{2}\};a^{\prime}_{3};m_{1}+m_{2})]_{a^{\prime}_{4}}\cdot(\mathbf{0})_{m_{4}}(n;3;\{a^{\prime}_{1},a^{\prime}_{2},a^{\prime}_{3}\};a^{\prime}_{4};m_{1}+m_{2}+m_{3}). \tag{6.10}\]
There is a slight difference between the definitions of \(\mathbf{0}\) and \(\mathbf{1}\) here and the definitions (5.13) of the previous section: they include one more variable \(k^{\prime}\), and are defined as
\[\begin{split}(\mathbf{1})_{m^{\prime}}(n^{\prime};k^{\prime}; \mathbf{a};b;sum)=&\frac{(D-n^{\prime}-1)\cdot\left(\mathbf{P}_{1 }(\mathbf{a};b)\cdot\mathbf{N}_{m^{\prime}-1}^{(1)}(\mathbf{a};b)+\mathbf{P}_ {2}(\mathbf{a};b)\cdot\mathbf{N}_{m^{\prime}-2}^{(1)}(\mathbf{a};b)\right)}{2 (D-n^{\prime}+k^{\prime}+sum)},\\ (\mathbf{0})_{m^{\prime}}(n^{\prime};k^{\prime};\mathbf{a};b;sum)=& \mathbf{Q}_{0}(\mathbf{a};b)\cdot\mathbf{N}_{m^{\prime}}^{(0)}( \mathbf{a};b)+\mathbf{Q}_{1}(\mathbf{a};b)\cdot\mathbf{N}_{m^{\prime}-1}^{(0 )}(\mathbf{a};b).\end{split} \tag{6.11}\]
The variables \(n^{\prime}\), \(k^{\prime}\), \(m^{\prime}\), and \(sum\) must be non-negative integers. The variable \(\mathbf{a}\) can be either a single label or a list of labels, while \(b\) is always a single label. Upon setting \(m^{\prime}=m_{2}\), \(n^{\prime}=n\), \(k^{\prime}=1\), \(\mathbf{a}=a_{1}\), \(b=a_{2}\), and \(sum=m_{1}\), we revert to the form (5.13) of the previous section. The other coefficients \(\mathbf{P}_{1}(\mathbf{a};b)\), \(\mathbf{P}_{2}(\mathbf{a};b)\), \(\mathbf{Q}_{0}(\mathbf{a};b)\), \(\mathbf{Q}_{1}(\mathbf{a};b)\), \(\mathbf{N}_{m^{\prime}}(\mathbf{a};b)\) are defined by merely replacing all instances of the labels \(a_{1}\) and \(a_{2}\) in (5.11) and (5.15) with \(\mathbf{a}\) and \(b\), respectively. We have thus explained every term in \(\mathbf{GF}_{n\to n;\widehat{\mathbf{I}}_{k}}(\ell)\) (6.5). Furthermore, if we additionally define \(\mathbf{C}_{\emptyset}^{a_{1}}\equiv 1\) for \(k=1\), we find that (6.5) reproduces \(\mathbf{GF}_{n\to n;\widehat{a_{1}}}(\ell)\) (4.15) and \(\mathbf{GF}_{n\to n;\widehat{a_{1},a_{2}}}(\ell)\) (5.16).
### Analytic example
#### 6.1.1 Triangle to Tadpole
Now we present the reduction of a tensor triangle to a tadpole, \(\mathbf{GF}_{3\to 3;\widehat{2,3}}\), as an example to illustrate the analytic result. The generation function can be obtained from (6.5) by selecting \(n=3,k=2,a_{1}=2,a_{2}=3\) and
\[\begin{split} x_{\pm}=\frac{2\bigg{(}(\overline{VL})\pm\sqrt{( \overline{LL})R^{2}+(\overline{VL})^{2}-(\overline{LL})(\overline{VV})}\bigg{)} }{(\overline{LL})},\\ [x_{\pm}]_{2}=\frac{2\bigg{(}(\overline{VL})_{2}\pm\sqrt{( \overline{LL})_{2}R^{2}+(\overline{VL})_{2}^{2}-(\overline{LL})_{2}(\overline{ VV})_{2}}\bigg{)}}{(\overline{LL})_{2}},\\ [x_{\pm}]_{3}=\frac{2\bigg{(}(\overline{VL})_{3}\pm\sqrt{( \overline{LL})_{3}R^{2}+(\overline{VL})_{3}^{2}-(\overline{LL})_{3}(\overline{ VV})_{3}}\bigg{)}}{(\overline{LL})_{3}},\\ [x_{\pm}]_{2,3}=\frac{2\bigg{(}(\overline{VL})_{2,3}\pm\sqrt{( \overline{LL})_{2,3}R^{2}+(\overline{VL})_{2,3}^{2}-(\overline{LL})_{2,3}( \overline{VV})_{2,3}}\bigg{)}}{(\overline{LL})_{2,3}},\end{split} \tag{6.12}\]
\[\begin{split}X^{(2)}&=\left((\overline{H_{2}L})(\overline{VL})_{2}-(\overline{H_{2}V})(\overline{LL})_{2}\right)/(\overline{LL}),\\ X^{(3)}&=\left((\overline{H_{3}L})(\overline{VL})_{3}-(\overline{H_{3}V})(\overline{LL})_{3}\right)/(\overline{LL}),\\ [X^{(2)}]_{3}&=\left((\overline{H_{2}L})_{3}(\overline{VL})_{2,3}-(\overline{H_{2}V})_{3}(\overline{LL})_{2,3}\right)/(\overline{LL})_{3},\\ [X^{(3)}]_{2}&=\left((\overline{H_{3}L})_{2}(\overline{VL})_{3,2}-(\overline{H_{3}V})_{2}(\overline{LL})_{3,2}\right)/(\overline{LL})_{2},\\ Y^{(2)}&=\left((\overline{H_{2}L})R^{2}+(\overline{H_{2}V})(\overline{VL})_{2}-(\overline{H_{2}L})(\overline{VV})_{2}\right)/(\overline{LL}),\\ Y^{(3)}&=\left((\overline{H_{3}L})R^{2}+(\overline{H_{3}V})(\overline{VL})_{3}-(\overline{H_{3}L})(\overline{VV})_{3}\right)/(\overline{LL}),\\ [Y^{(2)}]_{3}&=\left((\overline{H_{2}L})_{3}R^{2}+(\overline{H_{2}V})_{3}(\overline{VL})_{2,3}-(\overline{H_{2}L})_{3}(\overline{VV})_{2,3}\right)/(\overline{LL})_{3},\\ [Y^{(3)}]_{2}&=\left((\overline{H_{3}L})_{2}R^{2}+(\overline{H_{3}V})_{2}(\overline{VL})_{3,2}-(\overline{H_{3}L})_{2}(\overline{VV})_{3,2}\right)/(\overline{LL})_{2},\end{split} \tag{6.13}\]
where
\[(\overline{LL})=L\cdot Q^{-1}\cdot L, \tag{6.14}\]
\[(\overline{VL})=V\cdot Q^{-1}\cdot L, \tag{6.15}\]
\[(\overline{VV})=V\cdot Q^{-1}\cdot V, \tag{6.16}\]
\[(\overline{LL})_{2}=L_{\widehat{2}}\cdot Q_{\widehat{2}\,\widehat{2}}^{-1}\cdot L_{\widehat{2}}=\begin{pmatrix}1&1\end{pmatrix}\begin{pmatrix}M_{1}^{2}&\frac{M_{1}^{2}+M_{3}^{2}-(q_{1}-q_{3})^{2}}{2}\\ \frac{M_{3}^{2}+M_{1}^{2}-(q_{3}-q_{1})^{2}}{2}&M_{3}^{2}\end{pmatrix}^{-1}\begin{pmatrix}1\\ 1\end{pmatrix}, \tag{6.17}\]
\[(\overline{VL})_{2}=V_{\widehat{2}}\cdot Q_{\widehat{2}\,\widehat{2}}^{-1}\cdot L_{\widehat{2}}=\begin{pmatrix}R\cdot q_{1}&R\cdot q_{3}\end{pmatrix}\begin{pmatrix}M_{1}^{2}&\frac{M_{1}^{2}+M_{3}^{2}-(q_{1}-q_{3})^{2}}{2}\\ \frac{M_{3}^{2}+M_{1}^{2}-(q_{3}-q_{1})^{2}}{2}&M_{3}^{2}\end{pmatrix}^{-1}\begin{pmatrix}1\\ 1\end{pmatrix}, \tag{6.18}\]
\[(\overline{VV})_{2}=V_{\widehat{2}}\cdot Q_{\widehat{2}\,\widehat{2}}^{-1}\cdot V_{\widehat{2}}=\begin{pmatrix}R\cdot q_{1}&R\cdot q_{3}\end{pmatrix}\begin{pmatrix}M_{1}^{2}&\frac{M_{1}^{2}+M_{3}^{2}-(q_{1}-q_{3})^{2}}{2}\\ \frac{M_{3}^{2}+M_{1}^{2}-(q_{3}-q_{1})^{2}}{2}&M_{3}^{2}\end{pmatrix}^{-1}\begin{pmatrix}R\cdot q_{1}\\ R\cdot q_{3}\end{pmatrix}, \tag{6.19}\]
\[(\overline{LL})_{3}=L_{\widehat{3}}\cdot Q_{\widehat{3}\,\widehat{3}}^{-1}\cdot L_{\widehat{3}}=\begin{pmatrix}1&1\end{pmatrix}\begin{pmatrix}M_{1}^{2}&\frac{M_{1}^{2}+M_{2}^{2}-(q_{1}-q_{2})^{2}}{2}\\ \frac{M_{2}^{2}+M_{1}^{2}-(q_{2}-q_{1})^{2}}{2}&M_{2}^{2}\end{pmatrix}^{-1}\begin{pmatrix}1\\ 1\end{pmatrix}, \tag{6.20}\]
\[(\overline{VL})_{3}=V_{\widehat{3}}\cdot Q_{\widehat{3}\,\widehat{3}}^{-1}\cdot L_{\widehat{3}}=\begin{pmatrix}R\cdot q_{1}&R\cdot q_{2}\end{pmatrix}\begin{pmatrix}M_{1}^{2}&\frac{M_{1}^{2}+M_{2}^{2}-(q_{1}-q_{2})^{2}}{2}\\ \frac{M_{2}^{2}+M_{1}^{2}-(q_{2}-q_{1})^{2}}{2}&M_{2}^{2}\end{pmatrix}^{-1}\begin{pmatrix}1\\ 1\end{pmatrix}, \tag{6.21}\]
\[(\overline{VV})_{3}=V_{\widehat{3}}\cdot Q_{\widehat{3}\,\widehat{3}}^{-1}\cdot V_{\widehat{3}}=\begin{pmatrix}R\cdot q_{1}&R\cdot q_{2}\end{pmatrix}\begin{pmatrix}M_{1}^{2}&\frac{M_{1}^{2}+M_{2}^{2}-(q_{1}-q_{2})^{2}}{2}\\ \frac{M_{2}^{2}+M_{1}^{2}-(q_{2}-q_{1})^{2}}{2}&M_{2}^{2}\end{pmatrix}^{-1}\begin{pmatrix}R\cdot q_{1}\\ R\cdot q_{2}\end{pmatrix}, \tag{6.22}\]
\[(\overline{LL})_{2,3}=L_{\widehat{2},\widehat{3}}\cdot Q_{\widehat{2},\, \widehat{3}}^{-1}\cdot L_{\widehat{2},\widehat{3}}=1\cdot(M_{1}^{2})^{-1}\cdot 1, \tag{6.23}\]
\[(\overline{VL})_{2,3}=V_{\widehat{2},\widehat{3}}\cdot Q_{\widehat{2},\, \widehat{3}}^{-1}\cdot L_{\widehat{2},\widehat{3}}=(R\cdot q_{1})\cdot(M_{1} ^{2})^{-1}\cdot 1, \tag{6.24}\]
\[(\overline{VV})_{2,3}=V_{\widehat{2},\widehat{3}}\cdot Q_{\widehat{2},\, \widehat{3}}^{-1}\cdot V_{\widehat{2},\widehat{3}}=(R\cdot q_{1})\cdot(M_{1} ^{2})^{-1}\cdot(R\cdot q_{1}), \tag{6.25}\]
\[(\overline{H_{2}L})=H_{2}\cdot Q^{-1}\cdot L=\begin{pmatrix}0&1&0\end{pmatrix}\begin{pmatrix}M_{1}^{2}&\frac{M_{1}^{2}+M_{2}^{2}-(q_{1}-q_{2})^{2}}{2}&\frac{M_{1}^{2}+M_{3}^{2}-(q_{1}-q_{3})^{2}}{2}\\ \frac{M_{2}^{2}+M_{1}^{2}-(q_{2}-q_{1})^{2}}{2}&M_{2}^{2}&\frac{M_{2}^{2}+M_{3}^{2}-(q_{2}-q_{3})^{2}}{2}\\ \frac{M_{3}^{2}+M_{1}^{2}-(q_{3}-q_{1})^{2}}{2}&\frac{M_{3}^{2}+M_{2}^{2}-(q_{3}-q_{2})^{2}}{2}&M_{3}^{2}\end{pmatrix}^{-1}\begin{pmatrix}1\\ 1\\ 1\end{pmatrix}, \tag{6.26}\]
\[(\overline{H_{2}L})_{3}=(H_{2})_{\widehat{3}}\cdot Q_{\widehat{3}}^{-1} \cdot L_{\widehat{3}}=\begin{pmatrix}0&1\end{pmatrix}\begin{pmatrix}M_{1}^{2}& \frac{M_{1}^{2}+M_{2}^{2}-(q_{1}-q_{2})^{2}}{2}\\ \frac{M_{2}^{2}+M_{1}^{2}-(q_{2}-q_{1})^{2}}{2}&M_{2}^{2}\end{pmatrix}^{-1} \begin{pmatrix}1\\ 1\end{pmatrix}, \tag{6.27}\]
\[(\overline{H_{2}V})=H_{2}\cdot Q^{-1}\cdot V=\begin{pmatrix}0&1&0\end{pmatrix}\begin{pmatrix}M_{1}^{2}&\frac{M_{1}^{2}+M_{2}^{2}-(q_{1}-q_{2})^{2}}{2}&\frac{M_{1}^{2}+M_{3}^{2}-(q_{1}-q_{3})^{2}}{2}\\ \frac{M_{2}^{2}+M_{1}^{2}-(q_{2}-q_{1})^{2}}{2}&M_{2}^{2}&\frac{M_{2}^{2}+M_{3}^{2}-(q_{2}-q_{3})^{2}}{2}\\ \frac{M_{3}^{2}+M_{1}^{2}-(q_{3}-q_{1})^{2}}{2}&\frac{M_{3}^{2}+M_{2}^{2}-(q_{3}-q_{2})^{2}}{2}&M_{3}^{2}\end{pmatrix}^{-1}\begin{pmatrix}R\cdot q_{1}\\ R\cdot q_{2}\\ R\cdot q_{3}\end{pmatrix}, \tag{6.28}\]
\[(\overline{H_{2}V})_{3}=(H_{2})_{\widehat{3}}\cdot Q_{\widehat{3}\,\widehat{3}}^{-1}\cdot V_{\widehat{3}}=\begin{pmatrix}0&1\end{pmatrix}\begin{pmatrix}M_{1}^{2}&\frac{M_{1}^{2}+M_{2}^{2}-(q_{1}-q_{2})^{2}}{2}\\ \frac{M_{2}^{2}+M_{1}^{2}-(q_{2}-q_{1})^{2}}{2}&M_{2}^{2}\end{pmatrix}^{-1}\begin{pmatrix}R\cdot q_{1}\\ R\cdot q_{2}\end{pmatrix}, \tag{6.29}\]
and the remaining contractions \((\overline{H_{3}L})\), \((\overline{H_{3}L})_{2}\), \((\overline{H_{3}V})\), and \((\overline{H_{3}V})_{2}\) appearing in (6.13) are defined in complete analogy, with \(H_{3}=\begin{pmatrix}0&0&1\end{pmatrix}\) and with label 2 removed instead of label 3.
Expanding the generation function as a series of \(\ell\), we can obtain:
\[\mathbf{GF}_{3\to 3;\widehat{2,3}}(\ell)=\sum_{r=0}^{\infty}\ell^{r}\cdot C_{3\to 3;\widehat{2,3}}^{(r)}, \tag{6.34}\]
where \(C_{3\to 3;\widehat{2,3}}^{(r)}\) are the reduction coefficients of the tensor triangle
\[I_{3}^{(r)}=\int\frac{d^{D}l}{i\pi^{D/2}}\frac{(2R\cdot l)^{r}}{((l-q_{1})^{2}-M_{1}^{2})((l-q_{2})^{2}-M_{2}^{2})((l-q_{3})^{2}-M_{3}^{2})}, \tag{6.35}\]
to the master integral
\[I_{1}^{(0)}=\int\frac{d^{D}l}{i\pi^{D/2}}\frac{1}{(l-q_{1})^{2}-M_{1}^{2}}. \tag{6.36}\]
Moreover, we list the first two non-zero orders in the expansion:
\[\begin{split}C_{3\to 3;\widehat{2,3}}^{(2)}=&\ X^{(2)}\cdot[X^{(3)}]_{2}+X^{(3)}\cdot[X^{(2)}]_{3},\\ C_{3\to 3;\widehat{2,3}}^{(3)}=&\ \frac{1}{2(D-1)}\Bigg\{D\left(X^{(2)}\cdot[X^{(3)}]_{2}\cdot[x_{+}+x_{-}]_{2}+X^{(3)}\cdot[X^{(2)}]_{3}\cdot[x_{+}+x_{-}]_{3}\right)\\ &+\left(X^{(2)}\cdot[X^{(3)}]_{2}+X^{(3)}\cdot[X^{(2)}]_{3}\right)\left((D-1)\cdot[x_{+}+x_{-}]_{2,3}+(D+1)\cdot(x_{+}+x_{-})\right)\\ &+8\left(Y^{(2)}\cdot[X^{(3)}]_{2}+Y^{(3)}\cdot[X^{(2)}]_{3}\right)+4\left(X^{(2)}\cdot[Y^{(3)}]_{2}+X^{(3)}\cdot[Y^{(2)}]_{3}\right)\Bigg\}. \tag{6.37}\end{split}\]
#### 6.1.2 Reduction Coefficients of \(n\)-gon to \((n-k)\)-gon
In addition, for the general reduction of an \(n-\)gon to an \((n-k)\)-gon for \(k\geq 1\), we expand
\[\mathbf{GF}_{n\to n;\widehat{\mathbf{I}}_{k}}=\sum_{r=0}^{\infty}\ell^{r} \cdot C_{n\to n;\widehat{\mathbf{I}}_{k}}^{(r)}. \tag{6.38}\]
The reduction coefficients are
\[C_{n\to n;\widehat{\mathbf{I}}_{k}}^{(r)}= \sum_{l_{1}+l_{2}+l_{3}+\sum_{i=1}^{k}m_{i}+k=r}\Bigg{\{}\mathbf{ N}_{1}^{(l_{1})}\Bigg{(}\sum_{m_{1},...,m_{k}=0}^{\sum_{i=1}^{k}m_{i}+k\leq r }\mathbf{N}_{2}^{(l_{2})}(m_{1},...,m_{k}) \tag{6.39}\] \[\times\sum_{\{a_{1}^{\prime},...,a_{k}^{\prime}\}\in\sigma( \mathbf{I}_{k}),\;b_{1},b_{2},...,b_{k-1}=0}\mathbf{C}_{\{b_{1},...,b_{k-1}\} }^{(a_{1}^{\prime},...,a_{k}^{\prime})}(n)\] \[\times \Big{(}[\mathbf{C}_{a_{1}^{\prime}}^{(1)}(n-k+1;m_{1})]_{a_{2}^{ \prime},...,a_{k}^{\prime}}\cdot\mathbf{M}_{1;\{b_{1},b_{2},...,b_{k-1}\}}^{( l_{3})}(m_{1},...,m_{k})\] \[+[\mathbf{C}_{a_{1}^{\prime}}^{(2)}(n-k+1;m_{1})]_{a_{2}^{\prime},...,a_{k}^{\prime}}\cdot\mathbf{M}_{2;\{b_{1},b_{2},...,b_{k-1}\}}^{(l_{3})}( m_{1},...,m_{k})\Big{)}\Bigg{)}\Bigg{\}}.\]
The sources of those coefficients are shown as follows:
* \(\mathbf{N}_{1}^{(l_{1})}\) comes from \[(1-x_{+}\cdot\ell)^{\frac{-2+D-n}{2}}(1-x_{-}\cdot\ell)^{\frac{-2+ D-n}{2}}\] (6.40) \[= \sum_{l_{1}=0}^{\infty}\Bigg{(}\sum_{i=0}^{l_{1}}\frac{(-x_{+})^ {i}(\frac{-2+D-n}{2}-i+1)_{i}}{i!}\cdot\frac{(-x_{-})^{l_{1}-i}(\frac{-2+D-n}{ 2}-(l_{1}-i)+1)_{(l_{1}-i)}}{(l_{1}-i)!}\Bigg{)}\ell^{l_{1}}\] \[= \sum_{l_{1}=0}^{\infty}\mathbf{N}_{1}^{(l_{1})}\cdot\ell^{l_{1}}.\]
* \(\mathbf{N}_{2}^{(l_{2})}(m_{1},...,m_{k})\) comes from \[\frac{1}{\sum_{i=1}^{k}m_{i}+D-n+k-1}\frac{\ell^{\sum_{i=1}^{k}m_{i }+k}}{(1-[x_{+}]_{\mathbf{I}_{k}}\cdot\ell)^{\sum_{i=1}^{k}m_{i}+D-n+k-1}}\] (6.41) \[= \sum_{l_{2}=0}^{\infty}\left(\frac{(-[x_{+}]_{\mathbf{I}_{k}})^{ l_{2}}\left(-(\sum_{i=1}^{k}m_{i}+D-n+k-1)-l_{2}+1\right)_{l_{2}}}{\left(\sum_{i=1}^{ k}m_{i}+D-n+k-1\right)\cdot l_{2}!}\right)\cdot\ell^{l_{2}+\sum_{i=1}^{k}m_{i}+k}\] (6.42) \[= \sum_{l_{2}=0}^{\infty}\mathbf{N}_{2}^{(l_{2})}(m_{1},...,m_{k}) \cdot\ell^{l_{2}+\sum_{i=1}^{k}m_{i}+k}.\] (6.43)
* By (3.13), \(\mathbf{M}_{1;\{b_{1},b_{2},...,b_{k-1}\}}^{(l_{3})}(m_{1},...,m_{k})\) and \(\mathbf{M}_{2;\{b_{1},b_{2},...,b_{k-1}\}}^{(l_{3})}(m_{1},...,m_{k})\) come from \[\mathbf{HG}_{1}(n,k;\{b_{1},b_{2},...,b_{k-1},1\};\mathscr{W}_{ \mathbf{I}_{k}}(\ell))\] \[= 1+\sum_{l_{3}=1}^{\infty}\Bigg{(}\sum_{i=1}^{l_{3}}\frac{ \mathbf{C}_{l_{3}-1}^{i}(\frac{D-n+k}{2})_{i}([x_{-}-x_{+}]_{\mathbf{I}_{k}}) ^{i}([x_{+}]_{\mathbf{I}_{k}})^{l_{3}-i}}{(D-n+k)_{i}}\cdot\mathscr{S}_{i}(\{b _{1},...,b_{k-1}\})\Bigg{)}\ell^{l_{3}}\] \[= \sum_{l_{3}=0}^{\infty}\Big{(}\mathbf{M}_{1;\{b_{1},b_{2},...,b_ {k-1}\}}^{(l_{3})}(m_{1},...,m_{k})\Big{)}\cdot\ell^{l_{3}},\] (6.44) \[\mathbf{HG}_{2}(n,k;\{b_{1},b_{2},...,b_{k-1},1\};\mathscr{W}_{ \mathbf{I}_{k}}(\ell))\] \[= 1+\sum_{l_{3}=1}^{\infty}\Bigg{(}\sum_{i=1}^{l_{3}}\frac{ \mathbf{C}_{l_{3}-1}^{i}(\frac{D-n+k+2}{2})_{i}([x_{-}-x_{+}]_{\mathbf{I}_{k}} )^{i}([x_{+}]_{\mathbf{I}_{k}})^{l_{3}-i}}{(D-n+k+1)_{i}}\cdot\mathscr{S}_{i}(\{ b_{1},...,b_{k-1}\})\Bigg{)}\ell^{l_{3}}\] \[= \sum_{l_{3}=0}^{\infty}\Big{(}\mathbf{M}_{2;\{b_{1},b_{2},...,b_ {k-1}\}}^{(l_{3})}(m_{1},...,m_{k})\Big{)}\cdot\ell^{l_{3}},\] (6.45) where the factor \(\mathscr{S}_{i}(\{b_{1},...,b_{k-1}\})\) are defined as \[\mathscr{S}_{i}(\{b_{1},...,b_{k-1}\})\] \[= \frac{\left(\prod_{\alpha=1}^{k-1}(\sum_{\beta=1}^{\alpha}m_{ \beta}+D-n+k-b_{\alpha})_{i}\right)\left(\sum_{\beta=1}^{k}m_{\beta}+D-n+k-1 \right)_{i}}{\left(\prod_{\alpha=1}^{k-1}(\sum_{\beta=1}^{\alpha}m_{\beta}+D-n +k)_{i}\right)\left(\sum_{\beta=1}^{k}m_{\beta}+D-n+k\right)_{i}}.\] (6.46)
## 7 Proof
In this section, we provide an inductive proof in the parameter \(k\) to verify the correctness of (6.5). The main methodology echoes the computation of \(\mathbf{GF}_{n\to n;\widehat{a_{1},a_{2}}}(\ell)\). Suppose that expression (6.5) holds for \(k\). We can evaluate the generation function for the reduction of an \(n\)-gon to an \((n-(k+1))\)-gon by solving the differential equation (2.16). After writing down the general solution, we divide the integral into several parts according to the generalized hypergeometric functions. In each integration step, we first change the integration variable, then Taylor-expand part of the integrand, and finally, by selecting the suitable integration formula (4.6), we affirm that the statement (6.5) holds true for \(k+1\). Readers who are already familiar with this process may choose to skip this section. The derivative and integration rules that we employ during the proof are outlined in (5.8) and (4.6). In the context of (6.2), these two rules become:
\[\begin{split}&\frac{d}{dz}\mathbf{HG}_{l}(n-1;k;\{b_{1},...,b_{k-1},1\};z)\\ =&\frac{\sum_{j=1}^{k}m_{j}+D-n+k}{z}\Big(\mathbf{HG}_{l}(n-1;k;\{b_{1},...,b_{k-1},0\};z)-\mathbf{HG}_{l}(n-1;k;\{b_{1},...,b_{k-1},1\};z)\Big),\\ &\text{for }l=1,2,\end{split} \tag{7.1}\]
and
\[\begin{split}&\int dz\ z^{\sum_{j=1}^{k+1}m_{j}+D-n+k-1}\cdot\mathbf{HG}_{l}(n-1;k;\{b_{1},...,b_{k-1},b_{k}\};h\cdot z)\\ =&\frac{z^{\sum_{j=1}^{k+1}m_{j}+D-n+k}}{\sum_{j=1}^{k+1}m_{j}+D-n+k}\cdot\mathbf{HG}_{l}(n;k+1;\{b_{1},...,b_{k-1},b_{k},1\};h\cdot z),\ \text{for }l=1,2.\end{split} \tag{7.2}\]
Suppose the generation function of the \(n\)-gon to the \((n-k)\)-gon, \(\mathbf{GF}_{n\to n;\widehat{\mathbf{I}}_{k}}(\ell)\), satisfies (6.5). Then the recursive relation for \(\mathbf{GF}_{n\to n;\widehat{\mathbf{I}}_{k+1}}(\ell)\) with \(\mathbf{I}_{k+1}=\{a_{1},a_{2},...,a_{k+1}\}\) is
\[\begin{split}&\left((D-n-1)-2(D-n)\cdot\frac{(\overline{VL})}{(\overline{LL})}\ell-4\cdot\frac{R^{2}-(\overline{VV})}{(\overline{LL})}\ell^{2}\right)\mathbf{GF}_{n\to n;\widehat{\mathbf{I}}_{k+1}}(\ell)\\ +&\left(\ell-4\cdot\frac{(\overline{VL})}{(\overline{LL})}\ell^{2}-4\cdot\frac{R^{2}-(\overline{VV})}{(\overline{LL})}\ell^{3}\right)\mathbf{GF}^{\prime}_{n\to n;\widehat{\mathbf{I}}_{k+1}}(\ell)\\ =&\sum_{a_{i},i=1}^{k+1}\Bigg\{2Y^{(a_{i})}\left(\ell^{3}\cdot\mathbf{GF}^{\prime}_{n;\widehat{a_{i}}\to n;\widehat{\mathbf{I}}_{k+1}}(\ell)+\ell^{2}\cdot\mathbf{GF}_{n;\widehat{a_{i}}\to n;\widehat{\mathbf{I}}_{k+1}}(\ell)\right)\\ +&X^{(a_{i})}\left((D-n)\ell\cdot\mathbf{GF}_{n;\widehat{a_{i}}\to n;\widehat{\mathbf{I}}_{k+1}}(\ell)+\ell^{2}\cdot\mathbf{GF}^{\prime}_{n;\widehat{a_{i}}\to n;\widehat{\mathbf{I}}_{k+1}}(\ell)\right)\Bigg\}.\end{split} \tag{7.3}\]
The general solution is
\[\begin{split}&\ell^{1-D+n}(1-x_{+}\cdot\ell)^{\frac{-2+D-n}{2}}(1-x_{-}\cdot\ell)^{\frac{-2+D-n}{2}}\\ \times&\Bigg\{C_{1}+\int_{0}^{\ell}\iota^{-1+D-n}(1-x_{+}\cdot\iota)^{\frac{n-D}{2}}(1-x_{-}\cdot\iota)^{\frac{n-D}{2}}\\ &\times\sum_{a_{i},i=1}^{k+1}\Big\{2Y^{(a_{i})}\left(\iota^{2}\cdot\mathbf{GF}^{\prime}_{n;\widehat{a_{i}}\to n;\widehat{\mathbf{I}}_{k+1}}(\iota)+\iota\cdot\mathbf{GF}_{n;\widehat{a_{i}}\to n;\widehat{\mathbf{I}}_{k+1}}(\iota)\right)\\ +&X^{(a_{i})}\left((D-n)\cdot\mathbf{GF}_{n;\widehat{a_{i}}\to n;\widehat{\mathbf{I}}_{k+1}}(\iota)+\iota\cdot\mathbf{GF}^{\prime}_{n;\widehat{a_{i}}\to n;\widehat{\mathbf{I}}_{k+1}}(\iota)\right)\Big\}d\iota\Bigg\}. \tag{7.4}\end{split}\]
The undetermined constant \(C_{1}\) is determined to be zero, as the generation function must be a Taylor series in \(\ell\). Here \(\mathbf{GF}_{n;\widehat{a_{i}}\to n;\widehat{\mathbf{I}}_{k+1}}(\ell)=[\mathbf{GF}_{n-1\to n-1;\widehat{\mathbf{I}_{k+1}/a_{i}}}(\ell)]_{a_{i}}\). We split it into
\[\begin{split}\mathbf{GF}_{n;\widehat{a_{i}}\to n;\widehat{\mathbf{I}}_{k+1}}(\ell)&=\sum_{m_{1},\ldots,m_{k}=0}^{\infty}\sum_{\{a_{1}^{\prime},\ldots,a_{k}^{\prime}\}\in\sigma(\mathbf{I}_{k+1}/a_{i})}\sum_{b_{1},\ldots,b_{k-1}=0}^{1}[\mathbf{C}_{\{b_{1},\ldots,b_{k-1}\}}^{(a_{1}^{\prime},\ldots,a_{k}^{\prime})}(n-1)]_{a_{i}}\\ &\times\left(\sum_{l=1,2}\left[\mathbf{C}_{a_{1}^{\prime}}^{(l)}(n-k;m_{1})\right]_{a_{2}^{\prime},\ldots,a_{k}^{\prime},a_{i}}\cdot\mathbf{GF}_{n;\widehat{a_{i}}\to n;\widehat{\mathbf{I}}_{k+1}}^{(l);m_{1},\ldots,m_{k};b_{1},\ldots,b_{k-1}}(\ell)\right),\end{split} \tag{7.5}\]
where
\[\begin{split}&\mathbf{GF}_{n;\widehat{a_{i}}\to n;\widehat{\mathbf{I}}_{k+1}}^{(l);m_{1},\ldots,m_{k};b_{1},\ldots,b_{k-1}}(\ell)=(1-[x_{+}]_{a_{i}}\cdot\ell)^{\frac{-1+D-n}{2}}(1-[x_{-}]_{a_{i}}\cdot\ell)^{\frac{-1+D-n}{2}}\\ &\times\Bigg\{\frac{1}{\sum_{i=1}^{k}m_{i}+D-n+k}\cdot\frac{\ell^{\sum_{i=1}^{k}m_{i}+k}}{(1-[x_{+}]_{\mathbf{I}_{k+1}}\cdot\ell)^{\sum_{i=1}^{k}m_{i}+D-n+k}}\\ &\times\mathbf{HG}_{l}(n-1,k;\{b_{1},b_{2},...,b_{k-1},1\};\mathscr{W}_{\mathbf{I}_{k+1}}(\ell))\Bigg\},\text{ for }l=1,2.\end{split} \tag{7.6}\]
Since the coefficients \([\mathbf{C}_{\{b_{1},\ldots,b_{k-1}\}}^{(a_{1}^{\prime},\ldots,a_{k}^{\prime} )}(n-1)]_{a_{i}}\) and \([C_{a_{1}^{\prime}}^{(1)/(2)}(n-k;m_{1})]_{a_{2}^{\prime},\ldots,a_{k}^{\prime },a_{i}}\) in (7.5) are independent of \(\ell\), they can be pulled out of the integral. Then solution (7.4) can be separated into
\[\begin{split}&(1-x_{+}\cdot\ell)^{\frac{-2+D-n}{2}}(1-x_{-}\cdot\ell)^{\frac{-2+D-n}{2}}\\ &\times\sum_{l=1}^{2}\sum_{a_{i},i=1}^{k+1}\sum_{m_{1},\ldots,m_{k}=0}^{\infty}\sum_{\{a_{1}^{\prime},\ldots,a_{k}^{\prime}\}\in\sigma(\mathbf{I}_{k+1}/a_{i})}\sum_{b_{1},\ldots,b_{k-1}=0}^{1}[\mathbf{C}_{\{b_{1},\ldots,b_{k-1}\}}^{(a_{1}^{\prime},\ldots,a_{k}^{\prime})}(n-1)]_{a_{i}}[\mathbf{C}_{a_{1}^{\prime}}^{(l)}(n-k;m_{1})]_{a_{2}^{\prime},\ldots,a_{k}^{\prime},a_{i}}\\ &\times\Bigg\{\ell^{1-D+n}\int_{0}^{\ell}\iota^{-1+D-n}(1-x_{+}\cdot\iota)^{\frac{n-D}{2}}(1-x_{-}\cdot\iota)^{\frac{n-D}{2}}\\ &\quad\times\Big\{2Y^{(a_{i})}\left(\iota^{2}\cdot\mathbf{GF}_{n;\widehat{a_{i}}\to n;\widehat{\mathbf{I}}_{k+1}}^{(l);m_{1},\ldots,m_{k};b_{1},\ldots,b_{k-1}\,\prime}(\iota)+\iota\cdot\mathbf{GF}_{n;\widehat{a_{i}}\to n;\widehat{\mathbf{I}}_{k+1}}^{(l);m_{1},\ldots,m_{k};b_{1},\ldots,b_{k-1}}(\iota)\right)\\ &\quad+X^{(a_{i})}\left((D-n)\cdot\mathbf{GF}_{n;\widehat{a_{i}}\to n;\widehat{\mathbf{I}}_{k+1}}^{(l);m_{1},\ldots,m_{k};b_{1},\ldots,b_{k-1}}(\iota)+\iota\cdot\mathbf{GF}_{n;\widehat{a_{i}}\to n;\widehat{\mathbf{I}}_{k+1}}^{(l);m_{1},\ldots,m_{k};b_{1},\ldots,b_{k-1}\,\prime}(\iota)\right)\Big\}d\iota\Bigg\}. \tag{7.7}\end{split}\]
Next we focus on the part inside the curly braces. After changing the integration variable \(\iota\) to \(x_{\mathbf{I}_{k+1}}(\iota)\):
\[x_{\mathbf{I}_{k+1}}(\iota)=\frac{\iota}{1-[x_{+}]_{\mathbf{I}_{k+1}}\cdot\iota}, \tag{7.8}\]
and applying the derivative formula (7.1), the terms inside the big curly braces of (7.7) become
\[\begin{split}&\ell^{1-D+n}\int_{0}^{x_{\mathbf{I}_{k+1}}(\ell)}dx\ x^{\sum_{j=1}^{k+1}m_{j}+D-n+k-1}\\ &\times\Bigg\{\Big\{\big(\mathbf{Q}_{0}(\{a^{\prime}_{1},...,a^{\prime}_{k}\};a_{i})+\mathbf{Q}_{1}(\{a^{\prime}_{1},...,a^{\prime}_{k}\};a_{i})\cdot x\big)\\ &\quad\times\big(1+\mathbf{A}_{1}(\{a^{\prime}_{1},...,a^{\prime}_{k}\};a_{i})x+\mathbf{A}_{2}(\{a^{\prime}_{1},...,a^{\prime}_{k}\};a_{i})x^{2}\big)^{-\frac{D-n}{2}}\\ &\quad\times\big(1+\mathbf{B}_{1}(\{a^{\prime}_{1},...,a^{\prime}_{k}\};a_{i})x+\mathbf{B}_{2}(\{a^{\prime}_{1},...,a^{\prime}_{k}\};a_{i})x^{2}\big)^{\frac{D-n-1}{2}}\\ &\quad\times\mathbf{HG}_{l}(n-1;k;\{b_{1},b_{2},...,b_{k-1},0\};[x_{-}-x_{+}]_{\mathbf{I}_{k+1}}\cdot x)\Big\} \tag{7.9}\\ &+\Big\{\frac{(D-n-1)\big(\mathbf{P}_{1}(\{a^{\prime}_{1},...,a^{\prime}_{k}\};a_{i})x+\mathbf{P}_{2}(\{a^{\prime}_{1},...,a^{\prime}_{k}\};a_{i})x^{2}\big)}{2(D-n+k+m_{1}+m_{2}+\cdots+m_{k})}\\ &\quad\times\big(1+\mathbf{A}_{1}(\{a^{\prime}_{1},...,a^{\prime}_{k}\};a_{i})x+\mathbf{A}_{2}(\{a^{\prime}_{1},...,a^{\prime}_{k}\};a_{i})x^{2}\big)^{-\frac{D-n}{2}}\\ &\quad\times\big(1+\mathbf{B}_{1}(\{a^{\prime}_{1},...,a^{\prime}_{k}\};a_{i})x+\mathbf{B}_{2}(\{a^{\prime}_{1},...,a^{\prime}_{k}\};a_{i})x^{2}\big)^{\frac{D-n-3}{2}}\\ &\quad\times\mathbf{HG}_{l}(n-1;k;\{b_{1},b_{2},...,b_{k-1},1\};[x_{-}-x_{+}]_{\mathbf{I}_{k+1}}\cdot x)\Big\}\Bigg\}.\end{split}\]
The coefficients \(\mathbf{P}_{1}(\mathbf{a};b)\), \(\mathbf{P}_{2}(\mathbf{a};b)\), \(\mathbf{Q}_{0}(\mathbf{a};b)\), \(\mathbf{Q}_{1}(\mathbf{a};b)\), \(\mathbf{A}_{1}(\mathbf{a};b)\), \(\mathbf{A}_{2}(\mathbf{a};b)\), \(\mathbf{B}_{1}(\mathbf{a};b)\), and \(\mathbf{B}_{2}(\mathbf{a};b)\) are defined by simply replacing all instances of labels \(a_{1}\) and \(a_{2}\) in (5.11), (5.15) with the label list \(\{a^{\prime}_{1},...,a^{\prime}_{k}\}\) and the single label \(a_{i}\), respectively. Then, upon carrying out the series expansion, it transforms into
\[\begin{split}&\ell^{1-D+n}\sum_{m_{k+1}=0}^{\infty}\int_{0}^{x_{\mathbf{I}_{k+1}}(\ell)}dx\ x^{\sum_{j=1}^{k+1}m_{j}+D-n+k-1}\\ &\times\Big\{(\mathbf{0})_{m_{k+1}}(n;k;\{a^{\prime}_{1},...,a^{\prime}_{k}\};a_{i};\sum_{j=1}^{k}m_{j})\cdot\mathbf{HG}_{l}(n-1;k;\{b_{1},b_{2},...,b_{k-1},0\};[x_{-}-x_{+}]_{\mathbf{I}_{k+1}}\cdot x)\\ &\quad+(\mathbf{1})_{m_{k+1}}(n;k;\{a^{\prime}_{1},...,a^{\prime}_{k}\};a_{i};\sum_{j=1}^{k}m_{j})\cdot\mathbf{HG}_{l}(n-1;k;\{b_{1},b_{2},...,b_{k-1},1\};[x_{-}-x_{+}]_{\mathbf{I}_{k+1}}\cdot x)\Big\}. \tag{7.10}\end{split}\]
According to the integration rule (7.2), the above equation becomes
\[\sum_{m_{k+1}=0}^{\infty}\Bigg{\{}\frac{1}{\sum_{j=1}^{k+1}m_{j}+ D-n+k}\cdot\frac{\ell^{\sum_{j=1}^{k+1}m_{j}+k+1}}{(1-[x_{+}]\mathbf{I}_{k+1} \cdot\ell)^{\sum_{j=1}^{k+1}m_{j}+D-n+k}}\] \[\times\sum_{b_{k}=0}^{1}(\mathbf{b}_{k})_{m_{k+1}}(n;k;\{a^{ \prime}_{1},...a^{\prime}_{k}\};a_{i};\sum_{j=1}^{k}m_{j})\cdot\mathbf{HG}_{l }(n;k+1;\{b_{1},b_{2},...,b_{k-1},b_{k},1\};\mathscr{W}_{\mathbf{I}_{k+1}}( \ell))\Bigg{\}}. \tag{7.11}\]
When everything is added together, we obtain the generation function of an \(n\)-gon to an \((n-(k+1))\)-gon as follows:
\[\begin{split}&\mathbf{GF}_{n\to n;\widehat{\mathbf{I}}_{k+1}}(\ell)\\ =&(1-x_{+}\cdot\ell)^{\frac{-2+D-n}{2}}(1-x_{-}\cdot\ell)^{\frac{-2+D-n}{2}}\times\sum_{l=1}^{2}\sum_{m_{1},...,m_{k+1}=0}^{\infty}\sum_{b_{1},...,b_{k}=0}^{1}\sum_{a_{i},i=1}^{k+1}\sum_{\{a^{\prime}_{1},...,a^{\prime}_{k}\}\in\sigma(\mathbf{I}_{k+1}/a_{i})}\\ &\times\Bigg\{\frac{1}{\sum_{j=1}^{k+1}m_{j}+D-n+k}\cdot\frac{\ell^{\sum_{j=1}^{k+1}m_{j}+k+1}}{(1-[x_{+}]_{\mathbf{I}_{k+1}}\cdot\ell)^{\sum_{j=1}^{k+1}m_{j}+D-n+k}}\\ &\times(\mathbf{b}_{k})_{m_{k+1}}(n;k;\{a^{\prime}_{1},...,a^{\prime}_{k}\};a_{i};\sum_{j=1}^{k}m_{j})\cdot[\mathbf{C}^{(a^{\prime}_{1},...,a^{\prime}_{k})}_{\{b_{1},...,b_{k-1}\}}(n-1)]_{a_{i}}[\mathbf{C}^{(l)}_{a^{\prime}_{1}}(n-(k+1)+1;m_{1})]_{a^{\prime}_{2},...,a^{\prime}_{k},a_{i}}\\ &\times\mathbf{HG}_{l}(n;k+1;\{b_{1},b_{2},...,b_{k-1},b_{k},1\};\mathscr{W}_{\mathbf{I}_{k+1}}(\ell))\Bigg\}. \tag{7.12}\end{split}\]
From the definition of \(\mathbf{C}^{(a^{\prime}_{1},...,a^{\prime}_{k})}_{\{b_{1},...,b_{k-1}\}}(n)\) (6.9), we have
\[\begin{split}&[\mathbf{C}^{(a^{\prime}_{1},...,a^{\prime}_{k})}_{ \{b_{1},...,b_{k-1}\}}(n-1)]_{a_{i}}\cdot(\mathbf{b}_{k})_{m_{k+1}}(n;k;\{a^{ \prime}_{1},...a^{\prime}_{k}\};a_{i};\sum_{j=1}^{k}m_{j})\\ =&[(\mathbf{b}_{1})_{m_{2}}(n-k+1;1;a^{\prime}_{1} ;a^{\prime}_{2};m_{1})]_{a^{\prime}_{3},...,a^{\prime}_{k},a_{i}}\\ \cdot&[(\mathbf{b}_{2})_{m_{3}}(n-k+2;2;\{a^{\prime }_{1},a^{\prime}_{2}\};a^{\prime}_{3};m_{1}+m_{2})]_{a^{\prime}_{4},...,a^{ \prime}_{k},a_{i}}\\ &\ldots\\ \cdot&[(\mathbf{b}_{j})_{m_{j+1}}(n-k+j;j;\{a^{\prime}_ {1},...,a^{\prime}_{j}\};a^{\prime}_{j+1};m_{1}+\cdots+m_{j})]_{a^{\prime}_{j+ 2},...,a^{\prime}_{k},a_{i}}\\ &\ldots\\ \cdot&[(\mathbf{b}_{k-1})_{m_{k}}(n;k-1;\{a^{\prime}_{1},...,a^{\prime}_{k-1}\};a^{\prime}_{k};m_{1}+...+m_{k-1})]_{a_{i}}\\ \cdot&(\mathbf{b}_{k})_{m_{k+1}}(n;k;\{a^{\prime}_ {1},...a^{\prime}_{k}\};a_{i};\sum_{j=1}^{k}m_{j})\\ =&\mathbf{C}^{(a^{\prime}_{1},...,a^{\prime}_{k},a _{i})}_{\{b_{1},...,b_{k}\}}(n).\end{split} \tag{7.13}\]
And obviously,
\[\sum_{a_{i},i=1}^{k+1}\sum_{\{a^{\prime}_{1},...,a^{\prime}_{k}\}\in\sigma( \mathbf{I}_{k+1}/a_{i})}\mathbf{C}^{(a^{\prime}_{1},...,a^{\prime}_{k},a_{i}) }_{\{b_{1},...,b_{k}\}}(n)=\sum_{\{a^{\prime}_{1},...,a^{\prime}_{k+1}\}\in \sigma(\mathbf{I}_{k+1})}\mathbf{C}^{(a^{\prime}_{1},...,a^{\prime}_{k},a^{ \prime}_{k+1})}_{\{b_{1},...,b_{k}\}}(n). \tag{7.14}\]
Then, we can see that (7.12) precisely aligns with our result (6.5) in the \(k+1\) case. Therefore, we have successfully validated the correctness of our result.
## 8 Conclusion
In this paper, we present an explicit expression for the generation function for the reduction of an \(n\)-gon to an \((n-k)\)-gon for general \(k\). We formulate a novel recursive relation among generation functions, based on our investigation of Feynman parametrization in projective space. This newly established relation is a single ordinary differential equation in the variable \(\ell\). Solving this equation requires carrying out the integral part in three steps: (1) change the integration variable; (2) Taylor-expand the appropriate part of the integrand; (3) select the suitable integration formula. In the end, we uncover the rule behind the general term formula of the generation functions.
In addition, several comments are in order. Firstly, it is understood that the reduction coefficients should be **rational** functions of Lorentz invariants such as \(q_{i}\cdot q_{j},M_{i}^{2}\) and of the spacetime dimension \(D\). However, in the expression (6.5) there exist irrational terms, such as the square root term
\[[x_{+}-x_{-}]_{\mathbf{I}_{k}}=4\sqrt{(\overline{L}\overline{L})_{\mathbf{I}_{ k}}R^{2}+(\overline{VL})_{\mathbf{I}_{k}}^{2}-(\overline{L}\overline{L})_{ \mathbf{I}_{k}}(\overline{VV})_{\mathbf{I}_{k}}}/(\overline{L}\overline{L})_{ \mathbf{I}_{k}}. \tag{8.1}\]
Expressing rational quantities in terms of irrational ones is not unusual in mathematics; the general term formula of the famous Fibonacci sequence is an example:
\[F_{n}=\frac{1}{\sqrt{5}}\left(\left(\frac{1+\sqrt{5}}{2}\right)^{n}-\left( \frac{1-\sqrt{5}}{2}\right)^{n}\right). \tag{8.2}\]
This may suggest that the analytic structure of the master integrals and reduction coefficients has not yet been explored deeply enough in the existing literature.
Secondly, the generation function discussed in this paper is valid in any spacetime dimension \(D\) for which all scalar integrals with a single pole are irreducible. When the spacetime dimension \(D\) takes a finite value (such as \(4-2\epsilon\)), some scalar integrals become reducible; what would the form of the generation functions be in this case?
Thirdly, the simplicity of the method in this paper lies in the fact that the recursion relation established is an ordinary differential equation rather than a set of partial differential equations. The foundation of this relation is the study of the analytic structure of Feynman parametrization in projective space. Could this logic be extrapolated to two loops, thus obtaining the general form of the two-loop generation function, or at least some simpler recursion relations?
###### Acknowledgements.
We would like to thank Bo Feng, Jiaqi Chen and Xiang Li for useful discussion. We are also grateful to Prof. Dr. Johannes Bluemlein for providing valuable feedback on this paper.
## Appendix A Solving differential equations
The standard form of the first-order linear ordinary differential equation is:
\[y^{\prime}+p(x)y=q(x).\] (A.1)
The method for solving this differential equation, i.e., the method of variation of parameters, is as follows:
Firstly, we solve the corresponding homogeneous equation:
\[y^{\prime}_{H}+p(x)y_{H}=0. \tag{A.2}\]
It's not hard to get:
\[y_{H}=Ce^{-\int p(x)\,dx}. \tag{A.3}\]
Then we change the constant \(C\) to \(C(x)\), that is:
\[y_{P}=C(x)e^{-\int p(x)\,dx}. \tag{A.4}\]
Substituting this into the original equation, it is found that:
\[y^{\prime}_{P}+p(x)y_{P}=C^{\prime}(x)e^{-\int p(x)\,dx}=q(x). \tag{A.5}\]
By rearranging terms and integrating, we get:
\[C(x)=\int q(x)e^{\int p(x)\,dx}\,dx. \tag{A.6}\]
Therefore, the solution of the original equation is:
\[y=e^{-\int p(x)\,dx}\int q(x)e^{\int p(x)\,dx}\,dx. \tag{A.7}\]
This result can be used as a formula.
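As a concrete illustration, the following minimal sketch (assuming the Python library mpmath; the sample equation \(y'+y/x=x\) is ours, not from the paper) compares the formula (A.7) with a direct numerical integration:

```python
# Hedged sketch: for p(x) = 1/x, q(x) = x, formula (A.7) gives
# y = x^2/3 + C/x; take C = 0 and compare against mpmath's ODE solver.
from mpmath import mp, mpf, odefun

mp.dps = 20
formula = lambda x: x**2/3            # closed form from (A.7) with C = 0

y = odefun(lambda x, y: x - y/x, 1, mpf(1)/3)   # y(1) = 1/3 matches C = 0
print(y(2), formula(mpf(2)))          # both are 4/3 up to numerical error
```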
In our case (2.16), the differential equation has the typical form with a non-homogeneous term \(g(x)\),
\[\left((a-1)+(a/2)A_{1}x+A_{2}x^{2}\right)y(x)+x(1+A_{1}x+A_{2}x^{2})y^{\prime}(x)=g(x), \tag{A.8}\]
where \(a,A_{1},A_{2}\) are coefficients independent of the variable \(x\). Then \(p(x)\), \(q(x)\) in (A.1) are:
\[p(x)=\frac{(a-1)+(a/2)A_{1}x+A_{2}x^{2}}{x(1+A_{1}x+A_{2}x^{2})}, \tag{A.9}\]
\[q(x)=\frac{g(x)}{x(1+A_{1}x+A_{2}x^{2})}. \tag{A.10}\]
We split the integral of \(p(x)\) into
\[\begin{split}\int p(x)dx&=\int\frac{(a-1)+(a/2)A_{1}x+A_{2}x^{2}}{x(1+A_{1}x+A_{2}x^{2})}\ dx\\ &=\int\left(\frac{a-1}{x}+(1-\frac{a}{2})\frac{A_{1}+2A_{2}x}{1+A_{1}x+A_{2}x^{2}}\right)dx\\ &=\int\left(\frac{a-1}{x}+(1-\frac{a}{2})\frac{(1+A_{1}x+A_{2}x^{2})^{\prime}}{1+A_{1}x+A_{2}x^{2}}\right)dx\\ &=(a-1)\ln(x)+(1-\frac{a}{2})\ln(1+A_{1}x+A_{2}x^{2})+Const. \tag{A.11}\end{split}\]
Then the general solution of the homogeneous part is
\[y_{H}(x)=Const\cdot e^{-\int p(x)\,dx}=Const\cdot x^{1-a}(1+A_{1}x+A_{2}x^{2})^{\frac{a-2}{2}}. \tag{A.12}\]
By (A.7), the solution of the original equation is
\[\begin{split} y(x)&=e^{-\int p(x)\,dx}\int q(x)e^{ \int p(x)\,dx}\,dx\\ &=x^{1-a}(1+A_{1}x+A_{2}x^{2})^{\frac{a-2}{2}}\left(\int_{0}^{x} g(u)u^{a-2}(1+A_{1}u+A_{2}u^{2})^{-\frac{a}{2}}du+Const\right).\end{split}\] (A.13)
In the case of the reduction of an \(n\)-gon to an \(n\)-gon, the non-homogeneous part is a constant, \(g(x)=(a-1)\). By the following integral formula:
\[\int(y-x_{1})^{-\frac{a}{2}}(y-x_{2})^{-\frac{a}{2}}dy=\frac{(y-x_{1})^{-\frac{a}{2}}(y-x_{2})^{\frac{2-a}{2}}}{1-a}\cdot\ _{2}F_{1}\left(\begin{matrix}1,\frac{a}{2}\\ a\end{matrix}\,\Big{|}\,\frac{x_{2}-x_{1}}{y-x_{1}}\right)+Const, \tag{A.14}\]
we can solve
\[\begin{split}& y(x)=e^{-\int p(x)\,dx}\int q(x)e^{\int p(x)\,dx}\,dx\\ &=x^{1-a}(1+A_{1}x+A_{2}x^{2})^{\frac{a-2}{2}}\left((a-1)\int_{0} ^{x}u^{a}(1+A_{1}u+A_{2}u^{2})^{-\frac{a}{2}}u^{-2}du+Const\right)\\ &=\frac{1}{x}\left(\frac{1}{x^{2}}+A_{1}\frac{1}{x}+A_{2}\right)^ {\frac{a-2}{2}}\left((a-1)\int_{\infty}^{\frac{1}{x}}-\left(\frac{1}{u^{2}}+A_ {1}\frac{1}{u}+A_{2}\right)^{-\frac{a}{2}}d\left(\frac{1}{u}\right)+Const \right)\\ &=(1-a)\frac{1}{x}\left(\frac{1}{x}-x_{+}\right)^{\frac{a-2}{2}} \left(\frac{1}{x}-x_{-}\right)^{\frac{a-2}{2}}\left(\int_{\infty}^{\frac{1}{x }}(y-x_{+})^{-\frac{a}{2}}(y-x_{-})^{-\frac{a}{2}}\,dy+Const\right)\\ &=\frac{1}{x}\left(\frac{1}{x}-x_{+}\right)^{\frac{a-2}{2}}\left( \frac{1}{x}-x_{-}\right)^{\frac{a-2}{2}}\left(\left\{(y-x_{+})^{-\frac{a}{2}} (y-x_{-})^{\frac{2-a}{2}}\cdot\ _{2}F_{1}\left(\frac{1,a/2}{a}\left|\frac{x_{-}-x_{+}}{y-x_{+}}\right.\right) \right|_{y=\infty}^{\frac{1}{x}}+Const\right)\\ &=x^{1-a}(1-x_{+}\cdot x)^{\frac{a-2}{2}}(1-x_{-}\cdot x)^{\frac{ a-2}{2}}\cdot Const+\frac{1}{1-x_{+}\cdot x}\cdot\ _{2}F_{1}\left(\frac{1,a/2}{a}\left|\frac{(x_{-}-x_{+})x}{1-x_{+}\cdot x}\right. \right),\end{split}\] (A.15)
where \(x_{-},x_{+}\) are two roots of the quadratic equation \(x^{2}+A_{1}x+A_{2}=0\),7
Footnote 7: We can also choose \(x_{+}=\frac{-A_{1}-\sqrt{A_{1}^{2}-4A_{2}}}{2},\ x_{-}=\frac{-A_{1}+\sqrt{A_{1 }^{2}-4A_{2}}}{2}\). In this way, we will have another form of solution. Two forms are equivalent due to the properties of hypergeometric functions.
\[x_{+}=\frac{-A_{1}+\sqrt{A_{1}^{2}-4A_{2}}}{2},\ x_{-}=\frac{-A_{1}-\sqrt{A_{1 }^{2}-4A_{2}}}{2}.\] (A.16)
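As a quick sanity check of the integral formula (A.14), on which the closed form (A.15) rests, the following minimal sketch (assuming mpmath; the parameter values are arbitrary) differentiates the right-hand side of (A.14) numerically and compares with the integrand:

```python
# Hedged sketch: d/dy of the right-hand side of (A.14) should reproduce the
# integrand (y-x1)^(-a/2) (y-x2)^(-a/2); checked numerically with mpmath.
from mpmath import mp, mpf, hyp2f1, diff

mp.dps = 20
a, x1, x2, y = mpf('3.4'), mpf('0.2'), mpf('-0.7'), mpf('2.0')

rhs = lambda t: (t - x1)**(-a/2)*(t - x2)**(1 - a/2)/(1 - a) \
    * hyp2f1(1, a/2, a, (x2 - x1)/(t - x1))
print(diff(rhs, y))
print((y - x1)**(-a/2)*(y - x2)**(-a/2))  # the two values agree
```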
## Appendix B Numerical check
In this section we verify our generation functions numerically by comparing against results produced by the C++ version of FIRE6 [28]. The tensor integral we reduce is defined as follows:
\[\int\frac{d^{D}l}{i\pi^{D/2}}\frac{(2R\cdot l)^{r}}{D_{1}D_{2}\cdots D_{n}}. \tag{B.1}\]
### Tadpole
Let us begin with the trivial tadpole case, where there is only one master integral. We choose the numerical values of the kinematic variables as follows:
\[M_{1}^{2}\rightarrow\frac{5}{4},\ q_{1}^{2}\to 0,\ R^{2}\rightarrow\frac{133}{10},\ q_{1}\cdot R\to 0,\ D\rightarrow\frac{969}{50}. \tag{B.2}\]
* Tadpole to Tadpole \(1\to 1\)
The numerical Taylor expansion of the generation function is given as follows:
\[\begin{split}\mathbf{GF}_{1\to 1}(\mathpzc{t})=&\ 1.+3.4313725490196076*\mathpzc{t}^{2}+32.0186540472129*\mathpzc{t}^{4}\\ &+455.355109952878*\mathpzc{t}^{6}+8351.765314541557*\mathpzc{t}^{8}\\ &+182561.41492889417*\mathpzc{t}^{10}+\mathcal{O}(\mathpzc{t}^{12}), \tag{B.3}\end{split}\]
which matches the results of FIRE6 perfectly.
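This simplest case can also be cross-checked independently of both codes: for a single massive tadpole, symmetric integration gives a closed form for the reduction coefficients. A sketch (ours), assuming the standard formula \((2R\cdot l)^{2k}\to(2k-1)!!\,(4R^{2}M^{2})^{k}/\big(D(D+2)\cdots(D+2k-2)\big)\) times the scalar tadpole, with odd ranks vanishing:

```python
# Independent cross-check of Table 1: rank-r tadpole reduction coefficients
# from the symmetric-integration formula (odd ranks integrate to zero).
from fractions import Fraction

M2, R2, D = Fraction(5, 4), Fraction(133, 10), Fraction(969, 50)

def tadpole_coeff(r):
    if r % 2:
        return Fraction(0)
    c = Fraction(1)
    for j in range(r // 2):              # factor (2j+1) builds (2k-1)!!
        c *= Fraction(2*j + 1) * (4*R2*M2) / (D + 2*j)
    return c

for r in range(0, 11, 2):
    print(r, float(tadpole_coeff(r)))
# 2 -> 3.4313725490196076, 4 -> 32.0186540472129, ... matching Table 1
```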
### Bubble
We choose the numerical values of the kinematic variables as follows:
\[\begin{split}&M_{1}^{2}\rightarrow\frac{5}{4},\ M_{2}^{2}\rightarrow\frac{64}{25},\ q_{1}^{2}\to 0,\ q_{2}^{2}\rightarrow\frac{69}{50},\ q_{1}\cdot q_{2}\to 0,\\ &R^{2}\rightarrow\frac{133}{10},\ q_{1}\cdot R\to 0,\ q_{2}\cdot R\rightarrow\frac{367}{100},\ D\rightarrow\frac{969}{50}. \end{split}\tag{B.4}\]
For a tensor Bubble integral, we have 3 master integrals in total:
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline Rank & 0 & 1 & 2 \\ \hline Result & 1. & 0. & 3.4313725490196076 \\ \hline Rank & 3 & 4 & 5 \\ \hline Result & 0. & 32.0186540472129 & 0. \\ \hline Rank & 6 & 7 & 8 \\ \hline Result & 455.355109952878 & 0. & 8351.765314541557 \\ \hline Rank & 9 & 10 & 11 \\ \hline Result & 0. & 182561.41492889417 & 0. \\ \hline \end{tabular}
\end{table}
Table 1: Reduction Result of Tadpole to Tadpole by FIRE6
* Bubble to Bubble \(2\to 2\) The numerical Taylor expansion of the Generation function is given as follows: \[\begin{split}\mathbf{GF}_{2\to 2}(\mathpzc{t})&=1.+0.18615942028985508*\mathpzc{t}+0.9969550236277754*\mathpzc{t}^{2}\\ &+0.5438748755637103*\mathpzc{t}^{3}+2.7067296633008473*\mathpzc{t}^{4}\\ &+2.394357906466195*\mathpzc{t}^{5}+11.220093828873562*\mathpzc{t}^{6}\\ &+13.471511680934789*\mathpzc{t}^{7}+60.09550492544111*\mathpzc{t}^{8}\\ &+89.67183069722255*\mathpzc{t}^{9}+384.3094569884708*\mathpzc{t}^{10}\\ &+675.8039743685777*\mathpzc{t}^{11}+\mathcal{O}(\mathpzc{t}^{12}). \end{split}\tag{B.5}\]
* Bubble to Tadpole \(2\to 1\) There are two Tadpole master integrals in this case.
The numerical Taylor expansion of the Generation function is given as follows[8]:
\[\begin{split}\mathbf{GF}_{2\to 1;\widehat{1}}(\mathpzc{t})&=2.659420289855*\mathpzc{t}+20.390645089351*\mathpzc{t}^{2}+175.850995881253*\mathpzc{t}^{3}\\ &+1611.153758501756*\mathpzc{t}^{4}+15713.736765104694*\mathpzc{t}^{5}+161200.459764860606*\mathpzc{t}^{6}\\ &+1.726292310057\times 10^{6}*\mathpzc{t}^{7}+1.917940430110\times 10^{7}*\mathpzc{t}^{8}+2.200104554179\times 10^{8}*\mathpzc{t}^{9}\\ &+2.595730084681\times 10^{9}*\mathpzc{t}^{10}+3.139868184273\times 10^{10}*\mathpzc{t}^{11}+\mathcal{O}(\mathpzc{t}^{12}). \end{split}\tag{B.6}\]
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline Rank & 0 & 1 & 2 \\ \hline Result & 1. & 0.18615942028985508 & 0.9969550236277754 \\ \hline Rank & 3 & 4 & 5 \\ \hline Result & 0.5438748755636772 & 2.706729663300785 & 2.394357906464815 \\ \hline Rank & 6 & 7 & 8 \\ \hline Result & 11.220093828863014 & 13.471511680729403 & 60.095504925096684 \\ \hline Rank & 9 & 10 & 11 \\ \hline Result & 89.67183068422794 & 384.30945689202724 & 675.8039722773336 \\ \hline \end{tabular}
\end{table}
Table 2: Reduction Result of Bubble to Bubble by FIRE6
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline Rank & 0 & 1 & 2 \\ \hline Result & 0. & 2.6594202898550723 & 20.390645089351526 \\ \hline Rank & 3 & 4 & 5 \\ \hline Result & 175.8509958812531 & 1611.1537585017563 & 15713.736765104697 \\ \hline Rank & 6 & 7 & 8 \\ \hline Result & 161200.459764860606 & 1.726292310057*\({}^{\wedge}6\) & 1.917940430110*\({}^{\wedge}7\) \\ \hline Rank & 9 & 10 & 11 \\ \hline Result & 2.2001045541792387*\({}^{\wedge}8\) & 2.5957300846819887*\({}^{\wedge}9\) & 3.1398681842738716*\({}^{\wedge}10\) \\ \hline \end{tabular}
\end{table}
Table 3: Reduction Result of Bubble to \(D_{2}\) Tadpole by FIRE6
The numerical Taylor expansion of the Generation function is given as follows:
\[\begin{split}\mathbf{GF}_{2\to 1;\widehat{2}}(\mathpzc{t})&=-2.6594202898550*\mathpzc{t}-0.4853067411154*\mathpzc{t}^{2}-14.0698340471820*\mathpzc{t}^{3}\\ &-3.9250909295456*\mathpzc{t}^{4}-132.4855328633367*\mathpzc{t}^{5}-40.8273633717368*\mathpzc{t}^{6}\\ &-1820.7084111542733*\mathpzc{t}^{7}-554.9841273406652*\mathpzc{t}^{8}-32477.5147882514220*\mathpzc{t}^{9}\\ &-9528.7663298311876*\mathpzc{t}^{10}-697318.0606970731579*\mathpzc{t}^{11}+\mathcal{O}(\mathpzc{t}^{12}). \end{split}\tag{B.7}\]
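All the comparisons in this appendix follow the same pattern: expand the generation function in \(\mathpzc{t}\) and compare rank by rank with FIRE6. A small helper of the kind one might use (a sketch, with the Bubble-to-Bubble numbers of (B.5) and Table 2 pasted in as an illustration):

```python
# Rank-by-rank comparison of generation-function coefficients with FIRE6
# (illustrated with the Bubble-to-Bubble numbers from (B.5) and Table 2).
gf =    [1.0, 0.18615942028985508, 0.9969550236277754, 0.5438748755637103,
         2.7067296633008473, 2.394357906466195, 11.220093828873562,
         13.471511680934789, 60.09550492544111, 89.67183069722255,
         384.3094569884708, 675.8039743685777]
fire6 = [1.0, 0.18615942028985508, 0.9969550236277754, 0.5438748755636772,
         2.706729663300785, 2.394357906464815, 11.220093828863014,
         13.471511680729403, 60.095504925096684, 89.67183068422794,
         384.30945689202724, 675.8039722773336]

for rank, (a, b) in enumerate(zip(gf, fire6)):
    rel = abs(a - b) / max(abs(b), 1e-300)
    print(f"rank {rank:2d}: relative difference {rel:.2e}")
# agreement to ~1e-8 or better, limited by the numerical runs
```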
### Triangle
We choose the numerical values of the kinematic variables as follows:
\[\begin{split}&M_{1}^{2}\to\frac{5}{4},\ M_{2}^{2}\to\frac{64}{25},\ M_{3}^{2}\to\frac{357}{100},\ q_{1}^{2}\to 0,\ q_{2}^{2}\to\frac{69}{50},\ q_{3}^{2}\to\frac{129}{25},\ q_{1}\cdot q_{2}\to 0,\ q_{1}\cdot q_{3}\to 0,\\ &q_{2}\cdot q_{3}\to\frac{41}{20},\ R^{2}\to\frac{133}{10},\ q_{1}\cdot R\to 0,\ q_{2}\cdot R\to\frac{367}{100},\ q_{3}\cdot R\to\frac{89}{10},\ D\to\frac{969}{50}. \end{split}\tag{B.8}\]
* Triangle to Triangle, \(3\to 3\)
The numerical Taylor expansion of the Generation function is given as follows:
\[\begin{split}\mathbf{GF}_{3\to 3}(\mathpzc{t})&=1.+4.647429667957373*\mathpzc{t}+21.424063591070208*\mathpzc{t}^{2}+97.94451396121778*\mathpzc{t}^{3}\\ &+443.9628092202622*\mathpzc{t}^{4}+1994.7300798765605*\mathpzc{t}^{5}+8880.900093335815*\mathpzc{t}^{6}\\ &+39165.84906934403*\mathpzc{t}^{7}+171022.14771781594*\mathpzc{t}^{8}+739055.1488881096*\mathpzc{t}^{9}\\ &+3.15884655856376\times 10^{6}*\mathpzc{t}^{10}+1.3344673752179299\times 10^{7}*\mathpzc{t}^{11}+\mathcal{O}(\mathpzc{t}^{12}). \end{split}\tag{B.9}\]
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline Rank & 0 & 1 & 2 \\ \hline Result & 0. & -2.6594202898550723 & -0.4853067411154148 \\ \hline Rank & 3 & 4 & 5 \\ \hline Result & -14.069834047182031 & -3.9250909295456946 & -132.48553286333672 \\ \hline Rank & 6 & 7 & 8 \\ \hline Result & -40.82736337173687 & -1820.708411154273 & -554.9841273406652 \\ \hline Rank & 9 & 10 & 11 \\ \hline Result & -32477.51478825142 & -9528.7663298311 & -697318.0606970708 \\ \hline \end{tabular}
\end{table}
Table 4: Reduction Result of Bubble to \(D_{1}\) Tadpole by FIRE6
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline Rank & 0 & 1 & 2 \\ \hline Result & 1. & 4.647429667957373 & 21.424063591070208 \\ \hline Rank & 3 & 4 & 5 \\ \hline Result & 97.94451396121778 & 443.9628092202622 & 1994.7300798765605 \\ \hline Rank & 6 & 7 & 8 \\ \hline Result & 8880.900093335815 & 39165.84906934403 & 171022.14771781594 \\ \hline Rank & 9 & 10 & 11 \\ \hline Result & 739055.1488881096 & 3.15884655856376*\({}^{\wedge}\)6 & 1.3344673752179299*\({}^{\wedge}\)7 \\ \hline \end{tabular}
\end{table}
Table 5: Reduction Result of Triangle to Triangle by FIRE6
* Triangle to Bubble, \(3\to 2\) There are 3 Bubble master integrals in this case; we choose two typical integrals to verify our Generation function.
The numerical Taylor expansion of the Generation function is given as follows:
\[\begin{split}\mathbf{GF}_{3\to 2;\widehat{2}}(\mathpzc{t})&=-0.23719288626940*\mathpzc{t}-2.03999113570073*\mathpzc{t}^{2}-12.81229054894491*\mathpzc{t}^{3}\\ &-69.25281622582992*\mathpzc{t}^{4}-336.83725868345638*\mathpzc{t}^{5}-1487.87950339414783*\mathpzc{t}^{6}\\ &-5880.36894893309985*\mathpzc{t}^{7}-19650.58517698221673*\mathpzc{t}^{8}-44502.51591828201196*\mathpzc{t}^{9}\\ &+48219.21614887317354*\mathpzc{t}^{10}+1.50174487262692\times 10^{6}*\mathpzc{t}^{11}+\mathcal{O}(\mathpzc{t}^{12}). \end{split}\tag{B.11}\]
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline Rank & 0 & 1 & 2 \\ \hline Result & 0. & 1.867765479902683 & 27.80579292856186 \\ \hline Rank & 3 & 4 & 5 \\ \hline Result & 326.58851259528217 & 3575.377043547329 & 38296.692771726164 \\ \hline Rank & 6 & 7 & 8 \\ \hline Result & 408802.874852242 & 4.381795021788758*\({}^{\wedge}6\) & 4.730368560885347*\({}^{\wedge}7\) \\ \hline Rank & 9 & 10 & 11 \\ \hline Result & 5.148927898936717*\({}^{\wedge}8\) & 5.65229107270466*\({}^{\wedge}9\) & 6.2570409303155396*\({}^{\wedge}10\) \\ \hline \end{tabular}
\end{table}
Table 6: Reduction Result of Triangle to \(D_{2}D_{3}\) Bubble by FIRE6
* Triangle to Tadpole, \(3\to 1\) There are 3 Tadpole master integrals in this case.
The numerical Taylor expansion of the Generation function is given as follows:
\[\begin{split}\mathbf{GF}_{3\to 1;\overline{12}}(\mathpzc{t})&=3.594336378098*\mathpzc{t}^{2}+120.154053719021*\mathpzc{t}^{3}+2856.489239938459*\mathpzc{t}^{4}\\ &+60343.814251369669*\mathpzc{t}^{5}+1.220274319110\times 10^{6}*\mathpzc{t}^{6}+2.439974986119\times 10^{7}*\mathpzc{t}^{7}\\ &+4.899382591712\times 10^{8}*\mathpzc{t}^{8}+9.953033813770\times 10^{9}*\mathpzc{t}^{9}\\ &+2.052398465612956\times 10^{11}*\mathpzc{t}^{10}+4.300770011403\times 10^{12}*\mathpzc{t}^{11}+\mathcal{O}(\mathpzc{t}^{12}). \end{split}\tag{B.12}\]
The numerical Taylor expansion of the Generation function is given as follows:
\[\begin{split}\mathbf{GF}_{3\to 1;\overline{23}}(\mathpzc{t})&=4.74548960079105*\mathpzc{t}^{2}+24.9125640269587*\mathpzc{t}^{3}+142.5977420103585*\mathpzc{t}^{4}\\ &+671.4679047318388*\mathpzc{t}^{5}+3235.5137500307829*\mathpzc{t}^{6}+14241.4163190202519*\mathpzc{t}^{7}\\ &+63070.0424103008628*\mathpzc{t}^{8}+258765.7135713077273*\mathpzc{t}^{9}+1.0605665599514\times 10^{6}*\mathpzc{t}^{10}\\ &+3.9621591091094\times 10^{6}*\mathpzc{t}^{11}+\mathcal{O}(\mathpzc{t}^{12}). \end{split}\tag{B.13}\]
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline Rank & 0 & 1 & 2 \\ \hline Result & 0. & 0. & 3.594336378098 \\ \hline Rank & 3 & 4 & 5 \\ \hline Result & 120.15405371902189 & 2856.4892399384603 & 60343.814251369666 \\ \hline Rank & 6 & 7 & 8 \\ \hline Result & 1.2202743191102992*\({}^{\wedge}6\) & 2.439974986119352*\({}^{\wedge}7\) & 4.8993825917122436*\({}^{\wedge}8\) \\ \hline Rank & 9 & 10 & 11 \\ \hline Result & 9.95303381377052*\({}^{\wedge}9\) & 2.0523984656129572*\({}^{\wedge}11\) & 4.3007700114033403*\({}^{\wedge}12\) \\ \hline \end{tabular}
\end{table}
Table 8: Reduction Result of Triangle to \(D_{3}\) Tadpole by FIRE6
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline Rank & 0 & 1 & 2 \\ \hline Result & 0. & 0. & 4.745489600791057 \\ \hline Rank & 3 & 4 & 5 \\ \hline Result & 24.912564026958766 & 142.5977420103585 & 671.4679047318388 \\ \hline Rank & 6 & 7 & 8 \\ \hline Result & 3235.5137500307833 & 14241.416319020253 & 63070.04241030086 \\ \hline Rank & 9 & 10 & 11 \\ \hline Result & 258765.71357130772 & 1.0605665599514614*\({}^{\wedge}6\) & 3.962159109109401*\({}^{\wedge}6\) \\ \hline \end{tabular}
\end{table}
Table 9: Reduction Result of Triangle to \(D_{1}\) Tadpole by FIRE6
The numerical Taylor expansion of the Generation function is given as follows:
\[\begin{split}\mathbf{GF}_{3\to 1;\overline{13}}(\mathpzc{t})&=-8.3398259788895*\mathpzc{t}^{2}-142.4521808032218*\mathpzc{t}^{3}-1909.4499907718186*\mathpzc{t}^{4}\\ &-23603.9939245739767*\mathpzc{t}^{5}-282767.9344618817639*\mathpzc{t}^{6}-3.3472939483481\times 10^{6}*\mathpzc{t}^{7}\\ &-3.9512933931432\times 10^{7}*\mathpzc{t}^{8}-4.6734814744642\times 10^{8}*\mathpzc{t}^{9}\\ &-5.5538293459399\times 10^{9}*\mathpzc{t}^{10}-6.6426540846106\times 10^{10}*\mathpzc{t}^{11}+\mathcal{O}(\mathpzc{t}^{12}). \end{split}\tag{B.14}\]
### Box
We choose the numerical values of the kinematic variables as follows:
\[\begin{split}&M_{1}^{2}\to\frac{5}{4},\ M_{2}^{2}\to\frac{64}{25},\ M_{3}^{2}\to\frac{357}{100},\ M_{4}^{2}\to\frac{339}{50},\ q_{1}^{2}\to 0,\ q_{2}^{2}\to\frac{69}{50},\ q_{3}^{2}\to\frac{129}{25},\\ &q_{4}^{2}\to\frac{2069}{100},\ q_{1}\cdot q_{2}\to 0,\ q_{1}\cdot q_{3}\to 0,\ q_{1}\cdot q_{4}\to 0,\ q_{2}\cdot q_{3}\to\frac{41}{20},\\ &q_{2}\cdot q_{4}\to\frac{93}{25},\ q_{3}\cdot q_{4}\to\frac{56}{5},\ R^{2}\to\frac{133}{10},\ q_{1}\cdot R\to 0,\\ &q_{2}\cdot R\to\frac{367}{100},\ q_{3}\cdot R\to\frac{89}{10},\ q_{4}\cdot R\to\frac{2413}{100},\ D\to\frac{969}{50}. \end{split}\tag{B.15}\]
* Box to Box, \(4\to 4\)
The numerical Taylor expansion of the Generation function is given as follows:
\[\begin{split}\mathbf{GF}_{4\to 4}(\mathpzc{t})&=1.-3.6920841521541408*\mathpzc{t}+16.55377492166991*\mathpzc{t}^{2}-82.69660780758976*\mathpzc{t}^{3}\\ &+447.6598650239196*\mathpzc{t}^{4}-2578.2813404411271*\mathpzc{t}^{5}+15614.70357334658*\mathpzc{t}^{6}\\ &-98601.17919296597*\mathpzc{t}^{7}+645114.4523761913*\mathpzc{t}^{8}-4.351872986943167\times 10^{6}*\mathpzc{t}^{9}\\ &+3.0151991318042282\times 10^{7}*\mathpzc{t}^{10}-2.1388978346067291\times 10^{8}*\mathpzc{t}^{11}+\mathcal{O}(\mathpzc{t}^{12}). \end{split}\tag{B.16}\]
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline Rank & 0 & 1 & 2 \\ \hline Result & 0. & 0. & -8.339825978889557 \\ \hline Rank & 3 & 4 & 5 \\ \hline Result & -142.45218080322186 & -1909.4499907718186 & -23603.993924573977 \\ \hline Rank & 6 & 7 & 8 \\ \hline Result & -282767.93446188176 & -3.3472939483481306*\({}^{\wedge}\)6 & -3.951293393143239*\({}^{\wedge}\)7 \\ \hline Rank & 9 & 10 & 11 \\ \hline Result & -4.673481474464273*\({}^{\wedge}\)8 & -5.553829345939927*\({}^{\wedge}\)9 & -6.642654084610651*\({}^{\wedge}\)10 \\ \hline \end{tabular}
\end{table}
Table 10: Reduction Result of Triangle to \(D_{2}\) Tadpole by FIRE6
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline Rank & 0 & 1 & 2 \\ \hline Result & 1. & -3.6920841521541408 & 16.55377492166991 \\ \hline Rank & 3 & 4 & 5 \\ \hline Result & -82.69660780758952 & 447.6598650239185 & -2578.2813404112676 \\ \hline Rank & 6 & 7 & 8 \\ \hline Result & 15614.703573346487 & -98601.17919296624 & 645114.4523761669 \\ \hline Rank & 9 & 10 & 11 \\ \hline Result & -4.35187298694318*\({}^{\wedge}\)6 & 3.0151991318039116*\({}^{\wedge}\)7 & -2.13889783460708*\({}^{\wedge}\)8 \\ \hline \end{tabular}
\end{table}
Table 11: Reduction Result of Box to Box by FIRE6
* Box to Triangle, \(4\to 3\) We choose two typical master integrals to verify our Generation function.
The numerical Taylor expansion of the Generation function is given as follows:
\[\begin{split}\mathbf{GF}_{4\to 3;\widehat{1}}(\mathpzc{t})&=2.29729169547986*\mathpzc{t}-3.93837908147458*\mathpzc{t}^{2}+60.59624140539745*\mathpzc{t}^{3}\\ &-100.49970077717935*\mathpzc{t}^{4}+2298.75968226462589*\mathpzc{t}^{5}-1488.72664298211035*\mathpzc{t}^{6}\\ &+114099.44338526357090*\mathpzc{t}^{7}+128454.40427289285689*\mathpzc{t}^{8}+7.11049257419213\times 10^{6}*\mathpzc{t}^{9}\\ &+2.39597444287221\times 10^{7}*\mathpzc{t}^{10}+5.39796222751070\times 10^{8}*\mathpzc{t}^{11}+\mathcal{O}(\mathpzc{t}^{12}). \end{split}\tag{B.17}\]
The numerical Taylor expansion of the Generation function is given as follows:
\[\begin{split}\mathbf{GF}_{4\to 3;\widehat{2}}(\mathpzc{t})&=1.172764648306*\mathpzc{t}-13.437042633643*\mathpzc{t}^{2}+133.449697334675*\mathpzc{t}^{3}\\ &-1305.812478155072*\mathpzc{t}^{4}+12995.470486241735*\mathpzc{t}^{5}-132770.841742329151*\mathpzc{t}^{6}\\ &+1.395549705736\times 10^{6}*\mathpzc{t}^{7}-1.508260416218\times 10^{7}*\mathpzc{t}^{8}+1.673393577774\times 10^{8}*\mathpzc{t}^{9}\\ &-1.902215247513\times 10^{9}*\mathpzc{t}^{10}+2.211010564682\times 10^{10}*\mathpzc{t}^{11}+\mathcal{O}(\mathpzc{t}^{12}). \end{split}\tag{B.18}\]
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline Rank & 0 & 1 & 2 \\ \hline Result & 0. & 1.1727646483063998 & -13.437042633643701 \\ \hline Rank & 3 & 4 & 5 \\ \hline Result & 133.44969733467502 & -1305.8124781550728 & 12995.470486241737 \\ \hline Rank & 6 & 7 & 8 \\ \hline Result & -132770.84174232915 & 1.395549705736787*\(\wedge\)6 & -1.5082604162182923*\(\wedge^{\prime}7\) \\ \hline Rank & 9 & 10 & 11 \\ \hline Result & 1.6733935777742624*\(\wedge\)8 & -1.902215247513259*\(\wedge\)9 & 2.2110105646825863*\(\wedge\)10 \\ \hline \end{tabular}
\end{table}
Table 13: Reduction Result of Box to \(D_{1}D_{3}D_{4}\) Triangle by FIRE6
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline Rank & 0 & 1 & 2 \\ \hline Result & 0. & 2.297291695479869 & -3.938379081474582 \\ \hline Rank & 3 & 4 & 5 \\ \hline Result & 60.596241405397464 & -100.49970077717936 & 2298.7596822646256 \\ \hline Rank & 6 & 7 & 8 \\ \hline Result & -1488.7266429821102 & 114099.44338526356 & 128454.40427289286 \\ \hline Rank & 9 & 10 & 11 \\ \hline Result & 7.110492574192137*\(\wedge\)6 & 2.3959744428722158*\(\wedge\)7 & 5.397962227510707*\(\wedge\)8 \\ \hline \end{tabular}
\end{table}
Table 12: Reduction Result of Box to \(D_{2}D_{3}D_{4}\) Triangle by FIRE6
* Box to Bubble, \(4\to 2\) We choose two typical integrals to verify our Generation function.
The numerical Taylor expansion of the Generation function is given as follows:
\[\begin{split}\mathbf{GF}_{4\to 2;\overline{12}}(\mathpzc{t})&=14.2836444892055*\mathpzc{t}^{2}+204.5667527270622*\mathpzc{t}^{3}+4109.787640764211*\mathpzc{t}^{4}\\ &+54424.2247638628464*\mathpzc{t}^{5}+614054.5523170700231*\mathpzc{t}^{6}+1.9047172844007\times 10^{6}*\mathpzc{t}^{7}\\ &-8.9857430831286\times 10^{7}*\mathpzc{t}^{8}-2.6301936458305\times 10^{9}*\mathpzc{t}^{9}-3.1193255558707\times 10^{10}*\mathpzc{t}^{10}\\ &+1.6840989924241\times 10^{11}*\mathpzc{t}^{11}+\mathcal{O}(\mathpzc{t}^{12}). \end{split}\tag{B.19}\]
The numerical Taylor expansion of the Generation function is given as follows:
\[\begin{split}\mathbf{GF}_{4\to 2;\overline{23}}(\mathpzc{t})&=-9.6958862859*\mathpzc{t}^{2}-188.1327746246*\mathpzc{t}^{3}-6091.3626749444*\mathpzc{t}^{4}\\ &-152379.1783545349*\mathpzc{t}^{5}-4.0493178915\times 10^{6}*\mathpzc{t}^{6}-1.0346247294844293\times 10^{8}*\mathpzc{t}^{7}\\ &-2.6697476615\times 10^{9}*\mathpzc{t}^{8}-6.8634870548\times 10^{10}*\mathpzc{t}^{9}\\ &-1.7752858181\times 10^{12}*\mathpzc{t}^{10}-4.6131078625\times 10^{13}*\mathpzc{t}^{11}+\mathcal{O}(\mathpzc{t}^{12}). \end{split}\tag{B.20}\]
* Box to Tadpole \(4\to 1\) We choose two typical integrals to verify our Generation function.
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline Rank & 0 & 1 & 2 \\ \hline Result & 0. & 0. & -9.695886285967024 \\ \hline Rank & 3 & 4 & 5 \\ \hline Result & -188.13277462466073 & -6091.362674944455 & -152379.17835453493 \\ \hline Rank & 6 & 7 & 8 \\ \hline Result & -4.049317891589467*\({}^{\wedge}6\) & -1.0346247294844292*\({}^{\wedge}8\) & -2.6697476615708475*\({}^{\wedge}9\) \\ \hline Rank & 9 & 10 & 11 \\ \hline Result & -6.863487054847528*\({}^{\wedge}10\) & -1.7752858181645198*\({}^{\wedge}12\) & -4.61310786256653*\({}^{\wedge}13\) \\ \hline \end{tabular}
\end{table}
Table 15: Reduction Result of Box to \(D_{1}D_{4}\) Bubble by FIRE6
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline Rank & 0 & 1 & 2 \\ \hline Result & 0. & 0. & 14.283644489205553 \\ \hline Rank & 3 & 4 & 5 \\ \hline Result & 204.56675272706224 & 4109.787640764211 & 54424.224763862854 \\ \hline Rank & 6 & 7 & 8 \\ \hline Result & 614054.5523170701 & 1.9047172844007334*\({}^{\wedge}6\) & -8.985743083128655*\({}^{\wedge}7\) \\ \hline Rank & 9 & 10 & 11 \\ \hline Result & -2.6301936458305697*\({}^{\wedge}9\) & -3.119325555870743*\({}^{\wedge}10\) & 1.6840989924241083*\({}^{\wedge}11\) \\ \hline \end{tabular}
\end{table}
Table 14: Reduction Result of Box to \(D_{3}D_{4}\) Bubble by FIRE6
The numerical Taylor expansion of the Generation function is given as follows:
\[\begin{split}\mathbf{GF}_{4\to 1;\overline{234}}(\mathpzc{t})&=-2.397894561*\mathpzc{t}^{3}-26.096086956*\mathpzc{t}^{4}+986.853584210*\mathpzc{t}^{5}\\ &+53824.370611123*\mathpzc{t}^{6}+1.945611485\times 10^{6}*\mathpzc{t}^{7}+5.978729467\times 10^{7}*\mathpzc{t}^{8}\\ &+1.726812691\times 10^{9}*\mathpzc{t}^{9}+4.815140110\times 10^{10}*\mathpzc{t}^{10}+1.320524816\times 10^{12}*\mathpzc{t}^{11}+\mathcal{O}(\mathpzc{t}^{12}). \end{split}\tag{B.21}\]
The numerical Taylor expansion of the Generation function is given as follows:
\[\begin{split}\mathbf{GF}_{4\to 1;\overline{134}}(\mathpzc{t})&=21.9129399542*\mathpzc{t}^{3}+780.2207801551*\mathpzc{t}^{4}+20884.5489592743*\mathpzc{t}^{5}\\ &+490764.9683116434*\mathpzc{t}^{6}+1.0789658304\times 10^{7}*\mathpzc{t}^{7}+2.2654111049\times 10^{8}*\mathpzc{t}^{8}\\ &+4.5715970066\times 10^{9}*\mathpzc{t}^{9}+8.8263188742\times 10^{10}*\mathpzc{t}^{10}+1.6031881275\times 10^{12}*\mathpzc{t}^{11}+\mathcal{O}(\mathpzc{t}^{12}). \end{split}\tag{B.22}\]
### Pentagon
We choose the numerical values of the kinematic variables as follows:
\[\begin{split}& M_{1}^{2}\rightarrow\frac{5}{4},M_{2}^{2} \rightarrow\frac{64}{25},M_{3}^{2}\rightarrow\frac{357}{100},M_{4}^{2} \rightarrow\frac{339}{50},M_{5}^{2}\rightarrow\frac{489}{100},q_{1}^{2} \to 0,q_{2}^{2}\rightarrow\frac{69}{50},q_{3}^{2}\rightarrow\frac{129}{25},\\ & q_{4}^{2}\rightarrow\frac{2069}{100},q_{5}^{2}\rightarrow\frac{1 957}{25},q_{1}\cdot q_{2}\to 0,q_{1}\cdot q_{3}\to 0,q_{1} \cdot q_{4}\to 0,q_{1}\cdot q_{5}\to 0,q_{2}\cdot q_{3} \rightarrow\frac{41}{20},\\ & q_{2}\cdot q_{4}\rightarrow\frac{93}{25},q_{2}\cdot q_{5} \rightarrow\frac{721}{100},q_{3}\cdot q_{4}\rightarrow\frac{56}{5},q_{3} \cdot q_{5}\rightarrow\frac{473}{20},q_{4}\cdot q_{5}\rightarrow\frac{4437}{1 00},R^{2}\rightarrow\frac{133}{10},q_{1}\cdot R\to 0,\end{split}\]
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline Rank & 0 & 1 & 2 \\ \hline Result & 0. & 0. & 0. \\ \hline Rank & 3 & 4 & 5 \\ \hline Result & -2.39789456182502 & -26.096086956520185 & 986.8535842106996 \\ \hline Rank & 6 & 7 & 8 \\ \hline Result & 53824.37061112356 & 1.945611485646892*\({}^{\wedge}6\) & 5.978729467988549*\({}^{\wedge}7\) \\ \hline Rank & 9 & 10 & 11 \\ \hline Result & 1.7268126919862216*\({}^{\wedge}9\) & 4.815140110730695*\({}^{\wedge}10\) & 1.3205248166287803*\({}^{\wedge}12\) \\ \hline \end{tabular}
\end{table}
Table 16: Reduction Result of Box to \(D_{1}\) Tadpole by FIRE6
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline Rank & 0 & 1 & 2 \\ \hline Result & 0. & 0. & 0. \\ \hline Rank & 3 & 4 & 5 \\ \hline Result & 21.9129399542 & 780.2207801551 & 20884.5489592743 \\ \hline Rank & 6 & 7 & 8 \\ \hline Result & 490764.9683116434 & 1.0789658304*\({}^{\wedge}7\) & 2.2654111049*\({}^{\wedge}8\) \\ \hline Rank & 9 & 10 & 11 \\ \hline Result & 4.5715970066*\({}^{\wedge}9\) & 8.8263188742*\({}^{\wedge}10\) & 1.6031881275*\({}^{\wedge}12\) \\ \hline \end{tabular}
\end{table}
Table 17: Reduction Result of Box to \(D_{2}\) Tadpole by FIRE6
\[q_{2}\cdot R\rightarrow\frac{367}{100},\ q_{3}\cdot R\rightarrow\frac{89}{10},\ q_{4}\cdot R\rightarrow\frac{2413}{100},\ q_{5}\cdot R\rightarrow\frac{3737}{100},\ D\rightarrow\frac{969}{50}. \tag{B.23}\]
* Pentagon to Pentagon \(5\to 5\)

\begin{table}
\begin{tabular}{|c|c|c|c|} \hline Rank & 0 & 1 & 2 \\ \hline Result & 1. & 28.14374534067907 & 891.8998182261959 \\ \hline Rank & 3 & 4 & 5 \\ \hline Result & 30720.54870031494 & 1.1282643383555224*\({}^{\wedge}6\) & 4.363353048091063*\({}^{\wedge}7\) \\ \hline Rank & 6 & 7 & 8 \\ \hline Result & 1.7612041198600657*\({}^{\wedge}9\) & 7.370817943138274*\({}^{\wedge}10\) & 3.1822242795255347*\({}^{\wedge}12\) \\ \hline Rank & 9 & 10 & 11 \\ \hline Result & 1.411584384055702*\({}^{\wedge}14\) & 6.412540767851238*\({}^{\wedge}15\) & 2.975347813774955*\({}^{\wedge}17\) \\ \hline \end{tabular}
\end{table}
Table 18: Reduction Result of Pentagon to Pentagon by FIRE6

The numerical Taylor expansion of the Generation function is given as follows: \[\begin{split}\mathbf{GF}_{5\to 5}(\mathpzc{t})&=1.+28.14374534067907*\mathpzc{t}+891.8998182261959*\mathpzc{t}^{2}\\ &+30720.548700314932*\mathpzc{t}^{3}+1.1282643383555175\times 10^{6}*\mathpzc{t}^{4}\\ &+4.3633530480910465\times 10^{7}*\mathpzc{t}^{5}+1.7612041198600552\times 10^{9}*\mathpzc{t}^{6}\\ &+7.370817943138432\times 10^{10}*\mathpzc{t}^{7}+3.1822242795256416\times 10^{12}*\mathpzc{t}^{8}\\ &+1.411584384056077\times 10^{14}*\mathpzc{t}^{9}+6.412540767842304\times 10^{15}*\mathpzc{t}^{10}\\ &+2.975347813764606\times 10^{17}*\mathpzc{t}^{11}+\mathcal{O}(\mathpzc{t}^{12}). \end{split}\tag{B.24}\]
* Pentagon to Box, \(5\to 4\) We choose two typical integrals to verify our Generation function.
The numerical Taylor expansion of the Generation function is given as follows:
\[\begin{split}\mathbf{GF}_{5\to 4;\widehat{1}}(\mathpzc{t})&=1.979457112696436853508*\mathpzc{t}+117.016551663401212089687*\mathpzc{t}^{2}\\ &+5816.790907215448086546878*\mathpzc{t}^{3}+278858.471704801335516720233*\mathpzc{t}^{4}\\ &+1.335618591657268161878\times 10^{7}*\mathpzc{t}^{5}+6.467560554307838030352\times 10^{8}*\mathpzc{t}^{6}\\ &+3.179785385946040065817\times 10^{10}*\mathpzc{t}^{7}+1.589282577913656415226\times 10^{12}*\mathpzc{t}^{8}\\ &+8.075129175644627972880\times 10^{13}*\mathpzc{t}^{9}+4.168736052870256702551\times 10^{15}*\mathpzc{t}^{10}\\ &+2.184890967556854672072\times 10^{17}*\mathpzc{t}^{11}+\mathcal{O}(\mathpzc{t}^{12}). \end{split}\tag{B.25}\]
The numerical Taylor expansion of the Generation function is given as follows:
\[\begin{split}\mathbf{GF}_{5\to 4;\widehat{3}}(\mathpzc{t})&=-4.281411630475701002407*\mathpzc{t}-344.283064611784617369902*\mathpzc{t}^{2}\\ &-22151.543708665336258844567*\mathpzc{t}^{3}-1.324001651820724484404\times 10^{6}*\mathpzc{t}^{4}\\ &-7.657550600362864047165\times 10^{7}*\mathpzc{t}^{5}-4.352883248799942260348\times 10^{9}*\mathpzc{t}^{6}\\ &-2.449234587999095505233\times 10^{11}*\mathpzc{t}^{7}-1.369070967619795957819\times 10^{13}*\mathpzc{t}^{8}\\ &-7.61811824460816177774\times 10^{14}*\mathpzc{t}^{9}-4.225067193912348204002\times 10^{16}*\mathpzc{t}^{10}\\ &-2.337426727880305159126\times 10^{18}*\mathpzc{t}^{11}+\mathcal{O}(\mathpzc{t}^{12}). \end{split}\tag{B.26}\]
* Pentagon to Triangle, \(5\to 3\) We choose two typical integrals to verify our Generation function.
The numerical Taylor expansion of the Generation function is given as follows:
\[\begin{split}\mathbf{GF}_{5\to 3;\widehat{12}}(\mathpzc{t})&=7.32031110137756*\mathpzc{t}^{2}+873.082316242205*\mathpzc{t}^{3}+76037.3424948625*\mathpzc{t}^{4}\\ &+5.89305581731286\times 10^{6}*\mathpzc{t}^{5}+4.32079052040343\times 10^{8}*\mathpzc{t}^{6}\\ &+3.07430541277095\times 10^{10}*\mathpzc{t}^{7}+2.14894585343427\times 10^{12}*\mathpzc{t}^{8}+\mathcal{O}(\mathpzc{t}^{9}). \end{split}\tag{B.27}\]
The numerical Taylor expansion of the Generation function is given as follows:
\[\begin{split}\mathbf{GF}_{5\to 3;\widehat{23}}(\mathpzc{t})&=-1.47485132633965*\mathpzc{t}^{2}-115.811785440317*\mathpzc{t}^{3}-6395.13996956179*\mathpzc{t}^{4}\\ &-288631.678755214*\mathpzc{t}^{5}-1.02042164191151\times 10^{7}*\mathpzc{t}^{6}-1.69252211527118\times 10^{8}*\mathpzc{t}^{7}\\ &+1.55049663413306\times 10^{10}*\mathpzc{t}^{8}+2.36936064166708\times 10^{12}*\mathpzc{t}^{9}\\ &+2.21810000224204\times 10^{14}*\mathpzc{t}^{10}+1.77060597057117\times 10^{16}*\mathpzc{t}^{11}+\mathcal{O}(\mathpzc{t}^{12}). \end{split}\tag{B.28}\]
* Pentagon to Bubble, \(5\to 2\) We choose two typical integrals to verify our Generation function.
The numerical Taylor expansion of the Generation function is given as follows:
\[\begin{split}\mathbf{GF}_{5\to 2;\overline{123}}(\mathpzc{t})&=-3.4504320901448805103*\mathpzc{t}^{3}-598.2368342483164685385*\mathpzc{t}^{4}\\ &-68719.2579238658296473658*\mathpzc{t}^{5}-6.5930722347315969685\times 10^{6}*\mathpzc{t}^{6}\\ &-5.7222811971914510510\times 10^{8}*\mathpzc{t}^{7}-4.6628490111370640487\times 10^{10}*\mathpzc{t}^{8}+\mathcal{O}(\mathpzc{t}^{9}). \end{split}\tag{B.29}\]
The numerical Taylor expansion of the Generation function is given as follows:
\[\begin{split}\mathbf{GF}_{5\to 2;\overline{235}}(\mathpzc{t})&=13.3193038119860332485*\mathpzc{t}^{3}+948.0673270255977993506*\mathpzc{t}^{4}\\ &+58591.3148140837440432520*\mathpzc{t}^{5}+3.3075884251460474463\times 10^{6}*\mathpzc{t}^{6}\\ &+1.7962165060926027683\times 10^{8}*\mathpzc{t}^{7}+9.5171836062295834030\times 10^{9}*\mathpzc{t}^{8}+\mathcal{O}(\mathpzc{t}^{9}). \end{split}\tag{B.30}\]
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline Rank & 0 & 1 & 2 \\ \hline Result & 0. & 0. & -1.4748513263396505 \\ \hline Rank & 3 & 4 & 5 \\ \hline Result & -115.81178544031663 & -6395.139969561789 & -288631.67875521374 \\ \hline Rank & 6 & 7 & 8 \\ \hline Result & -1.0204216419115089*\(\wedge\)7 & -1.6925221152711755*\(\wedge\)8 & 1.5504966341330645*\(\wedge\)10 \\ \hline Rank & 9 & 10 & 11 \\ \hline Result & 2.369360641667081*\(\wedge\)12 & 2.2181000022420444*\(\wedge\)14 & 1.7706059705711586*\(\wedge\)16 \\ \hline \end{tabular}
\end{table}
Table 21: Reduction Result of Pentagon to \(D_{1}D_{4}D_{5}\) Triangle by FIRE6
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline Rank & 0 & 1 & 2 \\ \hline Result & 0. & 0. & 0. \\ \hline Rank & 3 & 4 & 5 \\ \hline Result & -3.45043209014488 & -598.2368342483164 & -68719.25792386582 \\ \hline Rank & 6 & 7 & 8 \\ \hline Result & -6.593072234731598*\(\wedge\)6 & -5.722281197191452*\(\wedge\)8 & -4.662849011137064*\(\wedge\)10 \\ \hline \end{tabular}
\end{table}
Table 22: Reduction Result of Pentagon to \(D_{4}D_{5}\) Bubble by FIRE6
* Pentagon to Tadpole \(5\to 1\) We choose two typical integrals to verify our Generation function.
The numerical Taylor expansion of the Generation function is given as follows:
\[\begin{split}\mathbf{GF}_{5\to 1;\widehat{2345}}(\mathpzc{t})&=-0.46477757320116485*\mathpzc{t}^{4}-76.65585039332822961*\mathpzc{t}^{5}\\ &-7394.86737262350352720*\mathpzc{t}^{6}-504749.27267807984126975*\mathpzc{t}^{7}\\ &-2.95424887976360925\times 10^{7}*\mathpzc{t}^{8}+\mathcal{O}(\mathpzc{t}^{9}). \end{split}\tag{B.32}\]
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline Rank & 0 & 1 & 2 \\ \hline Result & 0. & 0. & 0. \\ \hline Rank & 3 & 4 & 5 \\ \hline Result & 13.319303811986032 & 948.0673270255978 & 58591.314814083744 \\ \hline Rank & 6 & 7 & 8 \\ \hline Result & 3.3075884251460475*\({}^{\wedge}6\) & 1.7962165060926026*\({}^{\wedge}8\) & 9.517183606229582*\({}^{\wedge}9\) \\ \hline \end{tabular}
\end{table}
Table 23: Reduction Result of Pentagon to \(D_{1}D_{4}\) Bubble by FIRE6
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline Rank & 0 & 1 & 2 \\ \hline Result & 0. & 0. & 0. \\ \hline Rank & 6 & 7 & 8 \\ \hline Result & -747095.8262283974 & -6.153099393573891*\({}^{\wedge}7\) & -4.746641454844134*\({}^{\wedge}9\) \\ \hline \end{tabular}
\end{table}
Table 24: Reduction Result of Pentagon to \(D_{4}\) Tadpole by FIRE6
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline Rank & 0 & 1 & 2 \\ \hline Result & 0. & 0. & 0. \\ \hline Rank & 3 & 4 & 5 \\ \hline Result & 0. & -0.46477757320116485 & -76.65585039332822961 \\ \hline Rank & 6 & 7 & 8 \\ \hline Result & -7394.867372623504 & -504749.27267807984 & -2.954248879763609*\({}^{\wedge}7\) \\ \hline \end{tabular}
\end{table}
Table 25: Reduction Result of Pentagon to \(D_{1}\) Tadpole by FIRE6
|
2307.12875 | Digital Advertising: the Measure of Mobile Visits Lifts | Mobile-phone advertising enables marketers to reach customers at a personal
level and it enables the measure of costumers reaction by novel approaches, in
real time, and at scale. By keeping a device anonymous, we can deliver custom
adverts and we can check when the device owner will visit a specific
mortar-and-brick location. This is the first step in a sale. By measuring
visits and sales, the original marketers can determine their return on
advertising and they can prove the efficacy of the marketing investments. We
turn our attention to the measure of lift: we define it as the visit
acceleration during the campaign flight with respect to a controlled baseline.
We present a theoretical description; we describe a general and a simplified
approach in composing the exposed and the control baseline; we develop two
different vertical approaches with different comparable solutions; finally, we
present how to carry out the experiments and the measures for a few dozen
campaigns; these campaigns range from hundreds of thousands of devices,
counting a few hundred visits to a handful of locations, to sixty million
devices, counting millions of visits to thousands of locations. We care about experiments at
scale. | Paolo D'Alberto, Veronica Milenkiy, Fairiz Fi Azizi | 2023-07-24T15:16:36Z | http://arxiv.org/abs/2307.12875v1 | # Digital Advertising: the Measure of Mobile Visits Lifts
###### Abstract
Mobile-phone advertising enables marketers to reach customers at a personal level and it enables the measure of customers' reaction by novel approaches, in real time, and at scale. By keeping a device anonymous, we can deliver custom adverts and we can check when the device owner will visit a specific mortar-and-brick location. This is the first step in a sale. By measuring visits and sales, the original marketers can determine their return on advertising and they can prove the efficacy of the marketing investments. We turn our attention to the measure of lift: we define it as the visit acceleration during the campaign flight with respect to a controlled baseline. We present a theoretical description; we describe a general and a simplified approach in composing the exposed and the control baseline; we develop two different vertical approaches with different comparable solutions; finally, we present how to carry out the experiments and the measures for a few dozen campaigns; these campaigns range from hundreds of thousands of devices, counting a few hundred visits to a handful of locations, to sixty million devices, counting millions of visits to thousands of locations. We care about experiments at scale.
Categories and Subject Descriptors: G.3 [**Probability and Statistics**]: Nonparametric statistics, Statistical software, Time series analysis; A.3 [**Design and analysis of algorithms**]: B.3 [**Theory and algorithms for application domains**]: C.3 [**Computational advertising theory**]
General Terms: Statistics, Algorithms
Additional Key Words and Phrases: N-Sample, series, distribution comparisons, advertising
**ACM Reference Format:**
P. D'Alberto, Veronica Milenkiy, and Fariz Fi Azizi. Visit Lifts ACM V, N, Article A (January YYYY), 27 pages. DOI = 10.1145/0000000.0000000 [http://doi.acm.org/10.1145/0000000.0000000](http://doi.acm.org/10.1145/0000000.0000000)
## 1 Introduction
Advertising reaches customers with propositions and suggestions that appeal to the features of a product for a tailored clientele, in order to increase the product's acceptance and its makers' revenues. If the customer has a mobile device and has been exposed to any adverts, we can measure their influence by counting any form of active action, such as visiting brick-and-mortar locations. In practice, advertising is a social experiment at large scale. Differently from historical social experiments or medical trials, we do not have scale problems and we have a rich, often continuous feature space describing our exposed and control groups. As in a social experiment, we often have a clear intent for the experiment but we may have limited or poor means to measure its effect. Common questions are: did the advertising campaign work? How much did it work? How can we measure visits, goals? How can we claim that the experiment brought more visits than a control baseline? We shall address most of these questions in a constructive way: first, we shall propose one measure and two methodologies; second, we shall describe how the methodologies are related (one is more general than the other); and third, we present experiment results and quantitative measures for the two different approaches. In the following paragraph, we shall sketch an intuitive outline.
We measure a campaign's goals by the **lift**, which is a visit acceleration. We use a mental exercise as an introduction. Two authors, Paolo and Fi, are reached by a campaign advert on the same day (e.g., January 21, 2017) to play golf at Pebble Beach during the next month of February. Fi plays regularly (at Pebble) and Paolo is no good. Fi is an example of re-targeting customers; that is, we expose
a customer already interested. Paolo is at best a newcomer. Veronica was not exposed but she regularly interacts with both Paolo and Fi: she is a candidate for control because she could have been exposed on the same day (January 21, 2017) but she was not, then and thereafter. Intuitively, the campaign has an effect if Fi goes more often than before or Paolo tees off once or more. Inherently, there is a concept of acceleration; that is, the exposed have to do more after exposure, and more than those who are not exposed. Veronica gives a reference for how hard it was to achieve the above goal. If Veronica plays regularly, Fi should also be compared to Veronica. If they appear to tee off at the same pace, then they may be preparing for the Masters (more) or avoiding the green because of an incoming blizzard (less). If Veronica does not play but Paolo's excitement makes her aware of the beauty of the game, Paolo should be compared to Veronica. Our measure of lift must capture this desire to change the pace of visits as time unfolds; this is why we use the term acceleration. Of course, we must consider an average acceleration, and do not forget this is an intuitive but gross oversimplification.
The previous mental exercise introduces acceleration in combination with matching across comparable people. How can we describe customers to draw a comparison? In the example above, if we know that Fi has visited before exposure and Veronica did as well, Fi and Veronica are _better_ comparable than Paolo and Veronica. The visits before exposure are a discriminating feature. If the campaign targets only males, Veronica would not be targeted and, thus, we should not use her as a control baseline. Some targeting features are obtained by voluntary identification and others are the results of approximations. We represent these approximated features using continuous probabilities. In the best scenarios, we are targeting many-to-many matching and thus, once again, aiming at the computation of _average_ accelerations.
For third-party campaigns, we do not know anything about the campaign targeting and we must infer it from the people targeted. In the example, Veronica is the closest control baseline to Fi or to Paolo as a function of the features used: previous visits or gender. Interestingly, there are different approaches to matching, and different matching algorithms will provide different results. In this work, we encourage the application of different methodologies and matching algorithms.
The organization of the paper follows. In Section 2, we introduce our notations and the definition of _Lift_. We present also two interpretations: a balanced approach in Section 2.1 and an unbalanced one in Section 2.2; we show results for both. We present our original contribution for the features computation in a continuous space in Section 3, which is composed of a _Location Graph_, Section 3.1, and a _User Profile_, Section 3.6. We present our original contribution about _Matching_ in Section 4. We present experimental results in Section 5 and we conclude in Section 6.
## 2 Problem Definition and Introductory Notations
We start by presenting yet another example: we start an advertising campaign where we invite the **exposed people** to visit a coffee shop. The exposure is by means of digital advertising and the **visit** is by means of a distance measure between the mobile devices and the coffee shops. The goal of any advertising is to affect visits so that the exposed group has more visits than a **control group** and, because exposed and control can be quite different in size, a better visit rate. We may have very different ways to choose the exposed-control set. We distinguish two ways, based on the observation that the control group is often larger than the exposed group: imagine the whole population versus coffee drinkers.
**Balanced:**: We sample the control using a heuristic so as to balance control and exposed, without changing the average _response_ of control; doing so, we can compare directly the absolute number of visits, but a relative measure is still preferred. Then we generalize the result to the whole experiment (see the sketch after this list).
**Unbalanced:**: We keep the size of each set unbalanced, as they are, and we compare their average response. There is no need to generalize the results found.
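A minimal sketch of the balanced option (the column names `group`, `pre_visits`, and `response` are our illustrative assumptions): down-sample control to the exposed size, stratifying on a pre-period feature so that the control's average response is roughly preserved.

```python
# Sketch of balanced control sampling; `users` is a hypothetical DataFrame
# with columns: group ('exposed'/'control'), pre_visits, response.
import pandas as pd

def balanced_control(users: pd.DataFrame, seed: int = 0) -> pd.DataFrame:
    exposed = users[users.group == 'exposed']
    control = users[users.group == 'control']
    frac = min(1.0, len(exposed) / len(control))
    sampled = (control.groupby('pre_visits', group_keys=False)
                      .apply(lambda g: g.sample(frac=frac, random_state=seed)))
    return pd.concat([exposed, sampled])
```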
A balanced one will emphasize the actual response of each device, because their numbers are equal (see Section 2.1 and Equation 15). This is natural: we like to compare things directly, one-to-one, but
exposure touches a _small_ set and we have to make the control small. An unbalanced one may show how little the exposed group contributes in the absolute number of visits, but because the exposed and control groups are now different in size we must emphasize their average performances. In principle, the balanced and unbalanced approaches should provide good estimates of the campaign performance, especially for large experiments. We shall show that the unbalanced approach is general enough that we can derive a version of the balanced approach from it. In practice, there are constraints that will make the two approaches different. Of course, there can be many variations and applications; a complete comparison is beyond the scope of this work.
Assume that we have a tool to compute the **response** of any user: that is, for any device \(d_{i}\) and any date \(e_{j}\) we have a quantitative measure
\[r(d_{i},e_{j})\]
that gives us the number of visits from time \(e_{j}\) forwards and subtracts the number of visits before \(e_{j}\). The response is a difference between two intervals of time in order to adjust for features that are time sensitive (i.e., \(e_{j}\)); thus, we can account for their effects (better). In social science and medical treatments, a device owner can have a response for being exposed and a response for being not exposed: a treatment \(\gamma\) and not \(\gamma\) (i.e., \(r_{\bar{\gamma}}\)); sometimes placebo means no treatment and sometimes it means a different treatment, and both treatments could be given. In such a scenario, the response would be
\[r(d_{i},e_{j})=r_{\gamma}(d_{i},e_{j})-r_{\bar{\gamma}}(d_{i},e_{j})\]
and we could estimate the effect of the campaign by estimating the expectation of the response statistics:
\[L=E[r(d_{i},e_{j})]=E[r_{\gamma}(d_{i},e_{j})-r_{\bar{\gamma}}(d_{i},e_{j})]. \tag{1}\]
In practice, a user is either exposed or control. Thus for an exposed device \(r_{\bar{\gamma}}(d_{i},e_{j})\) is zero and for a control device \(r_{\gamma}(d_{i},e_{j})\) is zero; thus for control the response has a negative contribution. At the limit, the fact that exposed and control can have different sizes is no issue for the expectation of \(L\); however, in practice the lift should be written as follows
\[Lift=E[r(d,t)|d\in\text{ Exposed}]+E[r(d,t)|d\in\text{ Control}]. \tag{2}\]
Where \(E[x|y]\) is the conditional expectation of \(x\) with respect to \(y\), the first moment of the response statistics, or lift, is a comparison between expectations. Considering that the control response is negative, lift is a difference in expectations.1 First, we must consider correlation. Second, expectations are computed by averages (i.e., if we use bootstraps, several samples of averages). Equation 2 is the foundation of most comparative analysis in social science: ours is yet another social experiment and it can involve millions of devices and people across the country. In the following, we set the notations and definitions, and we do our best in putting Equation 2 on a clear mathematical footing. Our main goal is the analysis of the statistics \(r(d,t)\), for which we provide a clear and complete characterization. We will dwell also on the lift statistics, which is the average of the original statistics.
Footnote 1: An explicit difference \(E[r_{e}]-E[r_{c}]\) is more common in the literature, as we report in the following.
We start with the definition of an **impression**\(\iota(d_{i},t,\ell)\): an impression has a device identification number \(d_{i}\), it has a time stamp \(t\) in seconds, and it may have a geographical location as \(\ell=(latitude,longitude)\). In practice, an impression represents when a device is exposed to an advert and possibly where this event happened. We may distinguish two scenarios:
1. The impression belongs to a campaign whose performance we want to measure; thus, the time is used to specify the first time a device is **exposed**; and
2. The impression has location \(\ell\), we measure how close this device is to our location of interest.
Intuitively, a campaign is a set of adverts delivered by different means: mobile web, _apps_, sometimes using conventional digital advertising such as websites' banners, and sometimes by TV commercials.
An exposed device has a _first time seen_. We can define it also for a control device: it is when we have the first recorded impression during the campaign. Thus, all impressions after the first time seen can be used for the measure of performance.
For us, the _goal of a campaign_ is to invite the owner of a device, a user, to visit a set of **locations**. These locations are the likes of coffee shops, department stores, or movie-theater show rooms. Here, we define a location \(l_{i}\) by its geographical location \((lat,lon)\) and a campaign location set as \(L_{C}\).
The first connection between impressions, locations, and campaign performance is by the definition of a **hit**: a hit is an impression with the following properties:
\[h(d_{i},t,\ell)=\begin{cases}1&\text{if }\exists l_{i}\in L_{C},\exists K>0, \exists\|.\|:\ell\times\ell\to\mathbb{R},\text{ s.t. }\|\ell,l_{i}\|<K\\ 0&\text{otherwise}\end{cases} \tag{3}\]
A hit is an impression arbitrarily close to one location. We intend to use the \(L_{2}\) norm, thus the hint in the notation \(\|.\|\), and it is a distance function. Having a distance function is general enough that, if we like, we can reduce the distance to a zero-one function, as to belong or not to any polygon describing the outside boundary of a building. Eventually, an altitude will be part of any geographical location and more interesting distances will be used.
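A sketch of the hit test of Equation 3 (ours; the haversine formula stands in for the \(L_{2}\)-style geodesic distance, and the radius \(K\) is in meters):

```python
# Hit test of Equation 3 with a haversine distance; K is the radius in meters.
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_M = 6371000.0   # mean earth radius -- itself an approximation

def haversine_m(p, q):
    lat1, lon1, lat2, lon2 = map(radians, (*p, *q))
    h = sin((lat2 - lat1)/2)**2 + cos(lat1)*cos(lat2)*sin((lon2 - lon1)/2)**2
    return 2 * EARTH_RADIUS_M * asin(sqrt(h))

def hit(impression_loc, campaign_locations, K=100.0):
    return int(any(haversine_m(impression_loc, l) < K
                   for l in campaign_locations))
```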
We shall digress a little by introducing a few important variations enriching the definition of a hit. This is not required for the understanding of our approach and it can be skipped at first reading. After this digression we shall define a visit in Equation 6 on page 6.
**Deterministic and stochastic hits.** In Equation 3, we say that an impression is a hit if it is within a radius from our location of interest. The radius is an arbitrary choice and it represents a meaningful area. This choice is made a priori, before the campaign starts, and by the client. This is the first property of being a hit and thus a visit.
Sometimes, the location is in a private parcel. This natural area is described by a polygon or set of vertices. Once the parcel is identified, any impression in the parcel can be a hit. This definition is extremely useful for investigating locations with large real estate: that is, where the parking lot is as important as the inside of the facilities. For example, this simple mechanism can actually double the signal coming from hits and thus enhance the experiment. We can envision several combinations between the radius- and parcel-based hits; these are beyond this short digression.
There is a third type of hit that is worth considering: the stochastic hit. A hit with a range in \([0,1]\) can be liberating because there are so many uncertainties that may contribute to a different measure of distance between a location and an impression. If we accept that the latitude and longitude of an impression inherit an error, if we accept that our estimation of the earth radius is an approximation that varies as a function of the longitude, if we accept that we may have just a sample of the impressions available, and eventually that the users may have behavioral idiosyncrasies in their use of the mobile phone: if we accept all of the above, then we can move to a stochastic measure in order to relax the _zero-one_ hit condition.
This can be an error in measurement; thus we could use a normal distribution: we could argue that the distance is
\[d+\epsilon\text{ where }\epsilon\in\mathcal{N}(0,\sigma_{\ell})\]
Such an error could be used to create a gray area around the radius of interest and thus smooth the hit response function accordingly.
Another point of view is to consider the device path (i.e., the record of impressions) as a stochastic process. A user visiting a location must be entering the location and then leaving. If we measure an impression just outside of the interest radius, what could be the probability that the device will step in unnoticed, or that it has stepped out unnoticed?
With major arguments to be proven, a Brownian motion with a drift describes our scenario nicely: we say that for the impressions just outside the radius, their distance from the location is like an inverse Gaussian distribution.
\[p_{x}(x|\mu,\lambda)=\sqrt{\frac{\lambda}{2\pi x^{3}}}\exp\left(-\frac{\lambda(x-\mu)^{2}}{2\mu^{2}x}\right) \tag{4}\]
For example, we take a location and for all impressions with distance less than three time the radius, we compute the average and variance
\[\hat{\mu}=\frac{1}{N}\sum_{i=1}^{N}d_{i}\quad\text{and}\quad S^{2}=\frac{1}{N-1}\sum_{i=1}^{N}(d_{i}-\hat{\mu})^{2}\]
Thus we could describe our process by
\[\hat{d}\in IG\left(\hat{\mu},\frac{\hat{\mu}^{3}}{S^{2}}\right)\]
and a stochastic hit is based on a survival function
\[sh(d_{i},t,\ell)=\begin{cases}1-CDF_{d}(d_{i})&\text{if }R<d_{i}\leq R+\frac{R}{2}\\ 0&\text{otherwise}\end{cases} \tag{5}\]
As for the parcel, stochastic hits are an enrichment of the original definition of hits; we let the data help describe the hits in conjunction with external and accepted definitions. We shall present experimental results and further discussions about this type of visit in Section 5.4.
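A sketch of the stochastic hit of Equations 4-5 (ours), with the method-of-moments fit described above; note that scipy parameterizes the inverse Gaussian so that `invgauss(m, scale=s)` corresponds to \(IG(ms,s)\):

```python
# Stochastic hit of eqs. (4)-(5): moment-matched inverse Gaussian survival
# probability for impressions in the band (R, 1.5 R].
import numpy as np
from scipy.stats import invgauss

def stochastic_hit(distance, nearby_distances, R):
    d = np.asarray(nearby_distances)      # e.g., impressions within 3R
    mu_hat = d.mean()
    lam_hat = mu_hat**3 / d.var(ddof=1)   # IG(mu, lambda) moment match
    if distance <= R:
        return 1.0
    if distance <= 1.5 * R:
        return float(invgauss.sf(distance, mu_hat/lam_hat, scale=lam_hat))
    return 0.0
```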
We conclude here our digression and now we can define a visit from the definition of hits. In Equation 6, we may choose to lump multiple hits in an interval of time into a single hit that we call **visit**:
\[v(d_{i},t,\Delta t)=\begin{cases}1&\text{if }\exists t_{i}\in[t,t+\Delta t],i \in[0,N>0],\exists H>0,\exists\{\ell_{m}\}\\ &\textbf{s.t. }\sum_{j=0}^{N-1}h(d_{i},t_{j},\ell_{m})>H\\ 0&\text{otherwise}\end{cases} \tag{6}\]
A visit is a function of a device, the locations in \(L_{C}\) where we have a hit, and the interval of time \(\Delta t\). Such interval can be arbitrary, for us it can be up to one day.
Now consider the flight of a campaign as a segment on a straight line: there is a beginning and there is an end. A day is a single tick on the segment and we have a discrete set of intervals or epochs \(e_{j}\):
\[V(d_{i},e_{j})=\sum_{k}v(d_{i},t_{k},\Delta t_{k})\text{ where }t_{k},t_{k}+\Delta t_{k}\in[e_{j},e_{j+1}) \tag{7}\]
Equation 7 represents the **visits per day** for the device \(d_{i}\) in the day specified by epoch \(e_{j}\). This is a time series. Given any epoch in this time line, we may want to compute a discrete differential function to determine the degree of increase or decrease. We call this the **response** and we use an antisymmetric weight function:
\[w_{k}=-w_{-(1+k)},\forall k\geq 0 \tag{8}\]
where
\[\begin{cases}w_{k}=e^{-k}&k\in[0,M-1]\\ w_{k}=-e^{k+1}&k\in[-M,-1]\end{cases} \tag{9}\]
such as a modified Laplace function to weight accordingly visits at different times.
\[\Delta_{M}r(d_{i},e_{j})=\sum_{k=-M}^{M-1}V(d_{i},e_{j+k})*w_{k} \tag{10}\]
The weight function is antisymmetric and the response \(\Delta_{M}r(d_{i},e_{j})\) follows. The present epoch \(e_{j}\) has maximum weight \(w_{0}\sim 1\), as well as the previous-day epoch \(e_{j-1}\) (\(w_{-1}\sim-1\)). At a minimum, \(e_{j-1}\) and \(e_{j}\) summarize the discrete difference of the visit rate; such a weight function will help to smooth the response function. In practice, we introduce the concept that a visit can belong to the interval \([0,1]\). Furthermore, not every user has a response, or visits for that matter, at epoch \(e_{j}\). For example, if a device \(d_{i}\) has its first-time-seen epoch after \(e_{j}\), it will not contribute. If the last impression is before the epoch \(e_{j-M}\), it will not contribute either.
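A sketch of Equations 8-10 (ours): the weights and the response of a single device, given its visits-per-day array indexed by epoch (the window assumes \(M\le j\le T-M\)):

```python
# Weights of eqs. (8)-(9) and the response of eq. (10) for one device.
import numpy as np

def weights(M):
    k = np.arange(-M, M)                          # k = -M, ..., M-1
    return np.where(k >= 0, np.exp(-k), -np.exp(k + 1))

def response(V, j, M):
    """Delta_M r(d, e_j) for a visits-per-day array V indexed by epoch."""
    window = V[j - M : j + M]                     # epochs e_{j-M} .. e_{j+M-1}
    return float(np.dot(window, weights(M)))
```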
Given a campaign \(C\) with its locations \(l_{C}\), the flight beginning \(e_{1}\) and ending \(e_{T}\) with \(T>1\), and a set \(D\) of devices \(d_{i}\) with \(i\in[1,N]\), we may want to measure the performance of the campaign by the expectation of the response, that is, the average acceleration of visits. A clear caveat is that not all devices are active at every epoch, and thus not all contribute to the response: for each epoch \(i\), only \(N_{i}\) of the \(N\) devices contribute to the true acceleration. If we keep \(M\) constant, we can omit the \(\Delta_{M}\):
\[E[r(d,e)|(D,T)]\sim\frac{1}{T}\sum_{i=1}^{T}\Big{[}\frac{1}{N_{i}}\sum_{j=1}^{ N_{i}}\Delta_{M}r(d_{j},e_{i})\Big{]} \tag{11}\]
Thus, we can finally express our performance measure: We identify the **exposed group**, the set of devices that have been exposed to the campaign adverts by **E**, we find a suitable and comparable **control group** that we identify by **C**. Then the quality of the campaign is summarized by the following expectation that we call **lift**:
\[Lift=E[r(d,e)|(E,T)]-E[r(d,e)|(C,T)] \tag{12}\]
The process summarized in Equation 12 explains our original definition of lift in Equation 2; in the latter we show that time is common to both expectations and the negative contribution of control is explicit, because the response is computed for exposed and control separately. Our formulation can be restated with few modifications if users receive multiple treatments, and thus different responses, at different times.
**Practical considerations.** The lift as an expectation is powerful, but it does not resonate with those who design the campaign. The sign is easy to understand: the effect is either positive or negative. Often we transform the lift into a relative number, a percentage:
\[Lift_{p}=100*\frac{E[r(d,e)|(E,T)]-E[r(d,e)|(C,T)]}{E[r(d,e)|(C,T)]} \tag{13}\]
This is palatable because it provides a measure relative to a reference baseline (i.e., control). Unfortunately, for small control responses, the relative measure can be unbounded: a small campaign can appear unreasonably profitable or unprofitable, and with large variance. In general, the measure that appeals the most is a relative measure based on the number of visits. For example, this is a simple normalization that we use often:
\[Lift_{v}\sim 100*\frac{lift*N*time}{\text{total visits}} \tag{14}\]
This is a relative measure of the extra visits provided by the campaign. We will hide under the hood which computation we use. In the experimental section, we show relative lift (always based on the expectation) to appeal to practitioners and for presentation purposes: a projected measure in the range \([-100,+100]\) is more _pleasant_ than the raw measure in the range \([-10^{-6},10^{-6}]\).
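For concreteness, a minimal R sketch of Equations 12-13 with hypothetical per-device responses; it also shows why the percentage form can explode when the control expectation is small.

```
lift_abs <- function(rE, rC) mean(rE) - mean(rC)                 # Equation 12
lift_pct <- function(rE, rC) 100 * lift_abs(rE, rC) / mean(rC)   # Equation 13

rE <- c(0.10, 0.00, 0.30, 0.05)  # responses of exposed devices
rC <- c(0.05, 0.00, 0.10, 0.05)  # responses of control devices
lift_abs(rE, rC)   # 0.0625
lift_pct(rE, rC)   # 125% relative to control; unbounded when mean(rC) is small
```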
### Exposure effect: First Time of Exposure and Balanced Control
By construction, we know the first time we expose a device. This is analogous to the start of treatment in a clinical trial, and the response value \(\Delta_{M}r(d_{i},e_{j})\) when \(e_{j}\) is the first-time-seen epoch (Equation 10) is clearly important. In practice, \(e_{j}\) is completely defined by the device and we write \(\xi(d_{j})=e_{j}\). With a proper choice of the interval \(M\) and weight function, an exposed device's contribution is limited to the interval \([e_{j-M},e_{j+M}]\). Thus Equation 11 can be simplified to
\[E[r|E]\sim\frac{1}{N}\sum_{j=1}^{N}\Delta_{M}r(d_{j},\xi(d_{j}))=\frac{1}{N}\sum_{j=1}^{N}\Delta_{M}r(d_{j})\]
Even though it is not strictly necessary, for symmetry and for computational convenience we should find a similar epoch for the control devices so that we can compute \(E[r|C]\) in a similar fashion. We can infer an epoch when a control device could have been exposed but was not. Such a first-time-seen is assigned in correlation with the exposed devices, and the computation then carries over. This is easier if the exposed-control set is balanced, because we can use random sampling to infer an average response for an otherwise arbitrary choice of epoch.
\[E[r|E]-E[r|C]\sim\frac{1}{N}\sum_{j=1}^{N}\Delta_{M}r(d_{j},\xi(d_{j})|E)-\frac{1}{N}\sum_{j=1}^{N}\Delta_{M}r(d_{j},\xi(d_{j})|C) \tag{15}\]
\[=\frac{1}{N}\sum_{j=1}^{N}\big(\Delta_{M}r(e_{j})-\Delta_{M}r(c_{j})\big) \tag{16}\]
\[\sim E[r_{e}-r_{c}] \tag{17}\]
This formulation and computation are appealing for a few important reasons.
1. The response statistic \(r_{e}-r_{c}\) is intuitive and computationally appealing; it describes the original problem clearly.
2. If the control set is independent of the exposed set and we have a good estimate of the first time seen for both, this is a well formulated experiment. We can then apply matching to remove targeting bias and compare performance by slicing the data accordingly; we can compare the original lift with the matched lift.
3. The computation of the lift as an average is a sound estimate of the expectation and thus of the campaign lift.
4. Interesting constraints can be applied to exposed and control together: for example, we could count only visits that happen in a short interval after exposure and thus reject long-term effects.
5. It is as if 1-1 matching were already applied.
6. The lift measure is an expectation of the difference of two functions rather than the difference of two expectations; thus a single variance can be associated with it instead of two variances and their correlation.
From a different perspective, these advantages are hindrances. For example, losing the relation between visits and epochs may be too much of a simplification. We may want to compute explicitly:
\[\frac{1}{N_{i}}\sum_{j=1}^{N_{i}}\Delta_{M}r(d_{j},e_{i}) \tag{18}\]
In Equation 18, we obtain a time series of the visit acceleration, showing how effective a campaign is at the granularity of epoch \(e_{i}\). Also, exposure should have long-term effects, because exposure is a continuous process limited only by a frequency cap.
The estimate of the first time seen for control devices is a fragile process, especially for long campaigns and large control sets: forcing a control user to be measured at an arbitrary date may not represent its average behavior. Lift as an expectation with a single variance is very powerful, but it comes with the condition that the implicit pairing in Equation 17 is proper; matching comes only after this pairing, which could be too late.
Nonetheless, we shall show experimental results using this balanced approach.
### Unbalanced Control: A visit-based choice
Let us repeat the goal of a campaign: it is an invitation to visit a set of locations, and the exposed group is given. However, just _seeing_ a location is another invitation to visit, directed at anyone nearby. Clearly, a close distance is of the utmost importance for both exposed and control, and this is obviously true for national campaigns for local enterprises: for example, _Blue Bottle Coffee_ has locations in Tokyo, San Francisco, Palo Alto, Oakland, and New York, and a connoisseur may visit all of them during the span of a week, but someone living in Los Angeles is neither a great target for exposure nor a great example for control.
First, we can draw a circle of one mile around a coffee shop and suggest that the people in this circle will visit our location. Considering that whoever visits will eventually be in the circle, this selects a strong group, and part of our exposed group will be in it. The connoisseurs will be there (if we have impressions for them, yes, even in Tokyo), but also any passer-by.
Given this exposed-control set, we can compute the lift for the exposed and for the control. These sets will contain all the visits, but they will not represent the whole exposed population and will not have enough similarities to be compared with each other (i.e., matched). We may have to increase the circle to two miles to capture twice as many exposed and twice as many control. Notice that control will be much larger than exposed unless we saturate the area and the population (economically not smart). We need enough users to do a meaningful matching and thus remove any bias in the sample: that is, most exposed can be matched with control and vice versa.
## 3 Audience specification: User profile and location graph
In the following section, we describe an original and deterministic approach to create a feature space describing any user and any location by a set of keywords; these are key-value pairs where the values are probabilities in the continuous interval \([0,1]\). For example, we may introduce a probability for a user to be _female_. We shall explain our inference process, but intuitively a user visiting a location will inherit some of the features of the location and, in turn, the location will do the same. First, a business location attracts an audience; second, its neighbors inherit some of this audience; third, a lot or a few people may visit a business location; and fourth, whoever visits also composes the audience of the business. All these interactions affect the features of both businesses and visitors. We start by describing the location graph in Section 3.1 and then the user profile in Section 3.6.
### Location Graph
In general, a triplet \((pid_{i},lat_{i},lon_{i})\) defines a location \(l_{i}\), which may represent either a business location or a census-based location. Given two locations, we can compute different types of distances: the Haversine formula, Manhattan distance, or another \(L_{1}\) variant. Regardless of the choice, we denote the distance between two locations as \(d_{R}(l_{i},l_{j})\). We use an estimate of the Earth's radius in line with the literature and, of course, the smaller the real distance, the better the approximation.
Consider a \(\mathbf{cell}_{i,j}\) defined by the points \((lat_{i},lon_{j})\) and \((lat_{i}+\Delta x,lon_{j}+\Delta y)\), where \(\Delta x\) and \(\Delta y\) are arbitrary positive constants; these two corners describe a rectangular cell. A point \((lat,lon)\) belongs to \(\mathbf{cell}_{i,j}\) when
\[lat_{i}\leq lat<lat_{i}+\Delta x\ \mathbf{and}\ lon_{j}\leq lon<lon_{j}+\Delta y.\]
Assume we can draw a cell **partition** for any geographical area (i.e., the United States): that is, any location is in one cell and any two cells have empty intersection. Any \(\mathbf{cell}_{i,j}\) has eight neighbors
specified by the following points, listed counter-clockwise, \(\mathcal{N}(\mathbf{cell}_{i,j})\):
\[\begin{array}{ll}\mathbf{cell}_{i-1,j-1}&=(lat_{i}-\Delta x,lon_{j}-\Delta y),\\ \mathbf{cell}_{i-1,j}&=(lat_{i}-\Delta x,lon_{j}),\\ \mathbf{cell}_{i-1,j+1}&=(lat_{i}-\Delta x,lon_{j}+\Delta y),\\ \mathbf{cell}_{i,j+1}&=(lat_{i},lon_{j}+\Delta y),\\ \mathbf{cell}_{i+1,j+1}&=(lat_{i}+\Delta x,lon_{j}+\Delta y),\\ \mathbf{cell}_{i+1,j}&=(lat_{i}+\Delta x,lon_{j}),\\ \mathbf{cell}_{i+1,j-1}&=(lat_{i}+\Delta x,lon_{j}-\Delta y),\\ \mathbf{cell}_{i,j-1}&=(lat_{i},lon_{j}-\Delta y).\end{array}\]
With the concept of distance, we know that any two locations \(l_{m}\) and \(l_{n}\) in \(\mathbf{cell}_{i,j}\) must satisfy \(0\leq d_{R}(l_{m},l_{n})\leq d_{R}((lat_{i},lon_{j}),(lat_{i}+\Delta x,lon_{j}+\Delta y))=D\). Thus if a location in \(\mathbf{cell}_{i,j}\) has a neighbor at distance no farther than \(D\), that neighbor has to be in the cell itself or in one of the 8 neighbor cells above. Now that we have defined distances and cell partitions, we can define and compute a geographically distributed **location graph**.
Assume we set \(\Delta x=\Delta y=\Delta\) and we have a cell partition. For any cell \(\mathbf{cell}_{i,j}\), we collect all the locations in \(\mathbf{cell}_{i,j}\cup\mathcal{N}(\mathbf{cell}_{i,j})\). For every location \(l_{m}\in\mathbf{cell}_{i,j}\), we compute the distance to every other location \(l_{n}\neq l_{m}\in\mathbf{cell}_{i,j}\cup\mathcal{N}(\mathbf{cell}_{i,j})\). We then create a node in a graph associated with \(l_{m}\), where we store all the neighbor information and the distances (edges) whenever the distance is less than, say, \(D/2\). The graph has a geographical key \(\mathbf{cell}_{i,j}\), and each location has information about its first-degree neighbors, which must be in \(\mathbf{cell}_{i,j}\cup\mathcal{N}(\mathbf{cell}_{i,j})\). In Figure 1, we show an example of a location graph. The cell partition describes a grid, and we shall introduce grid algorithms. In the following section, we briefly explain the graph-building computation, which is the basis for all our graph computations.
### Location Graph: Distance Computation
For simplicity, we have a function that maps any location \(l_{m}\) to exactly one cell,
\[J(l_{m})=\mathbf{cell}_{i,j},\]
and given a \(\mathbf{cell}_{i,j}\) we can compute \(\mathcal{N}(\mathbf{cell}_{i,j})\). The computation of the location graph follows and it is the foundation for any graph algorithms in this work:
Figure 1: Example of a geographically distributed location graph
**Broadcast:** For every location \(l_{m}\) in the graph, we compute the cells \(\mathcal{N}(J(l_{m}))\) and broadcast \(l_{m}\) to them. In practice, the location \(l_{m}\) is associated with a node in the graph.
**Computation:** For every cell \(\textbf{cell}_{i,j}\) and for every pair \(l_{s}\neq l_{t}\) in the cell, we compute all the distances \(d_{R}(l_{s},l_{t})\) and create an edge between the nodes labeled with their distance.
**Reduce:** For every cell \(\textbf{cell}_{i,j}\), we store the nodes \(l_{s}\) such that \(\textbf{cell}_{i,j}=J(l_{s})\).
The location graph is a hierarchical graph: a cell describes a circumscribed area where the graph has locations with connections within the cell and, possibly, only to its neighboring cells. A location \(l_{i}=(pid_{i},lat_{i},lon_{i})\) identifies all its direct neighbors \(\pi(l_{i})\), its own cell \(J(l_{i})\), and the neighbors' cells \(\mathcal{N}(J(l_{i}))\). If a location also has an altitude, think of Dubai's tower, the distance can be enhanced to distinguish locations beyond latitude and longitude, but nothing changes about cells and neighbors.
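The following single-machine R sketch illustrates the Broadcast/Computation/Reduce construction above: locations are assigned to cells, distances are computed only within a cell and its neighbors, and edges below a threshold are kept. The locations, the cell side Delta, and the 500-meter threshold are hypothetical.

```
# Haversine distance in meters
hav <- function(lat1, lon1, lat2, lon2, R = 6371000) {
  toRad <- pi / 180
  dlat <- (lat2 - lat1) * toRad
  dlon <- (lon2 - lon1) * toRad
  a <- sin(dlat / 2)^2 + cos(lat1 * toRad) * cos(lat2 * toRad) * sin(dlon / 2)^2
  2 * R * asin(sqrt(pmin(1, a)))
}

locs <- data.frame(pid = 1:4,
                   lat = c(37.776, 37.777, 37.790, 40.712),
                   lon = c(-122.42, -122.42, -122.40, -74.006))
Delta <- 0.01                       # cell side in degrees
ci <- floor(locs$lat / Delta)       # cell row index
cj <- floor(locs$lon / Delta)       # cell column index

cand <- expand.grid(i = 1:nrow(locs), j = 1:nrow(locs))
cand <- cand[cand$i < cand$j, ]     # each unordered pair once
# Broadcast/Computation: compare only same-cell or neighbor-cell pairs
near <- abs(ci[cand$i] - ci[cand$j]) <= 1 & abs(cj[cand$i] - cj[cand$j]) <= 1
cand <- cand[near, ]
cand$d <- hav(locs$lat[cand$i], locs$lon[cand$i],
              locs$lat[cand$j], locs$lon[cand$j])
edges <- cand[cand$d < 500, ]       # Reduce: keep edges below the threshold
edges
```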
### Location Graph: Keywords
We are now ready to delve into keywords and their notation. Consider a location \(l_{j}\), where we enumerate the locations by an integer \(\ell\in[1,J]\). In the same way, we enumerate the keywords by an integer \(k\in[1,K]\). A keyword is associated with a step \(s\); this has two meanings. First, \(s\) represents the time in a time series; second, at any step \(s\) we perform a computation among direct neighbors only: from step \(s-1\) to step \(s\), each step propagates keyword values in the graph along direct connections, so repeated steps propagate keywords to two-level connected neighbors, and so on. The last and important identifier of a keyword is the category the feature is derived from; we enumerate the categories by \(c\in[1,3]\): first, from the direct neighbors; second, weighted by the number of visits; and third, from the keywords of the visitors. We shall clarify these distinctions as soon as we express the keywords and their applications. In short, we can represent any keyword by
\[{}^{c}_{k}w_{\ell}^{s}.\]
As reinforcement, we have: \(\ell\) location, \(s\) step, \(k\) keyword and \(c\) category.
Using matrix notation for each location \(\ell\), our computation is an exponential smoothing; we have \(\mathbf{W}_{\ell}^{s}\in\mathbb{R}^{3\times K}\):
\[\mathbf{W}_{\ell}^{s}=\left[\begin{array}{c}\mathbf{n}_{\ell}^{s}\\ \mathbf{v}_{\ell}^{s}\\ \mathbf{u}_{\ell}^{s}\end{array}\right]\Leftarrow\mathbf{\Lambda}\oplus(\mathbf{W}_{\ell}^{s-1})+(\mathbf{I}-\mathbf{\Lambda})\oplus\varphi(\mathbb{X}) \tag{19}\]
We shall specify each term by unfolding the matrix notation completely (including the uncommon symbol \(\oplus\)) and shed light on the use of exponential smoothing.
#### 3.3.1 Neighbors \(c=1\)
In real estate, the location and its neighbors are key to any property value. Given a location \(\ell\) with its \(|\pi(\ell)|=N_{\ell}\) neighbors at step \(s\), we compute the updated keyword values by the following exponential smoothing:
\[\mathbf{n}_{\ell}^{s}=\mathbf{\lambda}*\mathbf{n}_{\ell}^{s-1}+\frac{(\mathbf{1}-\mathbf{\lambda} )D_{\ell}}{N_{\ell}}*\sum_{j\in\pi(\ell)}\frac{\mathbf{n}_{j}^{s-1}}{1+d(j,\ell)} \tag{20}\]
Where we have
\[\mathbf{a}*\mathbf{b}=[a_{1}b_{1},a_{2}b_{2},..,a_{K}b_{K}]^{t},\]
we identify the transpose of a vector \(\mathbf{v}\) with the usual superscript \(\mathbf{v}^{t}\), the scalar by vector follows the usual rules
\[a\mathbf{v}=[av_{1},av_{2},..,av_{K}]^{t},\]
\[(\mathbf{1}-\mathbf{\lambda})=[1-\lambda_{1},..,1-\lambda_{K}]^{t}\]
\[D_{\ell}=\sum_{j\in\pi(\ell)}\frac{1}{1+d(j,\ell)}\]
and \(d()\) is our distance function.
The notation tries to keep an intuitive format, but we understand the difficulties associated with it. Here is our interpretation of Equation 20. We compute the average of the keywords of the neighbors; each neighbor contributes as a linearly decreasing function of its distance from the location: the closer the distance, the larger the contribution. Intuitively, the weight is the inverse of a cost, and the cost is the time required to go from one location to the other; we assume the neighbors provide information to the location \(\ell\). If \(\ell\) has no other source and the neighbor keywords do not change, then at steady state \(\mathbf{n}_{\ell}\) will converge to the average of its neighbors.
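A literal R rendering of one smoothing step of Equation 20 may help; here λ is a scalar, whereas in the text it is a per-keyword vector, and the keyword values and distances are hypothetical.

```
update_neighbors <- function(n_prev, Nmat, dists, lambda = 0.5) {
  # n_prev: K-vector of the location's keywords at step s-1
  # Nmat:   one row per neighbor, K columns of neighbor keywords at step s-1
  # dists:  distances d(j, l) to each neighbor
  w <- 1 / (1 + dists)                 # decreasing function of distance
  D <- sum(w)                          # D_l of the text
  lambda * n_prev +
    (1 - lambda) * (D / nrow(Nmat)) * colSums(Nmat * w)
}

n_prev <- c(female = 0.5, coffee = 0.2)    # hypothetical keyword vector
Nmat   <- rbind(c(0.8, 0.1), c(0.4, 0.6))  # two neighbors
update_neighbors(n_prev, Nmat, dists = c(0.2, 0.5))
```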
#### 3.3.2 By number of visits, \(c=2\)
Given a location \(\ell\), we can compute the number of visits to this location between \(s{-}1\) and \(s\); we identify this count by \(V^{s}_{\ell}\). Thus we express the update as follows:
\[\mathbf{v}^{s}_{\ell}=\frac{V^{s}_{\ell}}{\Upsilon_{\ell}}(\mathbf{\mu}*\mathbf{v}^{s-1}_{ \ell})+\frac{(\mathbf{1}-\mathbf{\mu})}{N_{\ell}}*\sum_{j\in\pi(\ell)}\frac{V^{s}_{j}} {\Upsilon_{\ell}}\mathbf{v}^{s-1}_{j} \tag{21}\]
Where \(\Upsilon_{\ell}\) is \(\sum_{j\in\pi(\ell)}V^{s}_{j}\). In practice, a location with a lot of visits will dominate the surrounding audience. By construction, neighboring locations are relatively close and thus inherit the audience of these hot spots. In practice, some businesses place themselves in close proximity to others precisely to gain access to their audiences.
#### 3.3.3 By visit types, \(c=3\)
Assume we account for the keywords of the visitors of location \(\ell\): we build a distribution of the visitor keywords from epoch \(s{-}1\) to \(s\), which we denote \(\bar{\mathbf{u}}_{\ell}\) to show it is related to the keyword computation, and we count the number of visitors \(V^{s}_{\ell}\) as above.
\[\mathbf{u}^{s}_{\ell}=\mathbf{\nu}*\mathbf{u}^{s-1}_{\ell}+\frac{(\mathbf{1}-\mathbf{\nu})}{\Upsilon_{\ell}(N_{\ell}+1)}*\sum_{j\in\pi(\ell)\cup\ell}V^{s}_{j}\bar{\mathbf{u}}^{s}_{j} \tag{22}\]
The computation by visit type combines the traffic of all direct neighbors and creates a weighted average; for example, we can emphasize the direct visitors instead of the neighbors'. This is one approach to harness the keywords associated with users: it is independent of the graph and creates a ripple effect from otherwise sparse and rare events such as visits.
#### 3.3.4 All together
We foresee other dimensions and categories being added to the previous ones, but here we explain how we summarize the keyword weights. In summary, \(\mathbf{n}^{s}\) represents the graph contribution and changes only when the graph changes; \(\mathbf{v}^{s}\) introduces the idea that neighbors contribute differently as a function of the number of visitors; \(\mathbf{u}^{s}\) represents the contribution of each visitor to each neighboring location.
We combine them as independent contributions:
\[\mathbf{w}^{s}_{\ell}=\mathbf{\Gamma}^{t}\left[\begin{array}{cc}\mathbf{n}^{s}_{l}&\bm {v}^{s}_{l}&\mathbf{u}^{s}_{l}\end{array}\right]=\mathbf{\gamma}^{n}*\mathbf{n}^{s}_{l}+ \mathbf{\gamma}^{v}*\mathbf{v}^{s}_{l}+\mathbf{\gamma}^{u}*\mathbf{u}^{s}_{l} \tag{23}\]
where
\[\gamma^{n}_{k}=\begin{cases}0&[\mathbf{n}^{s}_{l}]_{k}=0\\ 1/\big(\text{number of non-zero entries among }[\mathbf{n}^{s}_{l}]_{k},[\mathbf{v}^{s}_{l}]_{k},[\mathbf{u}^{s}_{l}]_{k}\big)&\text{otherwise}\end{cases}\]
In practice, we compute a simple average for each keyword, but if any dimension provides no entry for a keyword, that is a zero, we do not account for it. In this way, when we add more dimensions, they will enrich the keyword _density_ without inflating the keyword values.
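A small R sketch of this zero-aware average (Equation 23 with equal γ weights) with hypothetical keyword vectors:

```
combine_keywords <- function(n, v, u) {
  W  <- rbind(n, v, u)          # the three categories of Section 3.3
  nz <- colSums(W != 0)         # how many categories contribute to each keyword
  colSums(W) / pmax(nz, 1)      # average over contributing categories only
}

combine_keywords(n = c(0.6, 0.0), v = c(0.2, 0.0), u = c(0.4, 0.3))
# 0.4 for the first keyword, 0.3 for the second (one contributor only)
```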
#### 3.3.5 Priors, distributions, and probabilities
Sometimes a business knows the audience it has, or must have. This is a priori information, or simply a prior. At first it may seem difficult to envision businesses with a specific and limited audience; however, in practice, there are gender-specific services such as Obstetrics and Gynecology, age-specific businesses such as liquor stores, income-specific ZIP codes (the Upper East Side), and businesses such as Ferrari dealers.
In this prior category we also include averages: for example, by Census or other means, we know the _averages_ about the population of a ZIP code (thus a location or locations). Such averages help narrow down the most likely features. In fact, we use probabilities as estimates of the feature distribution and use these as prior information. For example, we may have information about a ZIP+4 area, which we represent by its centroid latitude and longitude; the locations in this ZIP will be either connected directly to it or a few steps away, and thus the ZIP will enrich their keywords by proximity. Of course, collapsing an area into a single node in the location graph is an approximation, but it is a simple way to feed the graph with priors otherwise unavailable.
Prior features are set and need no further computation. There is only one exception, introduced to cope with imprecise and erroneous data: we normalize the keyword values in order to keep meaningful distributions, which may require rescaling them.
### Location Graph: Update
Once we have a location graph with its features, we may have to rebuild the graph because businesses relocate, new businesses start, and old businesses exit.
In the graph, a business exiting translates into a node deletion and thus edge deletions to all its neighbors. Any modification to the list of locations requires a graph update. In this scenario, the new graph inherits the keywords from the old graph, but we must propagate the effect of the new connections by updating the keywords according to Equation 19.
### Location Graph: Iterative Algorithm and cycles
We can iterate the computation of keywords in order to collect signal from locations farther away in the graph, or simply to adjust the keyword values after graph modifications.
**Broadcast:** For every location \(l_{m}\) in the graph, we determine \(\mathcal{N}(J(l_{m}))\) and broadcast \(l_{m}\).
**Computation:** For every cell \(\mathbf{cell}_{i,j}\) and for every \(l_{s}\) in the cell, we compute Equations 19-23.
**Reduce:** For every \(\mathbf{cell}_{i,j}\), we store the nodes \(l_{s}\) such that \(\mathbf{cell}_{i,j}=J(l_{s})\).
This iterative algorithm propagates the keyword values across the graph. In this scenario, the exponential smoothing introduces yet another dimension: it reduces, edge by edge, the effect of cycles in the graph. One-degree neighbors contribute \((1-\lambda)\), two-degree neighbors contribute \((1-\lambda)^{2}\) (i.e., \(i\)-degree neighbors contribute \((1-\lambda)^{i}\)). In practice \((1-\lambda)\sim 0.5\), so the effect decays as \(\frac{1}{2^{n}}\): after three levels the contribution is negligible with respect to the local node and its direct neighbors. Nonetheless, the smoothing can be tuned per keyword and per dimension.
### User Profile
In the previous section, we showed that if we have information about the users visiting a location, we can enrich the location and its neighbors. In this section, we show that we can also take the location information and enrich its visitors.
Assume we already have a set of users \(\mathbf{V}^{s-1}\) at step \(s-1\) with their keywords and we want to compute the next step. We enumerate the users by \(n\in[1,N]\) and describe them by the same keywords as the location graph: \(\mathbf{v}_{n}\) is a vector of keywords. We have at our disposal the following information: priors (\(\mathbf{P}^{s}\)), visits, and graph locations. The user-profile computation has the following formulation:
\[\mathbf{V}^{s}=\mathbf{\rho}*\mathbb{F}(\mathbf{V}^{s-1},\mathbf{P}^{s})+(\mathbf{1}-\mathbf{\rho})*\mathbf{X}^{s}. \tag{24}\]
We shall expand and clarify all the terms and how we compute visits (briefly explained and used in Equation 7). The order of the computation matters, and the evaluation goes naturally from left to right, because priors have priority.
#### 3.6.1 Visits and location graph \(\boldsymbol{X}^{s}\)
From step \(s-1\) to \(s\) we gather the footprint of the users. Listening to real-time bidding, we observe a sample of impressions for an advert: geographical location, time, and the keywords associated with the user; we can also collect similar information from third-party pixels. These two sources do not overlap. The interval from \(s-1\) to \(s\) represents one week of time. This is a geographical distribution of impressions: \(G^{s}\) for short.
We can gather all the user information, average the keywords over the different geographical locations, and create a distribution of the keywords; we describe this information by \(\boldsymbol{f}_{n}^{s}\gets G^{s}\).
We merge the location graph at step \(s-1\) with \(G^{s}\). We can then compute the user visits with respect to the location graph and collect the keywords from the locations visited: \(\boldsymbol{g}_{n}^{s}\). Thus, we have
\[\boldsymbol{x}_{n}^{s}=\boldsymbol{\gamma}_{f}^{t}*\boldsymbol{f}_{n}^{s}+ \boldsymbol{\gamma}_{g}^{t}*\boldsymbol{g}_{n}^{s} \tag{25}\]
#### 3.6.2 Priors
In every interval from \(s-1\) to \(s\), we collect registration information, that is, voluntary information the users provide to a carrier, a phone application, and others. This information changes over time and describes the current users of the device. Priors are so important that we can use them stand-alone to create discrete classes. This finds application in the balanced approach, as we shall describe in Section 5.
In the context of applying priors to user keywords, we must take care of any inconsistencies with the previous priors:
\[\boldsymbol{\tilde{V}}^{s-1}=\mathbb{F}(\boldsymbol{V}^{s-1},\boldsymbol{P}^{s})\]
This computation assesses the effect of the current prior and updates the user profile accordingly. A simple example is a user who previously suggested being _female_ and now suggests being _male_: if left unattended, this user would belong to both the male and female categories. In this scenario, the current prior does not substitute the past prior; it clears both genders, and we remain open to suggestions from the user's visit patterns, opening the opportunity for a probability of gender based on foot traffic.
As with the location graph, priors are unchangeable; we consider them simple truths. Thus, Equation 24 involves no computation using priors. However, as with the location graph, we perform an adjustment in order to keep the keyword distributions consistent.
This concludes the description of how we compute the user features. In summary, we compute features for locations and users as a function of time and of their interactions. Now we describe how we use these features for matching, that is, the art of compensating for the bias originally introduced into the exposed group.
## 4 Matching
To describe the efficiency of a campaign, we compute the variance and the expectation of the response statistic, that is, the lift of Equation 2:
\[Lift=E[r(d,t)|d\in E]-E[r(d,t)|d\in C] \tag{26}\]
With the term quasi-experiment, we refer to our inability to choose the exposed and control groups before the experiment, as in a randomized clinical trial. However, we can still choose how the control group relates to the exposed group by an appropriate selection: in practice, we sample the universe, we find for every device \(d_{i}\) a corresponding \(d^{\prime}_{i}\) in the control, and the best we can do is to compute the following:
\[E_{d}\big{(}E(r_{\gamma}|d,E)-E(r_{\gamma}|d^{\prime},C)\big{)} \tag{27}\]
where we represent conditional expectation as usual, \(r_{\gamma}\) is the response under exposure and, in the second term, the response without exposure, following the original work and notation. Notice that the notation implies a matching with a suitable pairing of a device \(d^{\prime}\) for every device \(d\). In the following, we provide an interpretation of Equation 27. Notice that Equations 26 and 27 are equivalent if the choice of exposure (i.e., \(z(d_{i})=1\) if \(d_{i}\) is in Exposed) is _strongly ignorable_ [Rosenbaum and Rubin 1983]. This is clear using the original notation:
\[(r|E,r|C)\perp\!\!\!\perp z\mid d \tag{28}\]
For every person \(d_{i}\) in the sample, whether \(d_{i}\) is exposed or not is orthogonal to the value of the response. As Dawid stated [Dawid 1979], if \(X\perp\!\!\!\perp Y\) where \(X\) and \(Y\) are random variables, then
1. \(P(X=x,Y=y)=P(X=x)P(Y=y)\),
2. \(P(x|y)=P(x)\),
3. \(P(x,y)=a(x)b(y)\), and
4. \(P(x|y)=a(x)\).
The second case is the most practical for us: the distribution of the response is independent of the distribution of the exposure selection. In practice, we call the result of Equation 26 the general lift. We do not expect targeting to be strongly ignorable.
This is not just a mathematical tool; Equation 27 suggests that exposed and control must be paired equally. This has a natural application to the methodology described in Section 2.1, where the exposed-control pair is balanced and matching provides an estimate of Equation 27 by 1-1 matching. In general, a good control can be large, the computation of Equation 27 difficult, and the targeting specific and not ignorable; then we must instead estimate something like the following:
\[E[r_{\gamma}(d,t)|d\in\mbox{ E and }M(d)\in\Omega]-E[r_{\gamma}(d,t)|d\in \mbox{ C and }M(d)\in\Omega] \tag{29}\]
where \(M()\) is an appropriate function and \(\Omega=[M(d):\mbox{E}\to\mathbb{R}]\cap[M(d):\mbox{C}\to\mathbb{R}]\) is the intersection of the projections of exposed and control into the image of \(M\). This means we change the response statistic by filtering, clustering, and weighting within the cluster; then we compute expectations for the average lift. Equation 29 provides an insight into Equation 26 and, in general, there is no equality.
### Matching: Previous Work
The seminal work by Dawid [Dawid 1979] introduces the notation and the formalism of _ignorability_, the main property that makes a quasi-experiment closer to a randomized experiment.
The population composing our experiments has explanatory features, see Section 3. The feature space is multidimensional: multivariate. As the features can be used to discriminate the exposed group from the control, we can use them to build the \(M()\) of Equation 29. Rubin [Rubin 1976a,b] introduced examples, paving the way for matching using propensity scores. The authors in [Rosenbaum and Rubin 1983] developed the theory connecting propensity scores to matching. The propensity score transforms a multidimensional problem, which users have the closest set of features, into a one-dimensional problem, which users have the closest score.
Assume that \(\mathbf{x}_{i}\) is the feature vector of person \(i\); we estimate \(z_{i}\) as \(\zeta(\mathbf{x}_{i})\) in the following Equation 30, where we may use a linear model as in [McCullagh and Nelder 1989], Chapter 4.3:
\[\log\frac{\zeta(\mathbf{x})}{1-\zeta(\mathbf{x})}\sim\beta_{0}+\sum_{i=1}^{K}\beta_{i}x _{i} \tag{30}\]
Rosenbaum and Rubin explain how any function based on \(\zeta(\mathbf{x})\) or, for that matter, on its linear model in Equation 30, can be used for the computation of the matching. We can use it in Equation 27 and to determine \(M()\) in Equation 29. Compute the model and determine \(\hat{\beta}\). Take two people from the experiment with their feature sets \(\mathbf{x}_{i}\) and \(\mathbf{x}_{j}\). If \(\mathbf{x}_{i}=\mathbf{x}_{j}\), then \(\zeta(\mathbf{x}_{i})=\zeta(\mathbf{x}_{j})\). Also, thanks to the linearity and continuity of the estimation, \(|\mathbf{x}_{i}-\mathbf{x}_{j}|<\epsilon\) translates to \(|\zeta(\mathbf{x}_{i})-\zeta(\mathbf{x}_{j})|<\delta\) with \(\epsilon,\delta>0\) arbitrarily small, whether or not \(z_{i}=z_{j}\).
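As a sketch, Equation 30 can be estimated with R's glm(); the data frame and features below are hypothetical placeholders for our 25-dimensional space.

```
set.seed(1)
df <- data.frame(exposed = rbinom(1000, 1, 0.3),
                 x1 = rnorm(1000),
                 x2 = rnorm(1000))
fit   <- glm(exposed ~ x1 + x2, data = df, family = binomial())  # logit link, Equation 30
df$ps <- fitted(fit)   # zeta(x): estimated probability of exposure given the features
head(df[order(df$ps), c("exposed", "ps")])   # sorted scores drive the matching below
```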
The work in [4] is probably the most cited attempt to provide a first evaluation of all the matching algorithms using propensity scores. As of today, this work is a great beginning, and most of the current libraries implementing matching provide the data from this reference (see [11, 12, 13, 14, 15]). For a better view of the research, please consult the references in [16, 15].
Often the estimate of the propensity scores provides only a handful of discrete values, and thus classes with very large gaps. By construction, in this scenario any matching will be balanced and thus does not provide any meaningful measure of quality; it also means that the matching is many-to-many. Moreover, if we have to match one exposed user with one control user, eventually we have to sample one of the two sides. We may believe that random sampling does not change the average behavior; however, in practice, when the events are rare, sampling removes them further and actually affects the final results. This simple consideration explains why it is difficult to compute Equation 27, and it also undermines balanced matching based on \(\zeta(\mathbf{x})\), which is unrelated to the response.
In this work, we present a single algorithm that works with both propensity scores and clustering algorithms. We also show the danger of perfect matching when estimating the exposure effect of Equation 29. To overcome this problem, we introduce a novel quality measure for matching algorithms and an unbalanced algorithm where the response distribution is not known a priori and thus is not affected by matching.
### The algorithms
The propensity score describes a multidimensional vector by a probability, a score with a continuity property stemming from Equation 30. The propensity model for \(N\) users with \(K\) features is computed by an iterative algorithm with \(J\) iterations, computing a \(QR\)-factorization in each step: \(O(JKN^{2})\). For us \(K=25\) and \(J<10\). Thus \(O(N^{2})\) is a good approximation of the complexity, but it is often executed in native and fast code.
The other property is that the scores can be sorted and a caliper can be introduced naturally using the total order of the scores. Sorting \(N\) scores takes \(O(N\log N)\), and creating the matches with any caliper takes an extra \(O(N)\) pass. Optimal algorithms circumvent calipers and try to find the closest element independently; this takes \(O(\frac{N^{2}}{2})\) comparisons and is, for example, the default implementation in [10]. For large \(N\) a full comparison is not efficient2 and sorting is by far the better solution.
Footnote 2: Even though the complexity \(O(N^{2})\) is the same as for the QR-factorization, this code is not highly optimized; the constant factor is much larger than 250 and it is often not acceptable for \(N>10^{6}\).
Complexity is one problem. The other problem is sampling due to matching: when exposed and control have different numbers of users and we match one exposed user to one control user, we sample the larger set. If the events we measure in the response have a distribution with long (fat) tails, such as stable distributions [12], sampling will censor rare but important events. In general, we do not want to use the response for matching, to avoid bias; however, the change in the response distribution can be used as a measure of quality: for example, we can provide a distance measure and its confidence using several methods [13]. Here we use a simple, consistent, and intuitive measure: entropy [17]. Small entropy changes for exposed and control, before and after matching, indicate a representative matching. To achieve this entropy balance, we may opt for unbalanced matching and move away from one-to-one matching. We shall explain how we use weights and how they affect the computation of Equation 29.
### The Idea
The propensity score defines clusters; that is, users with the same score are in the same cluster. If we sort the scores, we sort and organize the clusters. By construction, consecutive scores in the sorted list represent close clusters, thanks to the continuity property of the propensity score.
In practice, we score each user in the corpora composed of both exposed and control. We sort the corpora by the score; then we apply the following algorithm:
```
 1: i = 1
 2: scoreR = Corpora.scores[i]
 3: i = i + 1
 4: while (i < length(Corpora)) {
 5:   score = Corpora.scores[i]
 6:   if (scoreR != score) {                 # New cluster
 7:     TE = length(tmpE)                    # Exposed
 8:     TC = length(tmpC)                    # Control
 9:
10:     if (TE > 1 && TC > 1) {              # Non-empty cluster
11:       N = min(TE, TC)                    # equal share per side (min avoids overrun)
12:       if (BALANCED)                      # or further random sample
13:         U = union(tmpE[1:N], tmpC[1:N])
14:       else
15:         U = union(tmpE, tmpC)
16:       matches = union(matches, U)
17:     }
18:     tmpE = c(); te = 0
19:     tmpC = c(); tc = 0
20:     scoreR = score
21:   } else {
22:     if (Corpora.exposed[i] == 1) tmpE[te++] = i
23:     else tmpC[tc++] = i
24:   }
25:   i = i + 1
26: }
27: Corpora = Corpora[matches]               # Matching done
```
The pseudo code above matches exposed and control by clusters. We can choose to have a balanced or an unbalanced matching. Notice that there is no matching across clusters: exposed or control devices coming from different clusters have different scores. It is easy to relax this by moving the assignments at lines 18 and 19 inside the loop, at the end of the condition at line 10 (we call this caliper activation and it makes sense only for propensity-score algorithms).
In practice, we could use any other means to cluster the users. For example, if we use any \(k\)-means clustering algorithm, the scores become the cluster labels and the algorithm above does not change. The matching is actually decoupled from the scoring: propensity scores and \(k\)-means can be applied in combination. For example, we use the propensity score first, with adaptive calipers, to estimate the number of clusters and then, optionally, we apply the \(k\)-means algorithm.
In practice, we compute the propensity score by a generalized linear model (GLM) [McCullagh and Nelder 1989] and the \(k\)-means by [Forgy 1965; Hartigan and Wong 1979]. Both methodologies are well known and available. From our side, we would like to expose the matching algorithm as a computational kernel and thus apply it to large problems.
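For reference, a runnable and vectorized R rendering of the pseudo code above, using rounded propensity scores as cluster labels (the rounding width plays the role of a caliper); the threshold of two users per side follows line 10 of the pseudo code.

```
match_clusters <- function(score, exposed, balanced = TRUE, digits = 2) {
  cl <- round(score, digits)   # rounded score as cluster label (implicit caliper)
  keep <- lapply(split(seq_along(cl), cl), function(idx) {
    e  <- idx[exposed[idx] == 1]
    c0 <- idx[exposed[idx] == 0]
    if (length(e) < 2 || length(c0) < 2) return(integer(0))  # skip thin clusters
    if (balanced) {
      n <- min(length(e), length(c0))
      c(e[seq_len(n)], c0[seq_len(n)])
    } else c(e, c0)
  })
  sort(unlist(keep))   # row indices of the matched corpora
}

# matched <- df[match_clusters(df$ps, df$exposed), ]   # using the glm() scores above
```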
### Quality Measure and Confidence
We are aware of several matching algorithms: _exact, cem, subclass, nearest, genetic, full, and optimal_; the list is longer. Our implementation falls among the first four. This scenario begs a simple question: how can we compare the results of different matchings? The package _MatchIt_, which offers an interface to all of the above algorithms, takes a step back and lets the user decide the statistics about the matched sets; even the computation of the lift is avoided.
Often the quality of a matching is based on some estimate of variance.
**Sampling a normally distributed corpora.** Each user has a response \(r_{i}\) (i.e., \(\Delta_{M}r(d_{j},e_{i})\) in Equation 11); with \(N\) users, we can measure the average
\[\mu_{N}=\frac{1}{N}\sum_{i=0}^{N-1}r_{i}\]
and the variance
\[\sigma_{N}^{2}=\frac{1}{N-1}\sum_{i=0}^{N-1}(r_{i}-\mu_{N})^{2} \tag{31}\]
If we sample \(M<N\) users, we can compute the average and the variance, and thus \(\sigma_{M}\). If we sample the \(M\) users randomly, then \(\frac{\sigma_{M}}{\sigma_{N}}\sim\sqrt{\frac{M}{N}}\): in particular, the ratio \(\frac{\sigma_{M}}{\sigma_{N}}\) should be distributed in the vicinity of \(\sqrt{\frac{M}{N}}\) as a standard normal distribution. We can express the confidence that the sampling is in accordance with the corpora by computing the probability
\[2\Big{(}1-\Phi(\frac{\sigma_{M}}{\sigma_{N}}-\sqrt{\frac{M}{N}})\Big{)}. \tag{32}\]
That is, the probability to be close to the expected variance, the higher the better.
**Variance decrease.** In principle, matching should decrease the variance, \(\sigma_{M}<\sigma_{N}\), because we reduce \(M<N\) and because we remove outliers (if they appear in both exposed and control, they are no longer outliers). We could choose the matching that reduces the variance the most. Unfortunately, the classic way to perform matching samples either exposed or control, not both; thus, if the variance is computed as in Equation 31, this criterion does not apply.
**ATT response as a normal distribution.** One-to-one matching takes one exposed response \(r_{e}^{i}\) and finds a control response \(r_{c}^{i}\). We can see that
\[E[r|E]-E[r|C]\sim\frac{1}{K}\sum_{i=0}^{K-1}r_{e}^{i}-r_{c}^{i} \tag{33}\]
Thus we can build a distribution of the ATT response from \(r_{e}^{i}-r_{c}^{i}\). In general, the computation \(r_{e}^{i}-r_{c}^{i}\) is actually \(r_{e}^{i}-w_{i}r_{c}^{i}\), where \(w_{i}\) is a weight associated with the multiplicity of the matched control and its distance from the exposed; for us, \(w_{i}=1\) represents a perfect match. As such, we can compute the average \(\mu_{E}\) and variance \(\sigma_{E}\). If the distribution is normal, we can use the ratio of average and variance to describe how well the matching represents the final result: \(2(1-\Phi(|\mu_{E}|/\sigma_{E}))\).
In our cases, while the two moments completely specify an ideal normal distribution, they do not do justice to the empirical distribution. In fact, compared with the empirical distribution, the normal is _fatter_ close to the average and _thinner_ at the tails.
**ATT Response as Laplace distribution.** The Laplace distribution provides a different approximation, sharper close to the average and still symmetric.
If we build clusters, each cluster has its own response: we can estimate the distribution of the response from the contribution of each pair or of each cluster. When using cluster responses, the distribution may be approximated neither by a normal distribution nor by a Laplace; see Figure 2.
**Skewness.** Normal and Laplace distributions are poor approximations of the tails of the empirical distributions. The skewness describes with a single measure whether these assumptions are appropriate, and thus whether we can trust the confidence levels measured under the normal or Laplace assumption. By construction, if the exposure is effective, the distribution will be skewed and the normality assumption will fail.
\(\chi_{k}^{2}\) **distributions of users' responses.** Each cluster has a collection of users and thus of responses. The responses are correlated within clusters, by construction (or assumption), and should be independent across clusters. If a cluster has \(k\) components, we can take the variance of each, and their sum should be a realization of the \(\chi_{k}^{2}\) distribution with \(k\) degrees of freedom. Then we can use the difference and the known \(\chi_{k}^{2}\) distribution to compute a confidence level; see Figure 3.
The main goal of these quality metrics, expressed as confidence levels, is to provide a measure of how well the clustering and matching work. Basically, these metrics use two (or three) moments of the response (average and variance), and the assurance of quality is based on variations from an average, which comes from a known distribution. Unfortunately, the response distribution seems to be a mixed distribution, and thus two moments cannot capture its nature.

Figure 2: Each element and cluster contribution to the response distribution
**Exposed and Control Entropy Constancy.** The clustering/matching should be independent of the response, just to avoid systematic bias. One way to compare the quality of the matching is by measuring the information contained in the original corpora and in the matched one. Actually, we expect to have different distributions for exposed and control.
We should and can compare the response distributions before and after matching; in [15], we provide not only a distance but also a confidence. Here we simplify the problem: we compute and compare _entropy_. Take the distinct exposed responses of the original corpora:
\[\mathcal{E}_{e}^{N}=-\sum_{\ell}P[r_{e}^{\ell}]\log_{2}(P[r_{e}^{\ell}]) \tag{34}\]
\(\mathcal{E}_{e}^{N}\) is the expected information of the exposed response. We can likewise compute \(\mathcal{E}_{c}^{N}\), the entropy of the control. We then compare the entropy after the matching; simply put, we compute the pair:
\[(\mathcal{E}_{e}^{M}-\mathcal{E}_{e}^{N},\mathcal{E}_{c}^{M}-\mathcal{E}_{c}^ {N}) \tag{35}\]
If the pair is positive, the matching increased the entropy and thus censored responses carrying less information; otherwise, the matching is sampling responses with more information. Among the matching algorithms, we should choose the one with the smallest entropy difference for both exposed and control. In fact, we are after the computation of Equation 27, where the distribution affects the averages.
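A minimal R sketch of Equations 34-35 on hypothetical discrete responses:

```
entropy <- function(r) {
  p <- table(r) / length(r)     # empirical distribution of the discrete responses
  -sum(p * log2(p))             # Equation 34
}

r_all <- c(0, 0, 1, 0, 2, 0, 1)  # hypothetical exposed responses, original corpora
r_mat <- c(0, 1, 0, 2, 0)        # the same after matching
entropy(r_mat) - entropy(r_all)  # Equation 35, exposed side; near zero = representative
```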
**Bootstrap.** We can consider the lift as a statistic and estimate its variance using the bootstrap. If the response \(r(d,e)\) has \(lift=E[r]\) and variance \(\sigma_{r}^{2}=E[(r-lift)^{2}]\), then by bootstrapping, the lift computed as the average \(\frac{1}{N}\sum_{i}r_{i}\) has an average, say \(\mu_{lift}\), and a standard deviation \(\sigma_{\mu}\).
The larger the number of devices used in the experiment, the smaller \(\sigma_{\mu}\), because the average approximates the expectation asymptotically:
\[\lim_{N\rightarrow\infty}(E[r]-\frac{1}{N}\sum_{i}r_{i})\to 0.\]
As such, \(\sigma_{\mu}\) and \(\sigma_{r}\) are not related: the response can have a stable distribution with unbounded variance \(\sigma_{r}\) (i.e., undefined variance for \(\alpha<2\)), and \(\sigma_{\mu}\) will still converge to zero because the distribution has a finite expectation.
If we are prepared to run enough iterations, and considering that \(\sigma_{\mu}\) must be bounded, we can use \(2(1-\Phi(|lift|/\sigma_{\mu}))\) to obtain a p-value, and thus a confidence level, against the hypothesis of no lift. Bootstrapping is also used to determine whether the \(lift\) of a sample is a good approximation of the larger population [10]. For matching without replacement, 1-1 matching, this is a welcome approach [12], suggested to fill in the missing information due to small experiments. For matching with replacement, bootstrapping must be used carefully [1]. For matching with replacement and large samples, it may not be needed because the distribution is known [1, 2]; the application of the bootstrap to continuous feature distributions and continuous responses is an open question.
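A bootstrap sketch in R with hypothetical responses; B and the response distributions are illustrative only:

```
set.seed(7)
rE <- rnorm(5000, mean = 0.02, sd = 0.1)   # hypothetical exposed responses
rC <- rnorm(5000, mean = 0.00, sd = 0.1)   # hypothetical control responses
lift <- mean(rE) - mean(rC)

B <- 1000
boot_lift <- replicate(B, mean(sample(rE, replace = TRUE)) -
                          mean(sample(rC, replace = TRUE)))
sigma_mu <- sd(boot_lift)
2 * (1 - pnorm(abs(lift) / sigma_mu))   # small p-value: the lift is unlikely to be zero
```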
**Campaign as sample.** Any matching algorithm samples the experiment to remove the original bias. Unfortunately, the lift after matching does not generalize to the original experiment because, in advertising, targeting is not strongly ignorable, and because the original experiment is not a sample (it contains all the exposed) and we are not trying to estimate the lift for a hypothetical exposed population.
## 5 Experiments
In this section, we present our experimental results. One way to use the methods above for large corpora is to sample randomly; we show the hidden danger of this expedient in Section 5.1. We then provide examples where all methods provide information and thus none can be excluded a priori, Section 5.2. We show how the matching algorithms presented here are comparable to the currently available ones in Section 5.3. We conclude in Section 5.5, where we compare the average lift computations of the balanced approach of Section 2.1 and the more general unbalanced approach, showing how large experiments can be matched.
### The danger of Sampling
When we started our research, we applied known methods such as Match() [21]. To cope with the long execution time for large experiments, we sampled the users randomly while keeping the exposed-control ratio. Our experiments may have rare responses and fat tails; for features represented in binary format, we have large classes creating many-to-many matching; and we noticed that sampling can bring forth inconsistent results. For example, we took an experiment using a balanced control-exposed set and discrete classes; the experiment had about 310,000 users and we sampled it down to 200,000. We ran 10 different sampled matchings and computed the lift (\((E[r|E]-E[r|C])/E[r|C]\)) as a relative measure. We also added features with time information (i.e., two extra dimensions) to help the propensity-score matching. We summarize the results in Table 1. Considering that in Section 2.1 we suggest sampling control, and thus the response, these results should be a warning for large and small campaigns alike.
What is confusing about this experiment? First, adding features does not help provide consistent measures; what is lost during sampling is lost, and further feature-space investigations seem helpless. Second, each run provides quite different lifts, in absolute value and in sign. In three out of ten iterations all matching results show only negative lifts, one shows only positive lifts, and six give mixed results. The experiment is not robust, yet the original (unsampled) experiment produces consistent, although negative, results. In our scenario, the control group has much more signal than the exposed group; thus sampling the experiment, or sampling control, must be done with care and not randomly.
### Match comparisons
Let us take the example used in Section 5.1. We perform no sampling and we compute an unbalanced matching. The number of users is exactly 302,862. For each algorithm, we use 25 or 27 features (without or with time information). If we use 25 features, the algorithm is fully specified by its name (i.e., sort, k-means, subclass, exact, and cem); if we use 27 features, we use longer names: sort-time, k-means-time/k-time, subclass-time/sub-time, exact-time/e-time, and cem-time/c-time (features are discrete again).
Results for Match(), and for full and optimal from MatchIt(), are not reported because they would take more than 3 hours, which is not acceptable for our purpose. The execution time to model the propensity score and to score the users is about 8 seconds. Further tests show that the performance of our algorithms increases linearly with the number of users in the matching tests (keeping the number of features fixed).
| iteration | sort | sort-time | k-means | k-time | subclass | sub-time | exact | e-time | cem | c-time |
|---|---|---|---|---|---|---|---|---|---|---|
| 1 | -122 | -145 | -132 | -150 | -124 | -88 | -100 | -132 | -100 | -132 |
| 2 | 326 | 538 | 346 | 594 | 297 | 357 | 400 | 308 | 400 | 308 |
| 3 | -101 | -673 | 0 | 26 | -163 | 166 | 185 | -209 | 185 | -209 |
| 4 | -3 | 14 | 3 | -3 | 23 | 52 | 52 | -3 | 52 | -3 |
| 5 | -42 | 123 | -10 | -22 | -154 | -104 | -70 | -97 | -70 | -97 |
| 6 | -118 | -52 | -133 | -59 | -156 | -109 | -82 | -113 | -82 | -113 |
| 7 | -110 | 454 | -157 | 587 | -187 | -167 | -141 | -43 | -141 | -43 |
| 8 | -37 | 112 | -47 | 166 | -134 | -113 | -111 | 38 | -111 | 38 |
| 9 | 1637 | 941 | 1257 | 3795 | -1660 | -1019 | -1119 | -969 | -1119 | -969 |
| 10 | -63 | -4 | -79 | -48 | -84 | -60 | -88 | -45 | -88 | -45 |
| all | -96 | -91 | -100 | -50 | -131 | -71 | -60 | -88 | -60 | -88 |

Table 1: The different lifts obtained by randomly sampling 200,000 users out of 310,000
In Table II, we show the performance of the same tests: the entropy differences (see Equation 35) and the standard deviation of the users' responses. The corpora has a variance of 0.0980. Notice that our methods tend to decrease the entropy, but in absolute value their change is the smallest.
If we use the moments and a few assumptions about the users' responses, we can compute the probability that the empirical distribution is indistinguishable from the assumed distribution; see Table III. All matching algorithms accept the equality assumption. Sort and k-means use the clusters: under the normality assumption the matching is accepted, while under the Laplace assumption it is rejected. The other methods use single-user responses and thus, given the number of users (300,000), they converge to normality and the other tests follow.
### Audience Selection, Features, and Algorithms (Cor)Relation
In our experience, our problem space has three basic dimensions: first, the choice of the exposed group and thus of the control; second, the number and quality of the features describing our audience; third, the set of algorithms and what they can expose about all of the above. Eventually, we would like to infer recommendations about what works, especially at scale. In this section, the largest campaign has ten million devices and the smallest a few hundred thousand.
In this section, we consider a few dozen campaigns to which we applied the balanced approach of Section 2.1 with the discrete feature space of Section 3.6, using only registration data, known as priors. In this scenario, the user response is discrete, covering a set of discrete values. The feature space is discrete and, although possibly large, it is limited; users can be clustered into a few thousand classes. The estimate of the targeting function reflects the discrete nature of the space; thus, the matches are often many-to-many and the exposed group has priority, that is, we sample control.
| algorithm | exposed | control | variance |
|---|---|---|---|
| sort | **-0.00016** | **-5.91886e-05** | 0.0978 |
| sort-time | -0.00056 | -0.00061 | **0.0959** |
| k-means | -6.45606e-05 | -6.06477e-05 | 0.0979 |
| k-means-time | 0.00281 | -0.00028 | 0.0970 |
| subclass | 0 | 0.00404 | 0.0975 |
| subclass-time | 0 | 0.00600 | 0.0965 |
| exact | 4.52999e-05 | 0.00808 | 0.1009 |
| exact-time | 0.00252 | 0.01500 | 0.0978 |
| cem | 4.52999e-05 | 0.00808 | 0.1009 |
| cem-time | 0.00252 | 0.01500 | 0.0978 |

Table II: Unbalanced computation: difference in entropy and variance
| Algorithm | Normal | Laplace | \(\chi_{k}^{2}\) | Eq. 32 |
|---|---|---|---|---|
| sort | 0.99911 | 0.95689 | 0.94941 | 0.99943 |
| sort-time | 0.99798 | 0.72556 | 0.99178 | 0.98933 |
| kmeans | 0.99824 | 0.67636 | 0.97187 | 0.99940 |
| kmeans-time | 0.99749 | 0.65775 | 0.98537 | 0.98061 |
| subclass | 0.99867 | 1 | 0.99825 | 0.99581 |
| subclass-time | 0.99927 | 1 | 0.9926 | 0.98781 |
| exact | 0.99941 | 1 | 0.99958 | 0.97563 |
| exact-time | 0.99913 | 1 | 0.99910 | 0.97086 |
| cem | 0.99941 | 1 | 0.99846 | 0.97563 |
| cem-time | 0.99913 | 1 | 0.99936 | 0.97086 |

Table III: Unbalanced computation: confidence in the matching process
For all the matching algorithms, there is always a many-to-many matching, because the calipers infer classes and within any class we do not apply a _nearest_ matching. Even the standard algorithms apply a 1-to-many matching by introducing weights.
Figure 4: Audience keywords: Unbalanced \(k\)-to-\(m\) matching (above) and Balanced many-to-many matching (below)
In Figure 4, we compare our algorithms together with more standard ones. We can appreciate that the final lift values differ little. This simple experiment shows that our matching algorithms are equivalent to others, with the advantage that they can be applied to larger campaigns without loss of accuracy. We made sure that the matching, as we designed and developed it, does not lose information for the type of experiments we run. In practice, we show experimentally that our matching algorithms are a useful contribution to the literature, especially as a meaningful extension, if not the only available one. In the following section, we go even bigger; to do so, we need a different framework and an unbalanced approach.
### Experiments and analysis of stochastic hits
Setting a specific radius, or the contour of a parcel, as the boundary for the computation of a hit is simple to explain and to use. However, how can we account for the impressions close to those boundaries from users we do not see inside them? They may have stepped out of the location we are interested in and sent us a signal. In this section, we discuss the application of what we call a stochastic hit: if an impression is close enough to a location, we may give it a probability (of a hit) by using a known distribution such as the IG of Equation 4 or a _lognormal_ distribution.
We consider two campaigns, A and B, each with more than thirty locations of interest that are relatively sparse geographically. One campaign advertises a car company, the other a restaurant chain.
We create two experiments for both campaigns, with \(R=30\) and \(R=96\) meters, where \(R\) is the radius within which we consider an impression a hit. For each experiment, we consider all the users that have a hit at distance \(d\leq R\). Then we count the hits and their distances over \(0\leq d\leq 10R\) with a precision of one meter.
In Figure 5, we present the two experiments when we observe the distribution of users who hit (\(d<R\) for \(R=30\) and \(R=100\) meters). Within \(0\leq d\leq R\), we investigated a simple polynomial curve fit and found that \(p[x=d]\sim\alpha d\). This simple observation means that \(P[x\leq R]\sim R^{2}\) and, thus, the number of hits is proportional to the area. This does not hold for each individual location, but for the aggregate of 30 or more locations we obtain this property, which is intuitive.
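As a minimal sketch of that fit, the snippet below reproduces the linear-pdf observation on synthetic counts, which stand in for the campaign logs (not reproduced here):

```python
import numpy as np

R = 30
d = np.arange(1, R + 1)                    # one-metre bins inside the radius
rng = np.random.default_rng(0)
counts = rng.poisson(40 * d)               # synthetic: hit counts grow linearly in d
p = counts / counts.sum()                  # empirical p[x = d]
alpha = np.polyfit(d, p, 1)[0]             # slope of the fitted linear pdf
# integrating p[x = d] ~ alpha*d gives P[x <= r] ~ alpha*r^2/2,
# i.e. aggregate hits scale with the area of the circle, as observed
```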
In Figure 6, we present the second campaign; the experiments have similar results. The slope is different, specific to the campaign and location set: \(p[x=d]\sim\beta d\). In principle, we can fit multiple distribution models: we fit the inverse Gaussian (IG) and a log-normal, and we plot the corresponding distributions for the models.
Our goal is to estimate the probability that an impression with distance \(d>R\) could be a probability of a hit. In practice, we could use \(IG(\mu,\lambda)\) to estimate the probability for every impression in the range \([R,R+\frac{R}{2}]\), in Figure 5 and 6 we represent this space by the first two vertical gray lines (from the far left). And we could use the \(LogNormal(\mu,\sigma)\) for the interval \((R+\frac{R}{2},3R]\).
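One plausible implementation of the stochastic hit is sketched below, assuming the fitted distributions are applied through their survival functions renormalised at \(R\); the exact mapping from fitted distribution to hit probability, and all parameter values, are illustrative assumptions. Note that scipy parameterises \(IG(\mu,\lambda)\) as `invgauss(mu/lam, scale=lam)` and \(LogNormal(\mu,\sigma)\) as `lognorm(s=sigma, scale=exp(mu))`.

```python
import numpy as np
from scipy import stats

def stochastic_hit_probability(d, R, ig_mu, ig_lam, ln_mu, ln_sigma):
    """Probability that an impression at distance d from the location is a hit.
    d <= R: deterministic hit; (R, 1.5R]: IG tail; (1.5R, 3R]: log-normal tail."""
    if d <= R:
        return 1.0
    if d <= 1.5 * R:
        ig = stats.invgauss(ig_mu / ig_lam, scale=ig_lam)    # IG(mu, lambda)
        return float(ig.sf(d) / ig.sf(R))                    # tail renormalised at R
    if d <= 3.0 * R:
        ln = stats.lognorm(s=ln_sigma, scale=np.exp(ln_mu))  # LogNormal(mu, sigma)
        return float(ln.sf(d) / ln.sf(R))
    return 0.0

# e.g. probability of a hit for an impression 40 m away with R = 30 m
p = stochastic_hit_probability(40, 30, ig_mu=25.0, ig_lam=60.0, ln_mu=3.3, ln_sigma=0.5)
```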
Figure 5: Campaign A: Conditional distance distribution for users who have a visit within 30 and 100 meters
### Large scale lift comparison
To the best of our knowledge, the system we present in Section 2.1, with results in Section 5.3, is the only one capable of tackling an experiment with 2 million users in any practical way. In this section, we present a different perspective in order to create and measure even larger experiments, and we show how sampling remains a lingering issue, although for different reasons.
Let us introduce the **impression space**: we formally introduced the concept of impression and used it to specify visits as in Equation 6. Where do these impressions come from? We listen to a fire-hose of streaming impressions that we can bid on through a collection of exchanges. Our budget dictates how much we can listen to, and it changes over time. Here, we call this fire-hose the real-time bid (RTB) exchanges; the volume of RTB is a function of an allocation budget, and it changes as a function of company-wide budget, hardware allocation, and hardware/software failures. The RTB is composed of three non-intersecting parts:
* Listening RTB (LRTB) is a random sample of the fire-hose used only for collection purposes; say, 10% of the RTB.
* Won RTB (ND): we bid and win the impression, thus delivering our advert. The volume of impressions here is a function of campaign budget and pacing.
* Unwanted RTB (URTB) is the remaining impressions, the largest portion of the RTB.
Using all URTB, LRTB, and ND (i.e., urtb-lrtb-nd) impressions and our unbalanced approach, we will not sample impressions, visits, or users. This is the ultimate representation of our experiment space. As such, it puts quite a few practical and economic constraints on the experiment measurement. For example, a national campaign like Starbucks, counting 10 thousand locations over a three-month period, will touch approximately 100 million devices and (hundreds of) billions of impressions. This experiment will have the maximum number of users and visits.
Historically, and thus in Section 2.1, the balanced approach uses the URTB and ND impressions: it intersects the exposed users with those impressions and samples the control (i.e., c-urtb-nd). For national campaigns as above, the sheer number of impressions to manage can be quite large. As a practical effect, we sample control, we sample visitors, and thus we sample visits.
Assume we embrace a different sampling: sampling of impressions in time. The LRTB has the property of being an unbiased, random sample of the RTB; thus the space LRTB-ND (i.e., lrtb-nd) may carry all the information we need, a critical size for comparing users and visits, and a practical size for obtaining the experiment measures more economically.
A long-tail distribution applies here: there are many users with few impressions, and many users with zero visits. The c-urtb-nd mostly samples control; even though we use the terminology of the balanced approach, we can appreciate the irony of the name if or when we sample control. The lrtb-nd approach samples the impressions: we pick users with enough impressions, but we do so without any bias with respect to the targeting. Of course, a critical mass has to be met by both,
Figure 6: Campaign B: Conditional distance distribution for users who have a visit within 30 and 100 meters
otherwise the sampling curse will be visible at this level as well. Now, we can introduce the final experiment.
We took twelve campaigns of various sizes. For each, we measure lift without any matching; we use the term general lift. These experiments have different goals. In Figure 7, we show the results.
To provide a clear presentation of the small differences, we opted for a non-standard box plot. Any lift in the range \([-1,1]\) is presented as is. Any lift in the range \([1,\infty)\) is represented as \(1+\log_{10}(lift)\), and negative lifts smaller than \(-1\) are handled symmetrically. Thus, we use a logarithmic scale, but only for large enough lifts.
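The transform can be written as a small helper; the mirroring of lifts below \(-1\) is our reading of the symmetric treatment described above:

```python
import numpy as np

def compress_lift(lift):
    """Identity on [-1, 1]; 1 + log10(lift) above 1; mirrored below -1."""
    lift = np.asarray(lift, dtype=float)
    out = lift.copy()
    out[lift > 1] = 1 + np.log10(lift[lift > 1])
    out[lift < -1] = -(1 + np.log10(-lift[lift < -1]))
    return out

print(compress_lift([0.4, 10.0, -100.0]))   # -> [0.4, 2.0, -3.0]
```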
We can appreciate that the general lift for all campaigns is negative (on average). There is only one exception where c-urtb-nd is not correlated to urtb-lrtb-nd (i.e., campaign \(20592\)), and only one exception for rtb-nd (i.e., campaign \(20601\)). This means that sampling can be an issue even at this scale, but a moderate one. Why is the lift always negative? The main reason is targeting: targeting is far from ignorable, and the control group is not really comparable. We need to apply matching; we show the lift comparison in Figure 8.
There is an even stronger correlation on the sign of the lift when applying matching: only campaign \(20483\) is the exception. In general, sampling increases the variance of the lifts (a little obscured by the log scale) and the absolute value of the lift; that is, sampling reduces the population of the experiments, making the per-user visit effect larger.
The larger the experiment, the smaller the lift. This is an important and practical consideration. Lift is basically a comparison measure with respect to a control that can be much larger than the exposed group, and thus the control visits can be much larger. Eventually, we are bound to measure lift as the number of relative visits with respect to all visits: if all visits increase, the lift will decrease. This is the curse of relative measures: a larger campaign may have a larger effect (more visits) than a smaller one, but relative to the population it reached, the larger campaign will have a smaller lift.
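A toy numeric illustration of this curse, assuming the simplest estimator lift = exposed/control visit ratio minus one (the estimator used in the paper is more refined):

```python
# same direction of the absolute effect, different campaign scales
campaigns = {
    "small": (1_100, 1_000),        # (exposed visits, control visits)
    "large": (101_000, 100_000),
}
for name, (exposed, control) in campaigns.items():
    extra, lift = exposed - control, exposed / control - 1
    print(f"{name}: extra visits = {extra}, lift = {lift:.1%}")
# small: extra visits = 100,  lift = 10.0%
# large: extra visits = 1000, lift = 1.0%   <- bigger effect, smaller lift
```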
Figure 7: General lift using different methodologies and sampling
## 6 Conclusions
In this work, we present a common and important problem for advertising companies: how to quantify the effect of a digital campaign. Specific to our field, we need to measure visits, and the speed of visits, for an exposed group with respect to a control baseline. We introduce a general approach and explain two different methodologies using a balanced and an unbalanced exposed-control selection. We follow through by presenting two implementations and showing their different capabilities and similarities. We show that we can write scalable matching algorithms that are practical and as accurate as the ones available in the literature. We show that our algorithms can be applied to very large experiments.
After all, the two methods should agree on average, especially if the experiments are well deployed. Our goal is to share our intuitions, our development solutions, and our insights. This is a complex problem: our solutions have often been driven by practical necessities, limited resources, and clear goals. Here we show our best and ever-moving effort to present our understanding and shed some light on possible, sound, and practical solutions.
|
2303.07031 | A Complete Approach to Determine the $^3$He neutron incoherent
scattering length $b_i$ | We report the first results from a new approach for measuring the $^3$He
neutron incoherent scattering length $b_{i}$. $b_{i}$ is directly proportional
to the difference $\Delta b=b_{+}-b_{-}$ in the two low-energy s-wave
neutron-nucleus scattering amplitudes $b_{+}$ and $b_{-}$, corresponding to the
singlet $J=0$ and triplet $J=1$ states of the neutron-$^3$He interaction,
respectively. An accurate measurement of $b_{i}$ can help distinguish among
different models of three-nucleon interactions by comparison to {\it ab initio}
nuclear theory calculations. The neutron birefringence caused by $\Delta b$
results in neutron spin rotation around the nuclear polarization. We measured
$\Delta b$ using polarized neutron spin rotation and the transmission of
neutrons through a $^3$He gas target polarized in situ by spin-exchange optical
pumping. This brief test measurement, conducted at the FZ-J\"ulich neutron spin
echo spectrometer at the Heinz Maier Leibnitz Zentrum (MLZ), yielded $\Delta b
= [-5.27 \pm 0.05$ (stat.) $- 0.05$ (syst.)] fm. We argue that this method can
be improved in precision to resolve the discrepancies between two prior
measurements of $b_i$ which are dependent on the polarized absorption cross
section $\sigma_p$. Further with absolute $^{3}$He polarization via NMR (in a
properly-shaped cell) concurrent with accurate neutron transmission
measurements, $\sigma_p$ can be measured to obtain independent values of
$b_{+}$ and $b_{-}$. | H. Lu, O. Holderer, A. Ioffe, S. Pasini, P. Pistel, Z. Salhi, B. M. Goodson, W. M. Snow, E. Babcock | 2023-03-13T11:45:57Z | http://arxiv.org/abs/2303.07031v1 | # A Complete Approach to Determine the \({}^{3}\)He neutron incoherent scattering length \(b_{i}\)
###### Abstract
We report the first results from a new approach for measuring the \({}^{3}\)He neutron incoherent scattering length \(b_{i}\). \(b_{i}\) is directly proportional to the difference \(\Delta b=b_{+}-b_{-}\) in the two low-energy s-wave neutron-nucleus scattering amplitudes \(b_{+}\) and \(b_{-}\), corresponding to the singlet \(J=0\) and triplet \(J=1\) states of the neutron-\({}^{3}\)He interaction, respectively. An accurate measurement of \(b_{i}\) can help distinguish among different models of three-nucleon interactions by comparison to _ab initio_ nuclear theory calculations. The neutron birefringence caused by \(\Delta b\) results in neutron spin rotation around the nuclear polarization. We measured \(\Delta b\) using polarized neutron spin rotation and the transmission of neutrons through a \({}^{3}\)He gas target polarized in situ by spin-exchange optical pumping. This brief test measurement, conducted at the FZ-Julich neutron spin echo spectrometer at the Heinz Maier Leibnitz Zentrum (MLZ), yielded \(\Delta b=[-5.27\pm 0.05\) (stat.) \(-0.05\) (syst.)] fm. We argue that this method can be improved in precision to resolve the discrepancies between two prior measurements of \(b_{i}\) which are dependent on the polarized absorption cross section \(\sigma_{p}\). Further with absolute \({}^{3}\)He polarization via NMR (in a properly-shaped cell) concurrent with accurate neutron transmission measurements, \(\sigma_{p}\) can be measured to obtain independent values of \(b_{+}\) and \(b_{-}\).
Precision measurements of the scattering amplitudes in the \(n\)+\({}^{3}\)He system provide important tests for _ab initio_ theoretical calculations of the properties of few-nucleon systems. Three-body (3N) interactions among nucleons are now estimated to provide about 5% of the total binding energy of stable nuclei [1]. The development of a global model of bound nuclei that can both explain the binding energies of stable nuclei and can also make reliable predictions out to the extremes of nuclear stability is a major long-term goal for nuclear physics, with important scientific applications for astrophysics and for our understanding of the process of formation of the heavy elements [2]. Although theoretical models for the possible forms of nuclear three-body forces exist and give a rough estimate for the relative sizes and spin/isospin dependence of three- and higher-body effects compared to two-nucleon forces, more precise experimental data on systems with few nucleons is needed to determine the relative strengths of these forces. The binding energies of \({}^{3}\)H, \({}^{3}\)He, and \({}^{4}\)He are essential data for this purpose in the \(A=4\) system and are measured with high precision. Theoretical calculations of the binding energy of \({}^{4}\)He using the Green's function Monte Carlo technique [3] including some using phenomenological three-nucleon interactions [4; 5; 6; 7] differ from experiment by 1%. Theoretical analysis also shows that the information on the nuclear three-body force from the binding energy of \({}^{4}\)He is not independent of that from three-body bound systems and is mainly sensitive to the spin-independent component of the nuclear three-body force [8; 9; 10; 11]. To better constrain the spin-dependent parts of the nuclear three-body force, data are required on the spin-dependent scattering of three- and four-body systems with precision at the sub-percent level.
Existing data on the difference \(\Delta b=b_{1}-b_{0}\) of the two scattering lengths \(b_{0}\) and \(b_{1}\) for the two total angular momentum \(J=0,1\) values of n-\({}^{3}\)He are inconsistent. \(\Delta b\) is proportional to the n-\({}^{3}\)He incoherent scattering length \(b_{i}\). The two best measurements of \(b_{i}\) using neutron spin echo [12] and neutron interferometry [13] are inconsistent based on the quoted errors, as are the three measurements of the n+\({}^{3}\)He coherent scattering length \(b_{c}\) using neutron interferometry [14; 15; 16]. Different theoretical calculations of \(b_{1}\) and \(b_{0}\) available for comparison at the time employing NN+3N interactions, such as the standard potential models AV18 + UIX, AV18 +UIX + V3 [17; 18], and AV18 + LL2 [19], were also not in agreement. Improved measurements of both \(b_{i}\) and \(b_{c}\) are needed to help resolve the inconsistencies shown in previous work. Improved precision on both \(b_{i}\) and \(b_{c}\) can also help distinguish among different models of few-nucleon interactions. The description of the \({}^{4}\)He continuum just above the \(n\)+\({}^{3}\)He threshold is challenging for existing theory to treat, and changes to existing 3N force models are proposed as a possible solution to existing discrepancies. Fortunately several new theoretical techniques have been developed to tackle nuclear four and five body systems [20; 21; 22; 23; 24; 25] including chiral effective
theory [9; 26]. Different _ab initio_ calculational methods for nucleon scattering in \(A=3\) systems deliver internally consistent results [27; 28], and the resonating group method (RGM) has been applied in the past to \(A=4\) systems [29; 30]. We judge that the prospects for improved theoretical calculations in the \(n+^{3}\)He system are good.
In general the total free n-nucleus scattering length is given by \(a=a^{\prime}+ia^{\prime\prime}\) where \(a^{\prime}\) and \(a^{\prime\prime}\) are real. The imaginary term \(a^{\prime\prime}\) arises from absorption, which is very large for the n-\({}^{3}\)He system. For the forward scattering amplitudes measured in this work the bound scattering lengths are observed. The bound scattering length is related to \(a\) by \(b=a(A+1)/A\) where \(A\) is the nucleus to neutron mass ratio. The two s-wave neutron-nucleus scattering amplitudes \(b_{+}\) and \(b_{-}\), corresponding to the total nucleus plus neutron angular momentum \(J=I+1/2\) and \(J=1-1/2\) scattering channels from a nucleus of spin \(I\) and neutron of spin \(s\), can be expressed as
\[b=b_{c}+\frac{2b_{i}}{\sqrt{I(I+1)}}s\cdot I. \tag{1}\]
The coherent scattering length is thus
\[b_{c}=\frac{(I+1)b_{+}+Ib_{-}}{(2I+1)} \tag{2}\]
and the incoherent scattering length is
\[b_{i}=\frac{I\sqrt{I+1}(b_{+}-b_{-})}{(2I+1)}, \tag{3}\]
which is directly proportional to \(\Delta b=b_{+}-b_{-}\). The two values \(J=0,1\) of the total spin for \(I=1/2\) imply \(b_{+}\equiv b_{1}\) and \(b_{-}\equiv b_{0}\) for the triplet and singlet scattering lengths, respectively.
\(\Delta b\) can be measured by observing the precession of the neutron spin as neutrons pass through a polarized nuclear target, named "pseudomagnetic precession" [31] in the literature. Although this phenomenon was initially described [31; 32] in terms of a fictitious "pseudomagnetic field" inside the medium, \(\Delta b\) originates from neutron-nucleus scattering. The optical theorem [33] relates the spin dependence of the neutron optical potentials associated with the scattering amplitudes \(b_{+}\) and \(b_{-}\) to a two-valued neutron index of refraction (\(n_{+}\),\(n_{-}\)) depending on the relative orientation of the neutron spin and the nuclear polarization:
\[n_{\pm}^{2}=1-\frac{4\pi}{k^{2}}N(b_{coh}+b_{\pm}), \tag{4}\] \[\Delta n=(n_{+}-n_{-})\approx-\frac{2\pi}{k^{2}}N(b_{+}-b_{-}),\]
where \(N\) is the number of nuclei per unit volume, \(k=2\pi/\lambda\) is the neutron wave number, and the approximation in the second expression is valid in our case as the neutron index of refraction is \(\simeq 1\). \(\Delta n\) makes the medium optically birefringent for neutrons so that the two helicity components of the neutron spin accumulate different phases, \(kn_{\pm}d\), in the forward direction as neutrons propagate a distance \(d\) through the target. Therefore neutron spins orthogonal to the nuclear polarization direction of the target precess around the nuclear polarization by an angle \(\phi^{*}=k\Delta nd\).
We can write the neutron precession angle \(\phi^{*}\) created by the incoherent scattering length \(b_{i}\propto\Delta b\) of the polarized \({}^{3}\)He [12; 13] as
\[\phi^{*}=-\frac{1}{2}\lambda P_{3}Nd\Delta b=-\frac{2\lambda P_{3}Nd}{\sqrt{3} }b_{i}, \tag{5}\]
where \(P_{3}\) is the \({}^{3}\)He polarization, \(N\)=[He] is the \({}^{3}\)He density, and \(d\) is the neutron path length through the \({}^{3}\)He. For nuclei such as \({}^{3}\)He which possess a very large spin-dependent component to the neutron cross section, one can determine the constant of proportionality \(\lambda P_{3}Nd\) using neutron measurements and write the measured quantity \(\Delta b\) as follows:
\[\Delta b=\frac{2\phi^{*}}{\lambda P_{3}Nd}=\frac{\sigma_{p}}{\lambda_{th.}} \frac{2\phi^{*}}{\cosh^{-1}\!R}. \tag{6}\]
Here \(R\) is the ratio of the unpolarized neutron transmission of polarized \({}^{3}\)He, \(T(P_{3})\), to the transmission of unpolarized \({}^{3}\)He, \(T(0)\); \(\sigma_{p}\) is the polarized \({}^{3}\)He spin-dependent neutron absorption cross section, and \(\lambda_{th.}=1.798\) Å is the thermal neutron wavelength chosen by convention as a reference point for neutron absorption cross sections. The total n-\({}^{3}\)He absorption cross section \(\sigma_{a}=(4\pi/k)b^{\prime\prime}\) obtained from the imaginary part of \(b\) by the optical theorem [33] can be written in terms of the polarization-independent and polarization-dependent terms as:
\[\sigma_{a}=\sigma_{un}\mp P_{3}\sigma_{p}, \tag{7}\]
where the sign convention \(\mp\) is for \(P_{3}\) parallel (-) or anti-parallel (+) to the neutron spin. Here \(\sigma_{un}=(5333\pm 7)\) barn is the total unpolarized neutron absorption cross section, and \(\sigma_{p}\) can be expressed as \(\sigma_{p}=(1-\sigma_{1}/\sigma_{un})\sigma_{un}\). Both \(\sigma_{a}\) and \(\sigma_{un}\) are measured to be proportional to the neutron wavelength \(\lambda\) to high precision [34; 35; 36]. For an unpolarized neutron beam of \(n\) neutrons with half spin up (\(n^{+}\)) and half spin down (\(n^{-}\)) neutrons the corresponding transmission of Eq. 7 is
\[T^{\pm}=\frac{n^{\pm}}{n}=\frac{1}{2}\mbox{exp}\left(-(\sigma_{un}\mp P_{3} \sigma_{p})\frac{\lambda Nd}{\lambda_{th.}}\right). \tag{8}\]
Thus, the transmission of unpolarized neutrons through polarized \({}^{3}\)He is:
\[T(P_{3})=\mbox{exp}\left(-\frac{\sigma_{un}}{\lambda_{th.}}\lambda Nd\right) \mbox{cosh}\left(\frac{\sigma_{p}}{\lambda_{th}}\lambda P_{3}Nd\right). \tag{9}\]
Since the unpolarized transmission is simply:
\[T(0)=\mbox{exp}\left(-\frac{\sigma_{un}}{\lambda_{th.}}\lambda Nd\right) \tag{10}\]
with two neutron transmission measurements giving \(R=T(P_{3})/T(0)\), one directly obtains the product \(\frac{\sigma_{p}}{\lambda_{th.}}\lambda P_{3}Nd\) experimentally from \(\cosh^{-1}\!R\), leading to Eq. 6.
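As a numeric sketch of Eq. 6: the unit conversion 1 barn/Å \(=10^{-3}\) fm is exact, while the example inputs are chosen only to reproduce the magnitude of the final result quoted below.

```python
import numpy as np

BARN_PER_ANGSTROM_IN_FM = 1e-3   # (1e-28 m^2)/(1e-10 m) = 1e-18 m = 1e-3 fm

def delta_b_fm(phi_star_rad, R, sigma_p_barn=5309.0, lam_th_angstrom=1.798):
    """Eq. 6: Delta b = (sigma_p / lambda_th) * 2*phi* / arccosh(R), in fm."""
    return (sigma_p_barn / lam_th_angstrom) * 2.0 * phi_star_rad \
        / np.arccosh(R) * BARN_PER_ANGSTROM_IN_FM

# sanity check: 2*phi*/arccosh(R) ~ -1.79 rad reproduces Delta b ~ -5.27 fm
print(delta_b_fm(phi_star_rad=-0.893, R=np.cosh(1.0)))
```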
Thus one still needs to determine \(\sigma_{p}\). If the triplet absorption rate \(\sigma_{1}\) were zero, so that there would be only absorption in the singlet state \(\sigma_{0}\), then \(\sigma_{p}=\sigma_{un}\), where \(\sigma_{un}\) is known to \(\simeq 0.1\%\). However the upper bound on \(\sigma_{p}\) from previous experiments is several-percent [37; 38] and limits the precision of this technique. There is no reason to expect that \(\sigma_{1}\) is zero, as theoretical calculations show [17; 18]. The work in Ref. [12] used an average of the experimental determinations [37; 38] to arrive at a value that can be reinterpreted as \(\sigma_{1}=57\) barn. Conversely the work in Ref. [13] used a combination of theoretical predictions and the measured thermal absorption cross section to estimate \(\sigma_{1}=24\) barn [39]. For lack of better knowledge of \(\sigma_{1}\), we will use the latter value in our analysis below but also present the result independent of the value of \(\sigma_{1}\) as was also done in [13] for comparison purposes.
\({}^{3}\)He SEOP cells can also enable a measurement of \(\sigma_{p}\). An independent 0.1% measurement of \(P_{3}\) combined with accurate measurements of \(R(\lambda)\) through an _in-situ_ polarized \({}^{3}\)He sample using the time-of-flight (TOF) method (_i.e._ a wavelength scanned neutron beam) as in Ref. [40] could provide a \(\simeq 0.1\%\) accuracy for \(\sigma_{p}\), allowing determination of \(\sigma_{1}\) to \(\approx 5\) barn accuracy. Several atomic physics methods have determined \(P_{3}\) to high precision [41; 42; 43; 44] so this approach should be feasible.
In order to determine \(P_{3}\) on a neutron beamline, we propose to use the "self-magnetometry" of a polarized \({}^{3}\)He sample in a defined shape, such as a long tube parallel or perpendicular to the applied \(B_{0}\) field. The \({}^{3}\)He magnetization \(M_{3}=\mu_{3}P_{3}N\) will generate a magnetic field of
\[B_{3}=\mu_{0}M_{3}\left(1-\frac{2}{3}\right)=\mu_{0}\frac{M_{3}}{3} \tag{11}\]
when the tube's axis is parallel to \(B_{0}\) and
\[B_{3}=\mu_{0}M_{3}\left(\frac{1}{2}-\frac{2}{3}\right)=\mu_{0}\frac{-M_{3}}{6} \tag{12}\]
when the tube's axis is perpendicular to \(B_{0}\)[45; 41]. Here the first term is the magnetization minus the demagnetization factor, and the \(-2/3\) term, which is the same as the field from a spherical volume, arises from the scalar contact term: the \({}^{3}\)He spins are non-overlapping and cannot "see" one another, so the self field must be subtracted [46; 47]. We note the magnetic moment of \({}^{3}\)He, \(\mu_{3}/h=-16217050\) Hz/T, is known to the ppb level [48], and the geometric correction factor in the field-parallel case for a cell with a length-to-diameter ratio \(\simeq 5\) is \(\simeq 2\%\) and very well known [49; 50]. Given that at one bar of pressure and 25 \({}^{\circ}\)C there are \(2.43\times 10^{25}\) atoms per m\({}^{3}\), and given the gyromagnetic ratio of \({}^{3}\)He, \(\gamma_{3}=3.24\times 10^{7}\) Hz/T, the product \(f_{3}=\mu_{0}\mu_{3}P_{3}N\gamma_{3}=10.6\) Hz at \(P_{3}=1\) and \(N=1\) bar. Thus for the field-parallel case, upon a reversal of \(P_{3}\), an NMR frequency shift of
\[\Delta f_{3}=2B_{3}\gamma_{3}=2\mu_{0}\frac{M_{3}}{3}\gamma_{3}=\frac{2}{3}\mu _{0}\mu_{3}P_{3}N\gamma_{3}\simeq 5\ {\rm Hz} \tag{13}\]
will be observed for \(P_{3}=0.70\) at 1 bar pressure. Since \(\lambda Nd\) of eq. 9 can be calibrated by unpolarized \(T(0,\lambda)\) measurements using the well-known \(\sigma_{un}\) and given that one can expect \(<5\) Hz NMR linewidths, 0.1% accuracy in \(P_{3}\) and thus \(\sigma_{p}\) should be attainable for normal pressures by signal averaging. This method would not require new on-beamline detection methods to be developed other than a specialized cell for the purpose. _In-situ_ polarization of the \({}^{3}\)He employing adiabatic fast passage (AFP) \({}^{3}\)He flipping and a TOF neutron beamline would be preferred.
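A quick numeric check of Eq. 13 with the constants quoted above (a sketch; CODATA-level precision is not attempted):

```python
import numpy as np

mu0    = 4e-7 * np.pi          # T*m/A
h      = 6.62607015e-34        # J*s
mu3    = 16217050.0 * h        # |mu_3| in J/T, from mu_3/h = -16217050 Hz/T
N      = 2.43e25               # atoms per m^3 at 1 bar and 25 C
gamma3 = 3.24e7                # Hz/T
P3     = 0.70

df3 = (2.0 / 3.0) * mu0 * mu3 * P3 * N * gamma3
print(f"expected NMR shift on P3 reversal: {df3:.2f} Hz")   # ~5 Hz
```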
Since a recent measurement of \(b_{c}\) in n+\({}^{4}\)He using perfect crystal neutron interferometry [51] reached \(10^{-3}\) precision using a technique that can be directly applied to \({}^{3}\)He, our ideas to improve \(b_{i}\) are the key additional input needed to confront theory. Therefore we intend to perform a higher precision measurement of \(\Delta b/\sigma_{p}\) and a measurement of \(\sigma_{p}\) to obtain an absolute value for \(b_{i}\), which could then also approach a \(10^{-3}\) precision.
We have tested an accurate method to determine the real part of \(\Delta b/\sigma_{p}\) on the J-NSE Phoenix instrument [52] as part of a different experiment to measure the same quantity for n-\({}^{129}\)Xe and n-\({}^{131}\)Xe [53]. Our approach builds on the pioneering work of Zimmer _et al._[12] by taking advantage of technical improvements in neutron spin echo spectroscopy to measure neutron birefringence and by exploiting the improved time stability and improved performance of polarized \({}^{3}\)He gas targets created using spin-exchange optical pumping (SEOP) [54].
Measurements of neutron birefringence in polarized nuclei were originally performed using the Ramsey method of separated oscillatory fields [55; 56]. This so-called pseudomagnetic precession method [31] uses two oscillating fields before and after a solid-state nuclear-polarized sample to measure phase precessions caused by the sample. We use a variation that also uses orthogonally-precessing polarized neutrons moving through a nuclear
Figure 1: A schematic drawing of the J-NSE neutron spin echo spectrometer showing the coil arrangement and the SEOP-polarized \({}^{3}\)He cell. Not shown are optical pumping lasers whose light is perpendicular to the neutron flight path and directed along \(B_{SEOP}\) via a 45\({}^{\circ}\) mirror above the cell, and the oven used to regulate the cell temperature for SEOP.
polarized \({}^{3}\)He gas sample but in a neutron spin-echo (NSE) spectrometer [57] in order to quantify the resulting phase shifts in the neutron precession. NSE is similar to spin echo in NMR [58] but the neutron spin is precession-encoded in space along the path of the traveling beam as opposed to in time with static spins as in NMR spin echo.
In a NSE spectrometer, polarized neutrons are first flipped by \(\pi/2\) to induce precession in the orthogonal plane; they then pass through a high-field flight path with an over 1 T\(\cdot\)m field integral, encoding a large number of spin precessions. This step is followed by a \(\pi\)-flip reversal of the neutron polarization in the middle of the instrument and then by a second high-field flight path, identical to the first, to decode the spins. The sample is typically near the middle, either before or after the \(\pi\) flipper, and the additional precession it creates can be quantified by matching it to the precession in additional phase (compensation) coils placed around the neutron flight path. Thus the NSE method is like the Ramsey technique, but the addition of the central \(\pi\) flipper allows the sample-induced phase shift to be quantified by DC phase coils rather than by phase matching an oscillating RF field directly. NSE has the benefit that, because the phase coils have field integrals accurate to nT\(\cdot\)m compared to total instrument field integrals of 1 T\(\cdot\)m or more, it can encode the spins very precisely and measure very small changes in the neutron precession [52].
A schematic diagram of a NSE spectrometer is shown in Fig. 1. The nuclear spins of the \({}^{3}\)He sample were polarized in-situ using spin-exchange optical pumping [54] in the usual sample area of the NSE spectrometer, after the \(\pi\) flipper. The \(B_{SEOP}\) field is oriented perpendicular (vertical) to the neutron flight path and to the fields \(B_{1}\) and \(B_{2}\) of the NSE spectrometer itself, which are longitudinal (i.e., horizontal; see Fig. 1).
Because of the work on Xe isotopes, the gas target was polarized in-situ to maintain a steady nuclear spin polarization. For Xe this was mandatory because of the comparatively short \(T_{1}\) polarization lifetimes relative to the duration of a typical NSE scan (\(\simeq\)20 min), especially for \({}^{131}\)Xe where \(T_{1}<\) 30 s. For the \({}^{3}\)He experiment this also turned out to be advantageous, by decoupling time-dependent instrumental drifts from changes in \(P_{3}\). Using a home-built NMR system for free-induction-decay detection, we determined that fractional changes in \(P_{3}\) were below 0.3% during our NSE measurements. The in-situ polarization equipment [59] also enabled on-beam AFP flipping of \(P_{3}\) during continuous pumping. The continuous polarization allows any time-dependent phase drifts in the NSE spectrometer to be fit as a time-dependent background, and the AFP flipping eliminates systematics due to possibly imperfect neutron spin-flips or non-adiabatic transport of the neutron polarization through our setup.
A 5 cm diameter cylindrical \({}^{3}\)He SEOP cell made of GE180 glass with about 0.4 bar of \({}^{3}\)He was used [59]. This cell has somewhat rounded ends, with a path length of 4.8 cm through its center. Ref. [60] describes the SEOP instrumentation used to polarize the \({}^{3}\)He spins. In contrast to that neutron polarizer device, here the vertical magnetic field for SEOP was provided by a set of 70 cm diameter Helmholtz coils for added flexibility and to satisfy space constraints on the J-NSE Phoenix instrument; all other components, such as the lasers, NMR devices, and controls, were taken directly from the device in [60]. High-fidelity data were obtained for a 6 cm\({}^{2}\) area in a half-circle in the neutron-illuminated central portion of the cell, where the path length is approximately uniform.
A typical NSE scan is made by measuring the amplitude of the neutron polarization vector as the phase coil is scanned in small steps around the point where the two NSE precession regions are balanced in field integral. This action produces a spin echo envelope that shifts in proportion to the precession angle \(\phi^{*}\) of the sample. The J-NSE Phoenix spectrometer employs a position-sensitive detector allowing independent determination of \(\Delta b\) for approximately each 0.5 cm \(\times\) 0.5 cm region of the \({}^{3}\)He cell, which can then be averaged. This practice eliminates problems that might otherwise arise from a non-uniform neutron path length through the \({}^{3}\)He cell. A pair of NSE scans for the two states of the \({}^{3}\)He polarization from one such pixel is shown in Figure 2.
The NSE signal \(I(B_{1})\) is the detected transmitted intensity after the neutron polarization analyzer as a function of the phase-coil field \(B_{1}\). In the expression:
\[I(B_{1})=I_{0}[1-p\int d\lambda f(\lambda)\cos(\phi_{1}-\phi_{2})], \tag{14}\]
\(\phi_{1}\) and \(\phi_{2}\) are the precession angles in the first and second precession coils, respectively, \(f(\lambda)\) is the neutron wavelength distribution, and \(p\) measures the loss of contrast of the interference pattern due to neutron polarization efficiencies. The instrument records the neutron intensity in each detector pixel as a function of the phase-coil current. The neutron wavelength distribution transmitted by the velocity selector on this beamline is well fit by a triangular function, so the resulting NSE signal is this form convolved with a cosine function, which is used to fit the data for analysis.
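A sketch of such a fit model is given below; the variable names, the triangular width, and the linear phase parameterisation are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import curve_fit

def nse_signal(i_trim, I0, p, c, phi0, lam0=8.0, dlam=0.8, n=201):
    """Eq. 14 with a triangular f(lambda): the echo group is a cosine fringe
    under the envelope produced by averaging the phase over wavelengths."""
    lam = np.linspace(lam0 - dlam, lam0 + dlam, n)
    f = 1.0 - np.abs(lam - lam0) / dlam            # triangular distribution
    f /= np.trapz(f, lam)
    phase = c * np.asarray(i_trim)[:, None] * lam[None, :] + phi0
    return I0 * (1.0 - p * np.trapz(f * np.cos(phase), lam, axis=1))

# usage sketch: fit a measured echo group (i_trim, counts) per detector pixel
# popt, pcov = curve_fit(nse_signal, i_trim, counts, p0=[1e3, 0.6, 2.0, 0.0])
```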
The \({}^{3}\)He \(b_{i}\) data reported here were obtained to verify the method for the Xe measurement [53], and were thus time-limited. \({}^{3}\)He was measured for 6 hours each in the polarized and unpolarized states. The data were taken in the following pattern: two NSE scans with \(P_{3}\) positive; two scans with \(P_{3}\) flipped to the negative state; two more in the positive state; and 6 hours of scans with \(P_{3}=0\). Although not needed for the determination of \(b_{i}\), \(P_{3}=70.6\pm 1.6\%\) was determined using \(R\) and a value of [He] \(=0.3556\pm 0.001\) bar from a separate transmission measurement using neutron TOF on the FIGARO instrument [61]. A graph of the phase versus time for one of the pixels is shown in Fig. 3. These data were then fit to a step function with a linear time-dependent background to determine the measured shift \(\Delta\phi=2\phi^{*}\) from \(+P_{3}\) to \(-P_{3}\) for each pixel.
The \(R\) measurement was performed by taking the weighted average of the mean intensity value of the NSE
signal during the NSE scans for the \(+P_{3}\) and \(-P_{3}\) states to determine \(T(P)\), and the mean intensity of the unpolarized \({}^{3}\)He NSE scans to determine \(T(0)\). This process was also performed for each pixel of the NSE scan over our region of interest to account for any variations in \(d\) resulting from the cell's shape and alignment in the neutron beam. Using the mean value compensates for the small difference in transmission of the positive versus negative \(P_{3}\) due to a small residual neutron polarization along \(B_{SEOP}\).
The measured \(\Delta\phi\) value could have a small correction due to the magnetic dipole field of the polarized gas, which causes an extra precession signal in the neutron phase that is also proportional to \(P_{3}\). This effect was discussed in Ref. [12]; however, that work did not account for the scalar contact term that one must include for real particles [46; 47], which differs from the simple classical result where it is assumed that one can have non-interacting, overlapping point-like particles. For n+\({}^{3}\)He the contact interaction should be 0 to any reasonable first-order approximation. Therefore, along their flight path, neutrons do not sample the classical field of the individual polarized \({}^{3}\)He nuclei inside the cell. The field experienced consists of the long-range dipole fields caused by a non-spherical geometry inside the polarized cell and, since the neutron precession also integrates the field along the entire flight path, the classical field of the polarized (magnetized) volume outside of the cell. For the geometry of our experiment this would lead to a correction \(<0.07\%\) and is not relevant here. However, the general argument about the fields experienced by nonzero-spin particle beams passing through polarized volumes is of interest for precision measurements, so we include a discussion of this effect in a supplement [62].
For neutron velocity selector instruments there is a small correction for the triangular wavelength distribution caused by the wavelength-dependent attenuation through the \({}^{3}\)He target. Using the arguments described in the work of O. Zimmer [12], this leads to a negligible correction factor of 1.0003 in our case because of the relatively small \(Nd\) of the cell used. This correction would not be needed for an instrument that uses TOF to determine the transmitted spectrum. Our global detector count rates were 1% or less of the inverse of the detector deadtime of 400 ns (i.e., of the maximum count rate of 2.5 MHz), so detector deadtime corrections are also negligible.
Using this analysis we arrive at the final value of \(\Delta b=[-5.27\pm 0.05\) (stat.) \(-0.05\) (syst.)] fm using the values of \(\sigma_{un}=5333(7)\) barn and an estimated \(\sigma_{p}=\sigma_{un}-\sigma_{1}=5309\) barn. From eq. 5 \(\Delta b=4b_{i}/\sqrt{3}\) gives the \({}^{3}\)He neutron incoherent scattering length. Rewriting the result to be independent of \(\sigma_{p}\) as in [13; 39] we obtain:
\[\frac{\Delta b}{\sigma_{p}}=(-9.93\pm 0.09(\mbox{stat.})-0.09(\mbox{syst.})) \times 10^{-4}\;\frac{\mbox{fm}}{\mbox{b}}. \tag{15}\]
This value compares to \(\Delta b/\sigma_{p}=(-10.1929\pm 0.0760)\times 10^{-4}\) fm/b for the work of Ref. [13] and \(\Delta b/\sigma_{p}=(-10.3628\pm 0.0180)\times 10^{-4}\) fm/b for the work of Ref. [12]. This preliminary data shows we can readily reach our target of \(10^{-3}\) precision for \(\Delta b/\sigma_{p}\) with a dedicated measurement. The value obtained with this basic method is, to our knowledge, free of systematics. The deviation of the preliminary result above from previous measurements could be attributed to slow experimental drifts and to the fact that we reversed \(P_{3}\) only once. The cell used had a non-uniform neutron flight path, and a minor shift in the cell position could lead to a one-sided error; estimates obtained by shifting the center position of the data yield the systematic error given here.
We implemented improvements to the measurement technique pioneered by [12] such as in-situ polarization of the \({}^{3}\)He gas and the ability to reverse the \({}^{3}\)He polarization using AFP. The in-situ polarization approach
Figure 3: Phase data from one pixel of the NSE detector over the entire experiment. \(b_{i}\) values from 36 pixels representing the center of the \({}^{3}\)He cell were averaged for the final result.
Figure 2: Spin echo signals from the \({}^{3}\)He target versus the current in the phase coil (\(I_{trim}\)) for one pixel of the neutron detector. The two NSE profiles correspond to \(P_{3}\) parallel and antiparallel to \(B_{SEOP}\).
decouples the measured \(\phi^{*}\) from time-dependent drifts that could falsely correlate with \(P_{3}\), and prevents possible inconsistencies induced by removal and replacement of the \({}^{3}\)He cell. The position-sensitive determination of the cross section reduces possible path-length errors, which could be further reduced by using a flat-windowed \({}^{3}\)He SEOP cell to eliminate variations in path length across the beam over time, and the AFP flipping cancels errors from small residual longitudinal neutron polarization. Use of a TOF NSE instrument such as the SNS-NSE [63] will eliminate the neutron velocity selector correction. By increasing the measurement time to 1 week and using a cell with an \(Nd\) optimized to minimize error propagation from the \(\cosh^{-1}(R)\) term, we estimate one could reach a statistical accuracy of \(<0.1\%\) (or 0.005 fm). Previous work [64] shows that the transmission measurements needed to measure the proportionality factor between \(\phi^{*}\) and \(\Delta b/\sigma_{p}\) can indeed be conducted with the required precision. With an additional measurement of \(\sigma_{p}\) to comparable precision, a total \(10^{-3}\) precision on \(b_{i}\) for \({}^{3}\)He can be attained.
###### Acknowledgements.
H. Lu and W. M. Snow acknowledge support from US National Science Foundation (NSF) grants PHY-1913789 and PHY-2209481 and the Indiana University Center for Spacetime Symmetries. H. Lu received a Short-Term Grant, 2019 no. 57442045 from DAAD the German Academic Exchange Service. B.M. Goodson acknowledges support from the NSF (CHE-1905341), DoD (W81XWH-15-1-0272, W81XWH2010578), and a Cottrell Scholar SEED Award from Research Corporation for Science Advancement. P. Guthfreund (ILL) and K Zhernenkov performed a calibration measurement of \(N\) (_i.e_ [He]) on FIGARO [61] aiding this work. We acknowledge G.M. Schrank for discussions and M. Huber for detailed discussions of NIST work on \(b_{i}^{3}\) and estimates of \(\sigma_{1}\) for \({}^{3}\)He.
|
2310.07109 | SparseCoder: Advancing Source Code Analysis with Sparse Attention and
Learned Token Pruning | As software projects rapidly evolve, software artifacts become more complex
and defects behind get harder to identify. The emerging Transformer-based
approaches, though achieving remarkable performance, struggle with long code
sequences due to their self-attention mechanism, which scales quadratically
with the sequence length. This paper introduces SparseCoder, an innovative
approach incorporating sparse attention and learned token pruning (LTP) method
(adapted from natural language processing) to address this limitation. Compared
to previous state-of-the-art models CodeBERT, RoBERTa, and CodeT5, our
experiments demonstrate that SparseCoder can handle significantly longer input
sequences--at least twice as long, within the limits of our hardware resources
and data statistics. Additionally, SparseCoder is four times faster than other
methods measured in runtime, achieving a 50% reduction in floating point
operations per second (FLOPs) with a negligible performance drop of less than
1% compared to Transformers using sparse attention (Sparse Atten). Plotting
FLOPs of model inference against token lengths reveals that SparseCoder scales
linearly, whereas other methods, including the current state-of-the-art model
CodeT5, scale quadratically. Moreover, SparseCoder enhances interpretability by
visualizing non-trivial tokens layer-wise. | Xueqi Yang, Mariusz Jakubowski, Li Kang, Haojie Yu, Tim Menzies | 2023-10-11T01:11:30Z | http://arxiv.org/abs/2310.07109v2 | # SparseCoder: Advancing Source Code Analysis with Sparse Attention and Learned Token Pruning
###### Abstract.
As software projects rapidly evolve, software artifacts become more complex and the defects behind them get harder to identify. The emerging Transformer-based approaches, though achieving remarkable performance, struggle with long code sequences due to their self-attention mechanism, which scales quadratically with the sequence length. This paper introduces SparseCoder, an innovative approach incorporating sparse attention and a learned token pruning (LTP) method (adapted from natural language processing) to address this limitation. Extensive experiments carried out on a large-scale dataset for vulnerability detection demonstrate the effectiveness and efficiency of SparseCoder, reducing the scaling of long code sequence analysis from quadratic to linear in comparison to CodeBERT and RoBERTa. We further achieve a 50% FLOPs reduction with a negligible performance drop of less than 1% compared to a Transformer leveraging sparse attention. Moreover, SparseCoder goes beyond making "black-box" decisions by elucidating the rationale behind those decisions. Code segments that contribute to the final decision can be highlighted with importance scores, offering an interpretable, transparent analysis tool for the software engineering landscape.
In the experiments reported here, model inference times were reduced from 16 hours to 4 hours, which in an industrial context is the difference between "getting results tomorrow" and "getting results this morning". Further, our method scales better than prior work (we run in linear time while prior work takes quadratic time).
**RQ2:**_How does the modified window size and the token length impact the performance of Transformer with sparse attention mechanism?_
We demonstrate that the sparse attention mechanism can significantly improve Transformer efficiency, especially when utilizing a smaller window size. Better yet, we also highlight that overall performance increases with growing sequence length.
**RQ3:**_Can we advance Transformer with sparse attention mechanism further via token pruning algorithm?_
Our results illustrate that we can further improve the above results with SparseCoder by integrating token pruning.
Our contributions can be summarized as follows:
* **Improved model efficiency**: By integrating a learned token pruning algorithm into a Transformer with sparse attention, SparseCoder can adaptively prune unessential tokens layer-wise and significantly reduce the model's inference overhead.
* **An ablation study**: We conduct a comprehensive ablation study with different configurations of sparse attention Transformer models, which explores the impact of local information (via modified window sizes and maximum sequence lengths).
* **Improved model interpretability**: We improve the interpretability of Transformer models with sparse attention by visualizing the important tokens after token pruning.
The rest of this paper is structured as follows. Related work is introduced in Section 2. In Section 3, we illustrate the details of our methodology. The experimental design and data curation are introduced in Section 4, and the proposed research questions are answered in Section 5. We discuss threats to validity and future work in Section 5.4. Finally, the conclusion is drawn in Section 6. To facilitate further work by other researchers in this area, all of our scripts and datasets are available online1.
Footnote 1: [https://github.com/invisiblehead/Sparse_Attention](https://github.com/invisiblehead/Sparse_Attention), on Transformer-based model.
## 2. Related Work
In recent years, there has been growing interest in integrating neural networks with software code analysis. As elaborated below, many of these methods are primarily tailored to short code sequences, which is inappropriate for many industrial contexts, where certain code analysis tasks become notably intricate due to the escalating scale of projects. To illustrate, the first author's summer internships at Google and Microsoft Research revealed the following observations:
Figure 1. An example of SparseCoder. Token pruning on accumulative attention matrices, where full attention depicted in (a) and sparse attention visualized in (b). The accumulation is conducted vertically after row-wise softmax as formulated in Transformer models. Given a pre-defined threshold as 0.5, tokens marked with ✗ in both (a) and (b) denote trivial words pruned away since their accumulative attention scores fall below the threshold. (a) details the token pruning process on a single self-attention layer of Transformer model as demonstrated by Kim et al. (2017), and (b) delves into the token pruning process within our sparse attention layer of SparseCoder, achieving greater computational efficiency through a sliding window strategy with a window size of three. Finally, (c) visualizes token pruning post-multiple attention layers, demonstrating the elimination of trivial tokens.
* At Google, a software engineer may submit a single CL (change list to the Google codebase, Google3) that encompasses multiple scripts or projects. Such a change list can span hundreds or thousands of lines of code.
* In Microsoft's Windows Defender system, statistics from an extensive offline PowerShell dataset in 2022 indicated that over 40% of warning messages exceed thousands of tokens in length. Such lengths surpass the capabilities of conventional models.
Given the outlined challenges, the following sections of this paper delve into methods for pruning large token spaces specifically within the context of vulnerability detection in source code analysis.
### Code Summarizing
Source code summarization, extensively studied in recent years, aims to generate short and concise natural language descriptions of source code to facilitate developer understanding and maintenance. Hu et al. (Hu et al., 2020) propose _DeepCom_, which combines natural language processing techniques, Abstract Syntax Trees, and an Encoder-Decoder framework to automatically generate comments for Java methods, helping developers comprehend Java programs when maintaining such projects. Wan et al. (Wan et al., 2020) introduce an improved deep reinforcement learning framework that incorporates an Abstract Syntax Tree structure as well as the sequential content of code snippets to tackle automatic source code summarization. Wu et al. (Wu et al., 2020) present a structure-induced Transformer model that encodes sequential code inputs with multi-view structural clues to capture the various semantics of programs. Zhang et al. (Zhang et al., 2020) take advantage of both neural and retrieval-based techniques by combining the input code snippet under test with its two most similar snippets retrieved from the training set, from both syntactic and semantic perspectives. The default input sequence length in these works is 400 tokens or less.
### Code Clone Detection
Code clone detection (Zhu et al., 2020; Wang et al., 2020; Wang et al., 2020) is an essential task for the maintenance and evolution of software projects; it evaluates the similarity of internal source code representations, such as identifier names and syntactic fragments at the statement and function level (Wang et al., 2020). Recent studies define four major types of clones, namely exact clones, renamed clones, near-miss clones, and semantic clones. Some other types are used in specific experimental settings, e.g., structure clones and function clones. White et al. (White et al., 2020) propose learning-based code clone detection techniques that utilize a Recurrent Neural Network to automatically associate patterns extracted at the lexical level with patterns found at the syntactic level, by inducing representations at different levels of granularity. Compared with a prominent structure-oriented technique, Deckard (Decker, 2019), which leverages a parse tree instead of an AST, White et al. (White et al., 2020) report code clone pairs that are undetected or suboptimally reported by Deckard. Although White et al. claim that their approach can encode arbitrarily long sequences of embeddings to characterize fragments, this RNN-based method also suffers from long-range context dependencies, as most sequence transduction approaches do: it is more challenging for these models to learn long-range dependencies over longer paths between positions in the input and output sequences (Wang et al., 2020). Moreover, current research works mostly focus on analyzing short sequences of source code.
### Transformers
Transformer-based models (Wang et al., 2020; Wang et al., 2020; Wang et al., 2020; Wang et al., 2020) have achieved state-of-the-art results in sequence analysis tasks (Wang et al., 2020; Wang et al., 2020; Wang et al., 2020), e.g., RoBERTa (Wan et al., 2020) in natural language processing and ViT (Wang et al., 2020) in computer vision. However, it becomes more and more challenging to apply these architectures to downstream tasks efficiently, given the large model sizes, increasingly complex datasets, demand for real-time inference, and limited computing resources. Most language models, not only Transformer-based ones, utilize the pipeline illustrated in Figure 2 to conduct classification or regression on downstream tasks. Since pre-training is a time-consuming and GPU-demanding stage, our empirical study leverages only the fine-tuning and inference stages. Various methodologies have been proposed to enhance model efficiency during the inference stage. Pruning, proposed by LeCun et al. (Lecun et al., 2019), is one of the popular approaches for compressing Transformer models. Generally, by getting rid of trivial or unimportant weights in neural networks, pruning can reduce inference time and memory requirements with limited performance loss by avoiding unnecessary computation (Wang et al., 2020).
Previous studies (Wang et al., 2020; Wang et al., 2020; Wang et al., 2020) have shown that the computation cost (e.g., memory requirement) grows quadratically with the input sequence length in the attention layers of Transformer models. This research topic has increasingly garnered attention from both industry and the research community. Several research works focus on incorporating different sparse attention mechanisms into Transformer models, e.g., _Longformer_ (Wang et al., 2020), _Extended Transformer Construction_ (ETC) (BigBird, 2019), BigBird (Bird, 2019), _Global Memory Augmentation for Transformers_ (GMAT) (Bird et al., 2019) and _LongLoRA_ (Borda et al., 2020). Most of the released
Figure 2. Pipeline of utilizing natural language models in downstream tasks. In our empirical study, only the fine-tuning and inference stages are leveraged. Fine-tuning refers to adjusting the parameters of pre-trained NLP models with a training set of the specific downstream task, and inference means evaluating the fine-tuned models on the test datasets of our downstream task.
models described in research papers and pre-prints are pre-trained on long documents in NLP tasks, using natural language datasets. Google LLC also has an internal LLM pre-trained on snapshots of its internal codebase, Google3. Similar research work, _CodeReviewer_ (Nakamoto et al., 2019) proposed by Li et al., is pre-trained on a large open-source dataset from GitHub in the code review scenario, consisting of code diff hunks and code review comments. Google goes a step further by pre-training a longer-context LLM on its internal codebase to facilitate code intelligence tasks such as program repair and automatic code completion within longer contexts.
To enhance model efficiency, there are generally two groups of pruning approaches based on the pruning pattern. Unstructured pruning removes less salient connections in arbitrary patterns of sparse parameters and feature maps in deep learning models. However, research shows that sparse networks obtained with unstructured pruning do not yield significant efficiency gains when deployed on GPUs. Structured pruning removes large parts of a network in structured ways, such as a layer or a channel in a CNN, or a head in a multi-head self-attention layer of a Transformer (Nakamoto et al., 2019). However, the latter approach mainly focuses on facilitating hardware implementation instead of diving into a profound analysis of the inner characteristics of model sparsity.
## 3. Methodology
### Baselines
In this study, we leverage three prominent methodologies from the NLP and source code analysis domains as baselines against which to compare and evaluate the efficiency of SparseCoder. Below is a recap of the three baseline methods.
#### 3.1.1. Recurrent Neural Networks
RNNs such as LSTM (Hochreiter and Schmidhuber, 1993) and gated RNNs (Hochreiter and Schmidhuber, 1993) were firmly established as prominent methods for sequence analysis, machine translation, and language modeling. These models make recurrent connections between neighboring positions and generate a sequence of hidden states \(s_{t}\), each computed from the previous hidden state \(s_{t-1}\) and the input at the current position \(t\). However, this inherently sequential nature hinders parallelization of the training process, and these models tend to miss global information when it comes to long-dependency sequence analysis.
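A minimal illustration of that recurrence (a vanilla RNN cell with our own variable names):

```python
import numpy as np

def rnn_step(s_prev, x_t, W_s, W_x, b):
    """s_t = tanh(W_s @ s_{t-1} + W_x @ x_t + b): each hidden state depends
    on the previous one, so steps cannot be parallelised across time."""
    return np.tanh(W_s @ s_prev + W_x @ x_t + b)

rng = np.random.default_rng(0)
H, D = 8, 4                                   # hidden and input dimensions
W_s = 0.1 * rng.normal(size=(H, H))
W_x = 0.1 * rng.normal(size=(H, D))
b = np.zeros(H)
s = np.zeros(H)
for x_t in rng.normal(size=(10, D)):          # unavoidable sequential loop
    s = rnn_step(s, x_t, W_s, W_x, b)
```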
#### 3.1.2. RoBERTa
RoBERTa (Roh et al., 2019) is a replication study by Facebook AI in 2019 based on checkpoints of the BERT model. The study demonstrates that the prior benchmark model BERT (Li et al., 2020) was significantly undertrained. The authors showed this by implementing several simple modifications, namely training over more data with longer sequences and larger batches, removing the next-sentence-prediction objective, and dynamically changing the masking pattern on the training data. The resulting model achieves competitive results on all nine natural language (GLUE) tasks compared with prior benchmark models.
#### 3.1.3. CodeBERT
CodeBERT (Li et al., 2020) is the first _NL-PL_ Transformer-based framework pre-trained on both natural language and six programming language datasets. The model parameters are optimized in the pre-training process with two objectives, masked language modeling (MLM) and replaced token detection (RTD). Although the architecture of CodeBERT is identical to RoBERTa, CodeBERT achieves benchmark results on _NL-PL_ tasks, such as natural language code search and code documentation generation.
### Sparse Attention
Although the full attention mechanism is a powerful and vital attribute of Transformer-based models, these prior models share a core disadvantage: memory requirements that grow quadratically with sequence length. Consequently, given the available computing resources, this approach cannot process entire long input sequences (over 512 tokens) at once. To address this limitation, several research works have explored sparse attention mechanisms that analyze longer input sequences by reducing the overall algorithmic complexity.
Beltagy et al. (Beltagy et al., 2019) propose Longformer, a modified Transformer architecture that adopts local and global attention operations scaling linearly with sequence length, making it practical for processing long sequences. Within this framework, the local attention mechanism, which uses a sliding-window scheme, is the crucial component for reducing complexity and scaling to long inputs. Global attention remedies the local attention's loss of long-range dependencies by having special tokens attend to every other token in the input sequence. The technical details of these two attention mechanisms are illustrated in the following subsections. Several other papers make similar attempts at sparse attention by combining different global (e.g., random attention generated from a graph) and local attention mechanisms in Transformer-based models, e.g., _Extended Transformer Construction_ (ETC) (Beng et al., 2019), BigBird (Song et al., 2019) and _Global Memory Augmentation for Transformers_ (GMAT) (Song et al., 2019). Most of them are pre-trained on long documents for NLP tasks.
In the following subsections, we will illustrate more of the technical details in different attention mechanisms related to this paper.
#### 3.2.1. Full Attention
Given a sequence chunk of length \(n\) in natural language, we preprocess the chunk with standard natural language preprocessors. After tokenization, each token is fully connected to every other token in the multi-head self-attention layer, so that everything is routed to everything. The highlighted principal diagonal, shown in (a) of Figure 6, represents each token's attention to itself; the units are the tokens of the input sequence. The complexity is dominated by \(O(n^{2})\).
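To make the quadratic cost concrete, here is a minimal single-head full-attention sketch in PyTorch; the projection setup and names are illustrative, not SparseCoder's actual implementation:

```python
import torch

def full_attention(x, w_q, w_k, w_v):
    """Single-head full self-attention over a sequence of n token embeddings.

    x: (n, d) token embeddings; w_q/w_k/w_v: (d, d) projection matrices.
    The score matrix `scores` is (n, n): every token attends to every
    other token, so memory and compute grow quadratically with n.
    """
    q, k, v = x @ w_q, x @ w_k, x @ w_v            # (n, d) each
    scores = (q @ k.T) / (q.shape[-1] ** 0.5)      # (n, n) -- the O(n^2) term
    attn = torch.softmax(scores, dim=-1)           # row-wise normalization
    return attn @ v                                # (n, d) contextualized embeddings

n, d = 8, 16
x = torch.randn(n, d)
weights = [torch.randn(d, d) / d ** 0.5 for _ in range(3)]
out = full_attention(x, *weights)                  # out.shape == (8, 16)
```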
#### 3.2.2. Local Attention
As illustrated in Figure 3, the input sequence in this example is given a sliding window of size 2 for ease of illustration. Given a sequence of length \(n\), we start from the green units on the left; at each step from bottom to top, the window slides so that each unit it covers (marked red) in the current step is connected with all the red units of the next step. In this way, every unit attends to its immediate neighbors. As the window slides across the input sequence, _Unit_ \(i\) in step \(q\) is not only directly attended by units \(i-w\) to \(i+w\) but is also indirectly connected to more units, because its immediate neighbors are attended by those units. Generally, with sliding-window attention, tokens lose information about distant units within a single step but regain it through depth by stacking multiple layers. As the depth of layers grows, a single unit's receptive field becomes increasingly large.
By sliding the small window and stacking multiple layers, everything in the chunk eventually attends to everything. The window size is an engineering trade-off between efficiency and performance: smaller windows are less computationally expensive due to fewer nonzero values (better efficiency), while larger windows have richer representational power and often improve performance. The complexity is dominated by \(O(n\times w)\), where \(w\) is the window size and \(n\) is the length of the sequence chunk. Treating \(w\) as a constant, the overall complexity is \(O(n)\). As such, Transformer models with the local attention mechanism can process longer input sequences than full-attention Transformers.
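A naive sketch of the sliding-window computation follows; real implementations (e.g., Longformer) use banded matrix kernels rather than a Python loop, so this is for illustration only:

```python
import torch

def local_attention(x, w_q, w_k, w_v, w):
    """Sliding-window attention: token i attends only to tokens i-w..i+w.

    A per-token loop for clarity. Each token computes at most 2w+1 scores,
    so the cost is O(n * w) rather than O(n^2).
    """
    n, d = x.shape
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    out = torch.zeros_like(x)
    for i in range(n):
        lo, hi = max(0, i - w), min(n, i + w + 1)    # window around token i
        scores = (q[i] @ k[lo:hi].T) / d ** 0.5      # at most 2w+1 scores
        out[i] = torch.softmax(scores, dim=-1) @ v[lo:hi]
    return out
```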
### Token Pruning
A recent observation in sequence analysis is that not all tokens in the input are necessary for model performance, and the overall computation during inference can be significantly reduced by removing less pertinent tokens (Kumar et al., 2019). Compared with model parameter pruning, token pruning handles token sparsity by discarding less salient tokens in the importance matrix while preserving performance (Kumar et al., 2019; Kumar et al., 2019; Kumar et al., 2019; Kumar et al., 2019; Kumar et al., 2019). Prior studies on token pruning generally fall into three families: 1) a single fixed pruning rate, 2) a top-\(k\) proportion of the sequence length, and 3) a threshold adapted to both the sequence length and the input context. Figure 1 visualizes token pruning by weights in one of the full attention layers: given an example sequence, the full attention layer attends to each pair of tokens and generates an attention score matrix. After normalizing the matrix horizontally via a softmax function, as in the Transformer, it can be visualized as a heatmap. Accumulating the attention scores vertically summarizes each token's importance score, as illustrated at the bottom of Figure 1; different pruning strategies can then be applied. In the first family, a single fixed pruning rate may over-prune short sequences and under-prune long ones, damaging model performance. In the second family, such as SpAtten (Spatten, 2018), a proportional configuration for each layer removes trivial tokens, and the pruning ratio adapts to the sequence length; however, contextual information is not considered when adjusting the per-layer ratio. In the third family, Kim (Kim, 2019) proposed Learned Token Pruning (LTP), which uses threshold-based pruning to prune tokens adaptively: in the fine-tuning stage, both the model parameters and the thresholds are optimized based on sequence length and context, and in the pruning stage, each token's accumulated attention score is compared against a learned per-layer threshold to adaptively remove trivial tokens.
In this work, the SparseCoder framework leverages token pruning to remove tokens layer-wise in a Transformer with sparse attention, reducing the computational footprint at inference time. We adopt Learned Token Pruning (Kumar et al., 2019) in our sparse-attention Transformer, which consists of two strategies, soft pruning and hard pruning. Given a Transformer-based model \(M\) fine-tuned on security defect detection datasets, the adaptively learned token pruning algorithm comprises three steps as follows:
* Step 1 (soft pruning): the model parameters and the per-layer pruning threshold \(\theta_{l}\) are trained by applying a differentiable soft mask with decimal values, where \(\theta_{l}\) is the soft pruning threshold of layer \(l\).
* Step 2: binarize the decimal masks generated by soft pruning and fix the thresholds, yielding \(Mask_{l}(x_{i})\), a binarized mask for token \(i\) in layer \(l\), computed from the importance score \(S_{l}(x_{i})\) of token \(i\) in layer \(l\).
* Step 3 (hard pruning): remove tokens whose binarized mask is 0 and keep those whose mask is 1, then fine-tune the model parameters after hard pruning.
The whole token pruning process is layer-wise and adaptive; a minimal sketch of the three steps appears below. Finally, the token pruning results are visualized in the experimental analysis section to make this black-box methodology more transparent and interpretable for software engineering researchers and engineers.
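The following is a minimal sketch of the three LTP steps; the function names and the temperature parameter are illustrative assumptions, not SparseCoder's actual code:

```python
import torch

def soft_mask(importance, threshold, temperature=0.1):
    """Step 1 (soft pruning): a differentiable mask in [0, 1], so both the
    model parameters and the per-layer threshold can be trained end-to-end."""
    return torch.sigmoid((importance - threshold) / temperature)

def hard_mask(importance, threshold):
    """Step 2: binarize the soft mask once the thresholds are frozen."""
    return (importance > threshold).float()

def hard_prune(hidden_states, importance, threshold):
    """Step 3 (hard pruning): physically drop tokens whose mask is 0;
    the remaining parameters are then fine-tuned."""
    keep = hard_mask(importance, threshold).bool()
    return hidden_states[keep]   # (n_kept, d) token representations
```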
Figure 4. Overview of experimental design of this work.
Figure 3. A demonstration of sliding window mechanism for local attention in Transformer, where the token length is \(n\), the window size is \(w\) and \(i\) is the \(i\)-th token in the sequence.
## 4. Empirical Study of Vulnerability Detection
### Experimental Design
The overall design of our system is shown in Figure 4. We apply a logical _OR_ operation over the five CWE labels from the multi-task classification to generate a single label: a function-level code snippet is labeled positive (buggy) if at least one of the five types of CWE issues is identified in that function.
After that, random downsampling is used to balance the ratio of majority samples (those labeled non-anomalous) in the training set. The test set is kept unchanged to allow fair comparison across the whole experimental pipeline. As a simple sampling strategy, downsampling is widely used to tackle class imbalance and also helps reduce overall training overhead.
We also compare model efficiency across different configurations by measuring FLOPs. To compare the efficiency of the full and sparse attention mechanisms, we calculate the FLOPs of each Transformer model by breaking it down into the FFN (feed-forward layers), the projection layers (for queries, keys, values, and attention outputs), the attention layers (full or sparse attention), and other operations (e.g., embedding, normalization, and multi-head bookkeeping) (Krizhevsky et al., 2014). More specifically, we report FLOPs and performance changes as the window sizes and sequence lengths vary.
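As a rough illustration of this breakdown, the following back-of-the-envelope estimator counts only the dominant matrix-multiplication terms; the hidden size, FFN expansion factor, and the terms it ignores are simplifying assumptions, not the exact accounting used in our experiments:

```python
def transformer_flops(n, d=768, layers=12, window=None):
    """Rough per-forward-pass FLOPs for one Transformer encoder.

    n: sequence length, d: hidden size. If `window` is None we assume full
    attention (O(n^2 * d) per layer); otherwise sliding-window attention
    (O(n * w * d)). FFN and Q/K/V/output projections are O(n * d^2).
    Embeddings, layer norm, and softmax are ignored for a first-order view.
    """
    proj = 4 * n * d * d                       # query/key/value/output projections
    ffn = 2 * n * d * (4 * d)                  # two dense layers, 4d hidden units
    attn = n * n * d if window is None else n * (2 * window + 1) * d
    return layers * (proj + ffn + 2 * attn)    # 2x: scores and weighted value sum

full = transformer_flops(n=1024)
sparse = transformer_flops(n=1024, window=64)
print(f"sparse/full cost ratio: {sparse / full:.2f}")
```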
### Dataset
As outlined above in §2, most current works focus on short-sequence analysis of programming languages (Krizhevsky et al., 2014; Krizhevsky et al., 2015; Krizhevsky et al., 2015), likely limited by the capabilities of their algorithms.
Upon conducting a comprehensive literature review on source code analysis and vulnerability detection, we identified a single dataset containing long sequences that addresses 1) classification tasks related to source code analysis and vulnerability detection, and 2) the reliability of open-source code. This dataset, introduced by Russell et al. in 2018 (Russell et al., 2018), targets vulnerability detection within source code. Notably, the dataset is curated at the function level, the most granular level that still provides a comprehensive view of the subroutine flows within the code.
#### 4.2.1. Data Curation
Russell et al. (Russell et al., 2018) compiled millions of function-level examples of C/C++ source code from the SATE IV Juliet Test Suite 2, the Debian Linux distribution 3, and public Git repositories on GitHub 4. Although the SATE IV dataset has samples labeled with anomalies from 118 different Common Weakness Enumerations (CWEs), it consists of synthetic code snippets rather than original source code, which may not suffice as a training set. Debian, known as Debian GNU/Linux, is a free and open-source Linux-based operating system established by Ian Murdock's team in 1993 and widely deployed; there exists a well-managed and curated source code archive of Debian package releases. GitHub provides distributed version control with Git, access control, bug tracking, task management, continuous integration, and more. As of June 2022, statistics show over 83 million developers and more than 200 million repositories on GitHub. Compared to Debian packages, GitHub covers a wider range of codebases but is often of lower quality. Samples collected from both Debian and GitHub required extra labeling effort.
Footnote 2: https://samate.nist.gov/SARD/test-suites/112
Footnote 3: https://www.debian.org/
Footnote 4: https://github.com/
Another essential step of data curation is data cleaning. In open-source projects, function-level code cloning is commonly observed within and across projects; similar functions can appear in both the training and test sets even when they look quite diverse at the raw source code level. Removing such potential duplication efficiently avoids performance inflation and concealed overfitting caused by this kind of data leakage. Russell et al. (Russell et al., 2018) conducted a rigorous duplicate-removal process, removing functions that were similar in their lexed representations or in their compile-level feature vectors. After removal, only 10.8% of samples remained as distinct functions and were used in further study.
#### 4.2.2. Ground Truth
Russell et al. (Russell et al., 2018) explored the feasibility of three labeling approaches: static analysis, dynamic analysis, and commit-message/bug-report tagging. However, dynamic analysis is highly computationally expensive, requiring nearly a day to analyze 400 functions from a single module of the LibTIFF package in the ManyBugs dataset. The commit-message-based approach proved challenging: it cannot guarantee label quality and requires extra human inspection. Finally, three static analysis tools (namely Clang 5, Cppcheck 6, and Flawfinder 7) were used to obtain the ground truth for this dataset. These tools address and detect different kinds of anomalies in C/C++ source code. Clang covers a wide scope of vulnerability detection and additionally checks programming style, syntax, and other aspects that are less likely to be anomalous. Cppcheck provides the filename, line, severity, alert identifier (with a message), and CWE for each alert, rather than style issues. Flawfinder uses simple text pattern matching, ignores comments and strings, and is geared toward CWEs rather than style.
Figure 5. Token length distribution statistics on the test dataset (following a long-tail distribution).
The multiple analysis results are combined, and irrelevant alerts not associated with security anomalies are pruned away. A professional security research team generated the binary labels and categorized the anomalies into five classes, summarized in Table 1. In addition, the overall dataset is highly imbalanced.
We also analyze the tokenization results to examine the distribution of token lengths and the necessity of SparseCoder. Over 24 percent of samples in the training set exceed the 512-token limit. As illustrated in Figure 5, token lengths follow a long-tail distribution. For samples shorter or longer than the configured maximum token length, we use two schemes, padding and truncation, to produce input sequences of the fixed length required by Transformer-based models. Repeating the tokenization and visualization process shows that the test set has the same token length distribution as the training set.
### Evaluation Metrics
The experimental results are reported in terms of the following metrics: accuracy, precision, recall, F1, false alarm, AUC and loss (namely binary cross-entropy loss on the test set).
Previous research (Wang et al., 2017; Wang et al., 2018) has used floating point operations (FLOPs) as a hardware-agnostic measure of computational cost, counting how many floating-point calculations a model performs. Training and inference in neural networks, especially deep learning models, involve a vast number of matrix multiplications and other operations, and FLOPs provide a standardized way to estimate and compare the computational effort required by different models or frameworks. As LLMs grow in size, efficiency becomes paramount: models with billions or even trillions of parameters demand significant time and energy during both pre-training and deployment. Measuring and optimizing FLOPs is therefore essential for model efficiency and scalability in practical industrial deployments.
### Statistical Tests
In this study, we report the median results of ten repeated runs for each group of experiments. To select the "best" learning methods, we follow the advice of Rosenthal et al. (Rosenthal et al., 2019) by conducting statistical tests. Rosenthal et al. discuss parametric methods for asserting that one result is within some small effect size of another (i.e., it is "close to" it). They list dozens of effect size tests divided into two groups: the \(r\) family, based on the Pearson correlation coefficient, and the \(d\) family, based on absolute differences normalized by, e.g., the size of the standard deviation. Using the most direct \(d\)-family method, one distribution can be considered the same as another if their mean values differ by less than \(d\) times the standard deviation, where \(d\) is computed separately for each evaluation measure (accuracy, precision, recall, F1, false alarm, AUC and loss).
To visualize that "close to" analysis, in all our results:
* We calculate the standard deviation of each row in Table 2, Table 3 and Table 4, denoted \(STDEV\).
* Any cell within \(d\times STDEV\) of the best value is highlighted in red or gray. Red cells are "winners" for rows to be maximized, gray cells are "winners" for rows to be minimized, and cells without highlighting are "losers".
* For accuracy, precision, recall, F1 and AUC, the "best" cells have the highest value (red), since the goal is to maximize these metrics. For false alarm and loss, the "best" cells have the lowest value (gray), since those metrics are to be minimized.
We follow the advice of a widely cited paper by Sawilowsky (Sawilowsky, 2018) when deciding the value of \(d\) in our statistical analysis, which asserts that "small" and "medium" effects can be measured using \(d=0.2\) and \(d=0.5\) (respectively). Splitting the difference, we analyze this data looking for differences larger than \(d=(0.5+0.2)/2=0.35\).
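The winner/loser marking can be expressed in a few lines; this is a sketch of the rule (the exact standard deviation convention used for the tables may differ):

```python
import statistics

def winners(values, maximize=True, d=0.35):
    """Mark cells 'close to' the best value in a results row.

    A cell wins if it lies within d * stdev of the best value, following
    the d-family effect-size test described above (d = 0.35 here).
    """
    best = max(values) if maximize else min(values)
    margin = d * statistics.stdev(values)
    return [abs(v - best) <= margin for v in values]

f1_row = [25.75, 42.45, 43.79, 43.59, 43.14]   # F1 scores from Table 2
print(winners(f1_row, maximize=True))           # [False, True, True, True, True]
```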
### Extractor of Attention Scores in Sparse Attention Matrix
Our approach is inspired by LTP, which was originally proposed on top of I-BERT (Sawilowsky, 2018), a variant of RoBERTa with the full attention mechanism. To develop SparseCoder, we modified the existing implementation to adapt the token pruning algorithm to a Transformer with sparse attention. This adaptation was necessary not only because LTP's implementation is closely tied to the I-BERT framework rather than being a modular component, but also because sparse attention must be accommodated within LTP. As depicted in Figure 6, consider a short sequence of ten tokens with two special tokens (the 1st and 8th). Given a window size of 3, the sparse attention matrix is represented in Sub-figure (b), with global attention highlighted in red and local attention in green. The attention matrix from Sub-figure (b) is decomposed into global attention in Sub-figure (c) and local attention in Sub-figure (d) for clearer representation. The special tokens with global attention attend to every other token in the sequence.
In our implementation, since the local attention matrix is sparse, we avoid storing the attention scores as a full \(n^{2}\) matrix. Instead, the sparse attention matrix is reshaped into an \(n*(2w-1)\) matrix to economize on storage. In Sub-figure (d), the gray cells indicate no-attention areas, while the dark green cells represent each token's attention to itself. We accumulate the sub-diagonals of the reshaped local attention score matrix to derive each token's importance score under local attention. For instance, the local attention score for the 3rd token is given by the corresponding sub-diagonal in Sub-figure (d).
| CWE Types | Anomaly Description | Frequency/% |
| --- | --- | --- |
| 120/121/122 | Buffer Overflow | 38.2% |
| 119 | Improper Restriction of Operations within the Bounds of a Memory Buffer | 18.9% |
| 476 | Null Pointer Dereference | 9.5% |
| 469 | Use of Pointer Subtraction to Determine Size | 2.0% |
| 20, 457, 805, etc. | Improper Input Validation, Use of Uninitialized Variable, Buffer Access with Incorrect Length Value, etc. | 31.4% |

Table 1. Ground truth summarization of the security dataset.
Subsequently, the global attention scores of the special tokens are aggregated with the local attention scores of those same tokens. This yields a comprehensive importance score for each token in each layer.
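A sketch of this extractor follows. It assumes the band stores relative offsets \(-w\) to \(+w\) (i.e., \(2w+1\) columns, matching Figure 3's \(i-w\) to \(i+w\) convention); the indexing details of our actual reshaping may differ:

```python
import torch

def token_importance(local_attn, global_attn):
    """Accumulate per-token importance scores from sparse attention.

    local_attn:  (n, 2w+1) banded matrix; column c holds each token's
                 attention to its neighbor at relative offset c - w.
    global_attn: (g, n) rows for the g special tokens attending everywhere.
    Returns an (n,) importance vector: total attention each token receives.
    """
    n, band = local_attn.shape
    w = (band - 1) // 2
    importance = torch.zeros(n)
    for c in range(band):                      # walk the sub-diagonals of the band
        rel = c - w                            # relative position: -w .. +w
        for i in range(n):
            j = i + rel                        # token i attends token j
            if 0 <= j < n:
                importance[j] += local_attn[i, c]
    importance += global_attn.sum(dim=0)       # attention received from special tokens
    return importance
```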
## 5. Experimental Results
In this section, we answer the research questions raised above. We also compare the efficiency of models under modified configurations by reporting FLOPs during inference, a widely used standard measure of model efficiency for real-time prediction and for bringing large-model processing to the edge (Krizhevsky et al., 2017; Krizhevsky et al., 2017). All models are fine-tuned from released checkpoints on Hugging Face 8 (a community and data science platform providing standard tools to build, train and deploy ML models based on open-source code and technologies) with a fixed training set, and tested on the same test set produced by the data curation described in Section 4.
Footnote 8: https://huggingface.co/
### RQ1
**RQ1.** Does sparse attention make the Transformer scale better than the benchmark models (the RNN-based method and Transformers with full attention, namely CodeBERT and RoBERTa)?
For the _recurrent neural network (RNN)_ baseline, we use the benchmark _Gated Recurrent Unit (GRU)_ architecture (Chen et al., 2017), as it is more efficient than the _long short-term memory (LSTM)_, using less memory and running faster. We compare the experimental results of the Transformer with sparse attention against this RNN model, which is widely adopted in raw source code analysis (Chen et al., 2017).
For the RNN configuration, the maximum sequence length is set to 1024, matching the _max_length_ of the Transformer with sparse attention in our experiments: sequences shorter than 1024 are padded, while longer ones are truncated. After the function-level code snippets are converted to a numeric lookup table by an embedding layer with the integer-encoded vocabulary, a convolutional layer extracts the underlying features of the embedding matrix, with the filter size set to 512 (matching the maximum window size of the Transformer with sparse attention and the full-attention configurations of RoBERTa and CodeBERT). Subsequently, the number of middle layers with gated recurrent units is set to 12, matching the default number of layers in Transformer-based models, for a fair comparison. After the 12 GRU layers, a max pooling layer downsamples the representation, followed by three dense layers that fully connect the network and make a binary prediction of whether the given code snippet is vulnerable. A sketch of this baseline appears below.
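This sketch reads "filter size 512" as the number of convolution filters; the kernel width, embedding size, and hidden size are illustrative assumptions rather than our exact settings:

```python
import torch
import torch.nn as nn

class GRUBaseline(nn.Module):
    """RNN baseline: embedding -> 1-D conv -> 12 GRU layers -> max-pool -> 3 dense layers."""

    def __init__(self, vocab_size, embed_dim=128, conv_filters=512, hidden=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.conv = nn.Conv1d(embed_dim, conv_filters, kernel_size=5, padding=2)
        self.gru = nn.GRU(conv_filters, hidden, num_layers=12, batch_first=True)
        self.head = nn.Sequential(
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),                      # binary: vulnerable or not
        )

    def forward(self, tokens):                         # tokens: (batch, 1024) int ids
        x = self.embed(tokens).transpose(1, 2)         # (batch, embed_dim, 1024)
        x = torch.relu(self.conv(x)).transpose(1, 2)   # (batch, 1024, conv_filters)
        x, _ = self.gru(x)                             # (batch, 1024, hidden)
        x = x.max(dim=1).values                        # max-pool over time
        return self.head(x).squeeze(-1)                # logits
```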
The RNN model is trained on the training set and tested on the test set. As shown in Table 2, the Transformer with sparse attention significantly exceeds the benchmark RNN-based model in accuracy, precision, F1, false alarm, AUC and loss, and especially false alarm, which matters given that our security defect detection dataset is highly imbalanced. In source code analysis, decreasing the cost of inspecting falsely reported warnings generated by static code analysis tools is crucial for software engineers (especially in the early stages of a software project's life cycle) and provides meaningful guidance for improving current SA tools (Wang et al., 2017).
Moreover, the other Transformer-based models (RoBERTa and CodeBERT) also outperform the RNN model. Although our RNN model can run inference on longer sequences than traditional Transformer models (RoBERTa and CodeBERT), we conclude that Transformer-based models outperform RNN-based models in long-dependency code analysis.
According to Table 2, the Transformer with sparse attention outperforms RoBERTa on accuracy, precision, recall, F1, false alarm, AUC and loss, which indicates that longer input sequences can improve model performance. Note that the Transformer with sparse attention is pre-trained from a RoBERTa checkpoint, and both models are pre-trained on a large document corpus. Compared with CodeBERT, the Transformer with sparse attention performs slightly better only on accuracy and false alarm, and worse on precision, recall, F1, AUC and loss. In terms of model efficiency measured in FLOPs, the Transformer with sparse attention significantly reduces the computational cost of inference compared with RoBERTa and CodeBERT. And, as demonstrated above, Transformer models with full attention have restricted capability to analyze long sequences.
| Metrics | RNN | RoBERTa | CodeBERT | Sparse Atten. | SparseCoder |
| --- | --- | --- | --- | --- | --- |
| Accuracy | 70.30 | 86.32 | 86.58 | 86.79 | 86.23 |
| Precision | 15.36 | 29.17 | 30.12 | 30.12 | 29.56 |
| Recall | 79.49 | 77.91 | 80.13 | 78.82 | 78.64 |
| F1 | 25.75 | 42.45 | 43.79 | 43.59 | 43.14 |
| False alarm | 30.33 | 13.10 | 12.87 | 12.66 | 12.98 |
| AUC | 82.46 | 88.99 | 90.08 | 89.34 | 89.12 |
| Loss | 0.678 | 0.39 | 0.37 | 0.39 | 0.41 |

Table 2. Experiment results (median of ten runs) of the RNN-based model and the Transformer-based models, i.e., RoBERTa, CodeBERT, Transformer with sparse attention, and SparseCoder.
Figure 6. Illustration of combining local and global attention mechanisms and how to store the matrix efficiently in hardware: full attention (a), local + global attention with the global attention scores marked in pink (b), the decomposed global attention (c), and the efficiently stored local attention (d).
Considering that our sparse attention Transformer checkpoint was pretrained on NLP datasets, we are optimistic about its potential to yield improved outcomes once pretrained on SE datasets for industrial use cases.
### RQ2
**RQ2.** How do modifications to the window size and token length impact the performance of the Transformer with the sparse attention mechanism?
To answer this research question, we conduct an ablation study on the impact of window size on the performance of the Transformer with sparse attention. Across experiment groups, the window size is set to \(\{16,32,64,128,256,512\}\) while the maximum sequence length is held at 1024 as a control variable. As shown in Table 4, with the maximum sequence length held constant, overall performance improves slightly as the window size grows. Moreover, inference-stage FLOPs are significantly reduced by configuring a smaller window size. This shows that local attention efficiently reduces memory requirements without significantly damaging performance. Based on this observation, we suggest using the sparse attention mechanism when applying Transformer-based models to software engineering tasks.
We also conduct an ablation study on the maximum sequence length of the Transformer with sparse attention to explore its impact on model performance. For the experiment groups reported in Table 3, the maximum sequence lengths are set to \(\{32,64,128,256,512,1024\}\) respectively. Since the window size cannot exceed the sequence length and is also capped at 512 by the inherent nature of full attention, the window size is \(\min\{512,max\_length\}\), i.e., \(\{32,64,128,256,512,512\}\) correspondingly. As illustrated in Table 3, the overall performance of the Transformer with sparse attention improves as the maximum sequence length grows, indicating that longer code sequences with less truncation benefit model performance.
### RQ3
**RQ3.** Can we further advance the Transformer with sparse attention via the token pruning algorithm?
To apply the learned token pruning algorithm of Section 3.3 to the Transformer with sparse attention, we propose a novel framework, SparseCoder, implementing the attention score extractor illustrated in (c) and (d) of Figure 6, since the accumulated per-token attention scores required by the token pruning algorithm differ between sparse attention and full attention. SparseCoder adaptively prunes unimportant tokens layer-wise during the fine-tuning stage of the Transformer model and improves model efficiency by reducing computational overhead. As mentioned in Section 3.3, after layer-wise learned token pruning, both the model parameters and the thresholds are trained to optimize model performance. Binarized masks are then set in the hard pruning stage, where tokens with mask 0 are removed and tokens with mask 1 are kept. We can retrieve the mask information in the last (12th) layer of SparseCoder at the neuron level and visualize the input sequence after token pruning.
| Metrics | w = 16 | w = 32 | w = 64 | w = 128 | w = 256 | w = 512 |
| --- | --- | --- | --- | --- | --- | --- |
| Accuracy | 86.08 | 85.82 | 85.91 | 85.69 | 86.49 | 86.79 |
| Precision | 28.79 | 28.41 | 28.83 | 28.35 | 29.70 | 30.12 |
| Recall | 77.98 | 78.25 | 80.01 | 79.10 | 77.95 | 78.82 |
| F1 | 42.05 | 41.68 | 42.38 | 41.74 | 43.25 | 43.59 |
| False alarm | 13.36 | 13.66 | 13.68 | 13.85 | 13.03 | 12.66 |
| AUC | 88.45 | 88.64 | 89.52 | 88.68 | 89.23 | 89.34 |
| Loss | 0.421 | 0.414 | 0.397 | 0.409 | 0.390 | 0.386 |

Table 4. Experiment results (median of ten runs) of the ablation study on window size in the Transformer with sparse attention, with max length fixed at 1024 (padding and truncation applied).
| Metrics | n = 32 | n = 64 | n = 128 | n = 256 | n = 512 | n = 1024 |
| --- | --- | --- | --- | --- | --- | --- |
| Accuracy | 69.98 | 73.09 | 75.40 | 82.15 | 85.85 | 86.79 |
| Precision | 14.03 | 16.02 | 17.38 | 23.44 | 28.49 | 30.12 |
| Recall | 70.91 | 74.35 | 74.53 | 77.40 | 78.42 | 78.82 |
| F1 | 23.43 | 26.36 | 28.18 | 35.99 | 41.79 | 43.59 |
| False alarm | 30.08 | 26.99 | 24.54 | 17.51 | 13.64 | 12.66 |
| AUC | 77.87 | 81.19 | 28.49 | 87.25 | 88.89 | 89.34 |
| Loss | 0.559 | 0.528 | 0.523 | 0.432 | 0.410 | 0.386 |

Table 3. Experiment results (median of ten runs) of the ablation study on max sequence length in the Transformer with sparse attention, with window size fixed at min(512, max_length) (padding and truncation applied).
Figure 7. Comparing the efficiency of different models measured in GFLOPs, where 1 GFLOPs \(=10^{9}\) FLOPs. The FLOPs simulation is based on ELECTRA [11].
After that, the words or code snippets are restored from tokens using the mapping rules generated during the tokenization and embedding procedure.
Generally, there is a trade-off between the pruning threshold and model performance: a higher threshold removes more tokens, reducing more computing overhead but dropping model performance. We therefore conduct an ablation study and set the optimal final-layer threshold to 0.01. The threshold for each remaining layer is scaled linearly; for example, it is 0.0025, 0.005, and 0.0075 for the 3rd, 6th, and 9th layers respectively when the final (12th) layer's threshold is 0.01, as in the snippet below.
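The linear scaling is a one-liner, assuming 1-indexed layers:

```python
num_layers = 12
final_threshold = 0.01   # chosen by the ablation study above

# Threshold for layer l scales linearly up to the final layer:
thresholds = [final_threshold * l / num_layers for l in range(1, num_layers + 1)]
print(thresholds[2], thresholds[5], thresholds[8])   # 0.0025, 0.005, 0.0075 (layers 3, 6, 9)
```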
Figure 8 presents a visualization demo of security defect detection on function-level C code. Strikethroughs in the code snippet highlight the trivial information pruned by the algorithm and subsequently ignored by the Transformer model during the fine-tuning and testing phases. After pruning, the underlying buffer overflow issue, emphasized in red, becomes more readily discernible. Through such token pruning, our proposed SparseCoder not only enhances interpretability through visualization but also improves efficiency as measured in GFLOPs by about **two times**, as depicted in Figure 7. As a trade-off for this efficiency gain, Table 2 shows a drop of less than 1% in accuracy, precision, recall, F1, false alarm (lower is better) and AUC. This advancement notably facilitates real-time analysis and model interpretability.
### Threats to Validity
The following threats to validity may limit the generality of the conclusions drawn in this paper to datasets encountered in the future:
**Sampling bias.** First, the conclusions drawn in this paper are based on the dataset explored in this specific empirical study: security defect detection on open-source C/C++ projects. In future work, these experiments should be repeated on new datasets of other programming languages or other downstream software engineering tasks to verify the generality of this framework.
In addition, random sampling is used in this experiment to improve computational efficiency and to balance the binary labels in the training set. The negative samples in the training set were randomly down-sampled with a fixed random seed, and the down-sampled dataset was dumped to a CSV file to keep the experiment repeatable and comparable. However, the down-sampling process might still introduce bias, such as losing information from important instances.
**Parameter bias.** Multiple parameters must be configured in each classification model used in this work, namely the number of layers in the RNN model, the learning rate and batch size in RoBERTa and CodeBERT, and the window size in the Transformer with sparse attention and SparseCoder. Model performance might be improved by tuning these configurations; however, this study emphasizes model efficiency rather than exhaustive hyperparameter optimization. The batch size of each model is set as large as the computing resources allow, and the learning rate of each model is set to the same value for a fair comparison. For the window size in the Transformer with sparse attention, we conduct a series of ablation studies with window sizes of \(2^{n}\) for \(n=4,\ldots,9\). The authors of the sparse attention Transformer suggest a scheme of small window sizes in lower layers, to capture local information, and large window sizes in higher layers, to represent high-level, holistic information about the sequence. These configurations can also influence the experimental results.
## 6. Conclusion
With this technology, we offer a new high-water mark (SparseCoder) in the application of Transformers with sparse attention to source code analysis, in the arena of vulnerability detection. We show, on over two hundred thousand data points sampled from one billion function-level code snippets in open-source C/C++ projects, that this method outperforms the prior state-of-the-art model in several ways: accuracy, precision, F1, false alarm and AUC score. We also provide an empirical comparison between different Transformer-based approaches (RoBERTa, CodeBERT and the Transformer with sparse attention) under different configurations (i.e., window size and max sequence length) to verify the efficiency of sparse attention in source code analysis. The sparse attention mechanism addresses the scalability issue of Transformer models, which suffer from inherent sequence-length limitations. Further, we develop an advanced framework (SparseCoder) by implementing a learned token pruning algorithm on top of the sparse attention Transformer, improving model efficiency and interpretability through token pruning visualization.
| 2306.02797 | Human-like Few-Shot Learning via Bayesian Reasoning over Natural Language | A core tension in models of concept learning is that the model must carefully balance the tractability of inference against the expressivity of the hypothesis class. Humans, however, can efficiently learn a broad range of concepts. We introduce a model of inductive learning that seeks to be human-like in that sense. It implements a Bayesian reasoning process where a language model first proposes candidate hypotheses expressed in natural language, which are then re-weighed by a prior and a likelihood. By estimating the prior from human data, we can predict human judgments on learning problems involving numbers and sets, spanning concepts that are generative, discriminative, propositional, and higher-order. | Kevin Ellis | 2023-06-05T11:46:45Z | http://arxiv.org/abs/2306.02797v3 |
# Modeling Human-like Concept Learning with Bayesian Inference over Natural Language
###### Abstract
We model learning of abstract symbolic concepts by performing Bayesian inference over utterances in natural language. For efficient inference, we use a large language model as a proposal distribution. We fit a prior to human data to better model human learners, and evaluate on both generative and logical concepts.
## 1 Introduction
Human learning is rapid and broad. Consider a child learning a new routine like 'high-five': given just 1 or 2 examples, they can learn that move, and even generalize to variations like low-fives. Or consider that same child learning the basics of a game like Pacman, or a mathematician extrapolating '1, 4, 16, 64,...'. In each of these cases, the relevant routine, rules, or concept can be learned from relatively little data, and only seconds to minutes of experience. Furthermore, the space of possible concepts is essentially boundless, because concepts can compose to yield more complex constructs like 'high-five followed by a fist bump', or 'Pacman, except now you control two avatars.' Thus a key computational challenge is to understand how an intelligent system can acquire a wide range of new concepts, given modest data, and granted a modest computational budget. Building AI systems that can efficiently master many concepts is also practically valuable, because data-efficiency and broad generalization remain some of the most salient gaps between human and machine intelligence [1].
Fundamentally, we are concerned with the problem of _induction_: Inferring a generalizable pattern, rule, trend, or law from specific examples. A classic approach to induction is to start with a hypothesis space of possible concepts, and then probabilistically infer which hypothesis most likely generated the observed data using Bayes' Rule. This Bayesian paradigm has proved widely applicable across both cognitive science and artificial intelligence [2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13].
On its own, however, the Bayesian paradigm leaves important questions unanswered. Acquiring a broad range of possible concepts demands an expressive hypothesis space, but inference over a rich space of concepts comes at steep computational cost. For increased expressivity, many Bayesian models design hypothesis spaces that resemble a programming language [14; 15; 16; 17; 18; 19]. Posterior inference then corresponds to constructing high-probability programs. Each of these program-learning models requires a custom domain-specific programming language. Despite confining themselves to a domain-specific language, these models still require specialized inference machinery, such as heuristically-designed search moves [10], or exorbitant compute budgets [20].
Our goal is to build a model of humanlike concept learning that makes progress toward resolving the tension between intractable inference and expressive hypothesis classes. We propose a new model that expresses its concepts in natural language-even when the learning problem does not involve natural language. We do this for two reasons. First, language is an effective representation for many human concepts. It is compositional, richly expressive, and regularizes the learner toward natural generalizations. Second, we find we can efficiently infer natural language concepts using modern large language models [21; 22] based on the transformer architecture [23].
Like any Bayesian model, we will first define a prior over concepts, which in our case exerts a top-down pressure for naturally-structured language. Our model also has a bottom-up mechanism for efficiently inferring possible hypotheses, analogous to a recognition network [24]. The interaction of these top-down and bottom-up models is surprisingly effective as a model of humanlike learning: Given around 50 samples from the recognition network, our model can account for human patterns of generalization for concepts that are generative or discriminative, and propositional or first-order. We show the model can also capture fine-grained structure in human judgements: both subtle gradations of uncertainty, and also the dynamics of learning starting from a few examples and going to dozens of examples. Finally, a key reason why humans can learn so efficiently is because they have a good inductive bias or prior [25; 26]. Our model can fine-tune its prior to human judgements, effectively extracting a human-like prior from behavioral data. We find this gives a more faithful model of human generalization, and also improves average accuracy on concept-learning tasks.
Our modeling efforts here focus on abstract symbolic concepts, such as 'prime numbers less than 30', which we think are well-suited to natural language. We do not consider concepts grounded in perception and actuation, such as 'dog' or 'chewing'. To be clear, we do not attempt to provide a single unified account of concept learning.
However, for these abstract concepts, we provide a _rational process model_[27]. Following the Marr levels of analysis [28], this means we propose algorithmic mechanisms for concept learning that rationally approximate optimal inference, subject to bounds on computation (sampling). This contrasts with _computational-level models_, which characterize the goal of the learner, without committing to a theory of how the learner mechanistically accomplishes that goal [28]. Most Bayesian concept learning models operate at the computational level, avoiding issues of intractability [3; 14] (cf. [29]).
We contribute (1) a model of symbolic concept learning that supports efficient inference over a flexible hypothesis class; (2) an evaluation on human data from two different concept learning experiments; and (3) a simple recipe for extracting a humanlike prior over concepts, given raw behavioral data.
## 2 Background and Related Work
**Bayesian Program Learning** (BPL: [10]) models treat concept learning as Bayesian inference over latent symbolic programs. BPL models first define a domain-specific programming language spanning a space of possible concepts, and then infer a posterior over concepts \(C\) given training data \(D\) via Bayes' Rule: \(p(C|D)\propto p(D|C)p(C)\). Although the set of possible concepts remains hardcoded, the prior \(p(C)\) can be learned through hierarchical Bayesian methods. Given parameters \(\theta\) indexing possible priors, and a collection of datasets \(\mathcal{D}\), the prior can be learned by approximately solving [20]:
\[\theta^{*}=\operatorname*{arg\,max}_{\theta}p(\theta|\mathcal{D})= \operatorname*{arg\,max}_{\theta}p(\theta)\prod_{D\in\mathcal{D}}\sum_{C}p(D| C)p_{\theta}(C)\]
Bayesian models can learn from few examples, because the prior regularizes them toward reasonable generalizations. They also produce nuanced uncertainty estimates, because they represent the posterior. Program learners can also acquire any concept expressible in their domain-specific language, but this language must be appropriately circumscribed for inference to remain tractable. Systems such as DreamCoder [20] partly address this concern by growing the language and training neural networks to aid inference, but even then, general-purpose programming languages remain out of scope. We next turn to latent language, which considers the broad hypothesis space of all natural language.
**Latent Language.** Using language as an internal representation for nonlinguistic tasks was introduced by the Latent Language framework [30; 31]. These systems perform few-shot learning by searching for the language which minimizes a loss on the training examples. Given training input-output examples \(\{(x,y)\}\), the latent language approach infers the language \(C^{*}\) minimizing
\[C^{*}=\operatorname*{arg\,min}_{C\in\Sigma^{*}}\sum_{(x,y)}\text{Loss}(y,f_{ \theta}(x;C))\]
where \(f_{\theta}\) is a neural network and \(\Sigma^{*}\) is the set of all strings. Because there are infinitely many strings, another neural network samples a finite pool of candidate concepts. Relative to Bayesian learners, latent language models use maximum likelihood estimation to infer a single concept, rather than construct a posterior distribution. Like our approach, latent language also uses language as an intermediate representation, combined with a bottom-up concept proposal process. Our work also
adds Bayesian inference, and shows how to learn a prior on \(C\) from human judgments. Learning the prior proves important for both modeling human judgments and achieving high task performance.
**Induction, Abduction, and Deduction.** We address inductive problems: inferring a general pattern from specific examples [32]. Abduction is a related reasoning process where the reasoner infers an explanation for a specific observation. Abductive reasoning using modern language models has received much recent attention [33; 34], which has solidified natural language as a promising candidate for representing abductive explanations. Our work extends that paradigm to inductive reasoning by equipping it with extra Bayesian machinery. Deduction--logically inferring the truth of a proposition--has also been similarly revisited in the context of modern language models [35; 36].
## 3 Model
We start with a basic Bayesian approach. A latent concept \(C\) generates \(K\) observed examples, notated \(X_{1},\ldots,X_{K}\), according to an IID process. We abbreviate \(X_{1},\ldots,X_{K}\) as \(X_{1:K}\). For our model, \(C\) is an utterance in natural language. The learner's posterior beliefs are given by Bayes's Rule,
\[p(C|X_{1:K})\propto p(C)\prod_{1\leq k\leq K}p(X_{k}|C) \tag{1}\]
The prior \(p(C)\) comes from a neural model. The likelihood \(p(X|C)\) is domain-specific because it depends on the structure of the examples. We assume the posterior in Eq. 1 is intractable.
We model tasks that do not involve externalized language, thus beliefs over \(C\) play a fundamentally auxiliary role. We instead care about the probability that a test example \(X_{\text{test}}\) belongs to the same concept as the training examples \(X_{1:K}\). This posterior predictive quantity is
\[p(X_{\text{test}}\in C|X_{1:K})=\sum_{C}p(C|X_{1:K})\mathds{1}\left[X_{\text{ test}}\in C\right] \tag{2}\]
To make the above tractable, we introduce a proposal distribution \(q(C|X_{1:K})\). We draw from \(q\) a modest number of sampled concepts (tens to hundreds), writing those samples as \(C^{(1)},\ldots,C^{(S)}\). By only considering concepts proposed by \(q\), the infinite sum over \(C\) in Eq. 2 becomes a finite sum over \(S\) samples. Provided those proposals account for much of the posterior mass, this is a good approximation. Conventionally, \(q\) is used to construct an importance sampler [37]:
\[p(X_{\text{test}}\in C|X_{1:K})=\operatorname*{\mathbb{E}}_{C\sim p(C|X_{1:K})}\mathds{1}\left[X_{\text{test}}\in C\right]\approx\sum_{1\leq s\leq S}w^{(s)}\mathds{1}\left[X_{\text{test}}\in C^{(s)}\right],\]
\[\text{where }w^{(s)}=\frac{\tilde{w}^{(s)}}{\sum_{s^{\prime}}\tilde{w}^{(s^{\prime})}}\text{ and }\tilde{w}^{(s)}=\frac{p(C^{(s)})p(X_{1:K}|C^{(s)})}{q(C^{(s)}|X_{1:K})} \tag{3}\]
The above Monte Carlo estimate requires evaluating \(q(C^{(s)}|X_{1:K})\). The most powerful proposal distributions at our disposal, such as GPT-4 [21], do not expose this functionality, so we heuristically approximate importance sampling by deduplicating the samples and weighing each by \(p(C)p(X_{1:K}|C)\):
\[p(X_{\text{test}}\in C|X_{1:K})\approx\sum_{C\in\{C^{(1)},\ldots,C^{(S)}\}}w^ {(C)}\mathds{1}\left[X_{\text{test}}\in C\right]\text{, where }\]
\[w^{(C)}=\frac{\tilde{w}^{(C)}}{\sum_{C^{\prime}}\tilde{w}^{(C^{\prime})}} \text{ and }\tilde{w}^{(C)}=p(C)p(X_{1:K}|C)\mathds{1}\left[C\in\{C^{(1)},\ldots,C^{(S) }\}\right] \tag{4}\]
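In code, the deduplicate-and-reweigh approximation of Eq. 4 is only a few lines; here `log_prior`, `log_likelihood`, and `member` are assumed callables standing in for the components defined above:

```python
import math

def posterior_predictive(x_test, examples, proposals, log_prior, log_likelihood, member):
    """Approximate p(x_test in C | examples) via Eq. 4.

    proposals: concepts sampled from q(C | examples); deduplicated below.
    log_prior(C) and log_likelihood(examples, C) return log probabilities;
    member(x, C) tests whether x belongs to concept C.
    """
    weights = {}
    for c in set(proposals):                   # deduplicate the sampled concepts
        weights[c] = math.exp(log_prior(c) + log_likelihood(examples, c))
    z = sum(weights.values())                  # normalizing constant
    return sum(w / z for c, w in weights.items() if member(x_test, c))
```

In practice one would work in log space (e.g., with log-sum-exp) to avoid underflow; the plain exponentials here keep the correspondence to Eq. 4 obvious.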
Ultimately, the distribution \(p(C)\) should reflect the prior beliefs of human learners. To tune the prior to reflect human patterns of generalization, we assume access to a dataset of human judgments consisting of triples \((X_{1:K},X_{\text{test}},r)\), where \(r\) is the ratio of humans who judged \(X_{\text{test}}\) as belonging to the same concept as \(X_{1:K}\). More generally, \(r\) could be any average human rating in \([0,1]\). If \(\theta\) parametrizes the prior, then we match the prior to the human data by solving
\[\operatorname*{arg\,max}_{\theta}\sum_{(X_{1:K},X_{\text{test}},r)}r\log p_{\theta}(X_{\text{test}}\in C|X_{1:K})+(1-r)\log\left(1-p_{\theta}(X_{\text{test}}\in C|X_{1:K})\right) \tag{5}\]
\[\text{where }p_{\theta}(X_{\text{test}}\in C|X_{1:K})=\sum_{C}\mathds{1}\left[X_{\text{test}}\in C\right]p_{\theta}(C|X_{1:K})\text{, approximated via Eq. 4.}\]
## 4 The Number Game
The Number Game is a few-shot concept learning setup covered by classic textbooks and dissertations [5; 40]. Participants playing The Number Game are given a few example numbers belonging to a hidden concept, and then rate how likely it is that other numbers also belong to the same concept. Given just the example number _16_, the concept could be 'square numbers', 'powers of two', 'evens', 'odds but also 16', '97 and 16', 'numbers ending in 6', or infinitely many other concepts. With more examples such as _16, 8, 2, 64_, humans consistently rate powers of two as almost certainly in the concept, but gradations of uncertainty remain: other evens like 24 could plausibly belong in the concept, but humans rate odds like 23 as extremely unlikely. Examining human judgments for these and other concepts reveals a variety of belief states, including sharp, all-or-none concepts like 'powers of two,' but also soft graded concepts like 'numbers around 20' (Fig. 1, blue bars).
**Human Data.** We take human data from [40]. Eight human participants rated test numbers on a scale of 1-7, given different training sets of example numbers. We model the average rating for each test number, on each training set of example numbers.
**Prior distribution.** We consider two different prior distributions. The **pretrained prior** scores the log likelihood of each concept \(C\) using an open source language model (specifically, CodeGen 350M [41]). The **tuned prior** first extracts semantic features of the natural language concept \(C\) using a pretrained sentence feature extractor \(\phi\), specifically all-MiniLM-L6 [42], which outputs a 384-dimensional feature vector. The tuned prior maps those features to an (unnormalized) log probability via a linear mapping with parameters \(\theta\):
\[\text{Tuned prior:}\quad p_{\theta}(C)\propto\exp\left(\theta\cdot\phi\left(C \right)\right) \tag{6}\]
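A sketch of the tuned prior follows, assuming the `sentence-transformers` package and its `all-MiniLM-L6-v2` checkpoint name; the feature extractor \(\phi\) stays frozen and only \(\theta\) is trained:

```python
import torch
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")   # 384-d sentence features, frozen
theta = torch.zeros(384, requires_grad=True)        # learnable prior parameters

def log_prior(concept: str) -> torch.Tensor:
    """Unnormalized log p_theta(C) = theta . phi(C), as in Eq. 6."""
    phi = torch.tensor(encoder.encode(concept))     # phi(C), a 384-d vector
    return theta @ phi
```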
**Likelihood distribution.** Evaluating \(p(X|C)\) requires first determining which numbers belong to the concept \(C\). To efficiently enumerate those numbers, we translate \(C\) from natural language to python using Codex code-davinci-002 [22], a large language model trained on source code. We run the python code on the numbers 1..100 to determine the members of \(C\). Given the members of \(C\), we assume numbers are drawn uniformly at random from \(C\) with probability \((1-\epsilon)\), and uniformly at random from 1..100 with probability \(\epsilon\):
\[p(X|C)=(1-\epsilon)\frac{\mathds{1}\left[X\in C\right]}{|C|}+\epsilon\frac{1} {100} \tag{7}\]
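A direct transcription of Eq. 7; the \(\epsilon\) value below is illustrative rather than the fitted one:

```python
def likelihood(x, members, epsilon=0.05):
    """p(x | C) from Eq. 7: draw uniformly from C with prob 1 - epsilon,
    or uniformly from 1..100 with prob epsilon (an outlier process).

    members: the set of numbers in 1..100 belonging to concept C,
    e.g., obtained by executing the Python translation of C.
    """
    in_c = 1.0 / len(members) if x in members else 0.0
    return (1 - epsilon) * in_c + epsilon / 100.0

powers_of_two = {2 ** k for k in range(1, 7)}   # {2, 4, ..., 64}
print(likelihood(16, powers_of_two))            # high: 16 is one of 6 members
print(likelihood(23, powers_of_two))            # low: only the outlier term
```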
Figure 1: Number Game human judgments (blue bars) for different example data. For instance, the top plot shows that after seeing that _16_ belongs to the concept, humans rate _64_ as around 50% likely to belong to that same concept. Bars at zero correspond to missing data. Orange curves show our model’s predictions. The text to the right of each plot shows 5 samples from our model’s approximate posterior after proposing 100 concepts using \(q\). Human data from [40]. See also Appendix Fig. 6
Proposal distribution.We implement \(q\) using Codex code-davinci-002 [22]. We prompt Codex by adapting the cover story given to the human subjects, then append the training example numbers \(X_{1:K}\) and have it complete the natural language description of the hidden concept. We used Codex because we hypothesized that training on source code would transfer to reasoning about numbers.
Temperature, Platt transform.Because human subjects rated on a scale of 1-7, we introduce a learnable Platt transform between the model's predicted probabilities and the human judgments [43]. We also place a learnable temperature parameter on the posterior.
Parameter fitting.We want a single parameter setting that works for _all_ of the training example sets, so we fit the above parameters to the average human judgment for each test number and each set of examples. When comparing model predictions against the average human judgment on a particular test number, we always hold out that particular human data from the parameter fitting. We use Adam [44] to perform maximum likelihood estimation of the parameters, following Eq. 5.
Alternative models.To understand the few-shot learning abilities of a bleeding-edge large language model, we run GPT-4 on the number game: prompted with example numbers belonging to a concept, together with a test number, we measure the probability that GPT-4 responds "yes" to the test number belonging to the concept, vs responding "no". We transform GPT-4's predictions using a Platt transform fit to the human data. We also contrast against Latent Language [30], using the same Codex-based proposal distributions, and the same learnable Platt transform. Finally, we consider versions of our model that encode hypotheses in python instead of natural language ('code prior'), as well as a version of our model that ablates the proposal distribution by generating concepts unconditioned on \(X_{1:K}\).
Results.Fig. 2 shows that an out-of-the-box pretrained natural language prior offers a decent fit to the human data after proposing 50-100 hypotheses. Our tuned prior achieves a close fit to the human data, again using 50-100 samples. Switching from English to Python significantly degrades model fit, even when the Python prior is learned ('tuned code prior'). Ablating the proposal distribution-sampling hypotheses from the prior-also provides a poor fit: the space of possible number hypotheses is too vast to be randomly guessed without looking at the data.
These results establish that a language-generating proposal distribution can support efficient inference, and accurately model the average of a small pool of human participants. Recall that we are modeling the average of 8 human judgements, and a good fit to this average requires 50-100 samples. Therefore, our model suggests that each human might only need to draw a couple dozen samples, which we think is psychologically plausible (and practical, from an engineering perspective). In contrast, the original Number Game model considered over 5000 hypotheses [40].
Last, GPT-4's judgments are decidedly different from humans. This does not mean that GPT-4 is wrong in its predictions: there is no ground-truth for whether the number 30 should belong to the same concept as the number 60. To better understand how humans and models stack up against each other, we next consider a richer domain where accuracy can be more objectively measured.
Figure 2: Left: How well different models predict held-out human judgments (\(R^{2}\)), as a function of the sampling budget (X-axis, log scale). Error bars: \(\pm\)SEM over 3 runs with different seeds. Variance across runs decreases with number of samples. Right: holdout predictions for the best model. See also Appendix Fig. 7
## 5 Logical Concepts
We next consider concepts with more complex logical structure. Consider a concept such as _bachelor_, defined as "unmarried man", or _Valedictorian_, defined as "the person, within a single school, with the highest GPA". Using the primitives of propositional logic, we can define _bachelor_: (Male\(\wedge\neg\)Married). Using the more expressive language of first-order logic, which includes quantifiers, we can define _valedictorian_ as \(\text{Valedictorian}(x)\iff(\forall y:\text{School}(x)=\text{School}(y)\implies \text{GPA}(x)\geq\text{GPA}(y))\). Discovering the discrete logical structure that best explains a dataset of examples is a well-known AI challenge [45; 46; 47]. Within the cognitive sciences, understanding how people come to grasp logical relations has been proposed to be a key component of understanding how people comprehend number systems, geometry, causal processes, social and kinship relations, and other domains [48; 49; 50; 51].
For our modeling, we consider an online learning setup from [52] where a learner observes a stream of examples of an unknown logical concept. On each example, the learner observes a fresh batch of 1-5 objects, and must pick out which objects belong to the hidden concept. The learner then gets feedback on which objects in the batch actually belonged to the concept, and then the process repeats for a new batch. Fig. 3 illustrates this experimental setup: each object is a shape defined by its size (small, medium, large), color (green, yellow, blue), and shape (triangle, rectangle, circle). Recording each human response to each shape on each batch gives a fine-grained learning curve capturing how learning unfolds over dozens of examples. These learning curves signal what concepts people readily learn, what patterns of mistakes they make, and what concepts remain essentially unlearnable from even dozens of examples. We obtain this human data from [52], which covers 112 concepts, collecting judgments from 1,596 human participants as they attempt to learn each concept over 25 batches of examples. These 25 batches correspond to \(\approx\) 75 examples/concept, and each concept was run on \(\approx 20\) participants. Most human learning happens over the first dozen batches, so we take the first 15/25 batches. (Also, \(q\) is implemented using GPT-4, which is expensive and slow. Taking the first 15/25 nearly halves the cost and time of the model.)
**Model.** Our modeling approach is similar to The Number Game, except we now have a discriminative learning problem instead of a generative one, and an online learning setup where the learner observes a stream of examples. To model online learning, we draw fresh proposals from \(q\) for each training batch, and perform Bayesian inference over all proposals drawn so far. To model discriminative learning, each example is now a triple \((B,T,Y)\), where \(B\) is a batch of shapes, \(T\) is a test shape in that batch, and \(Y\) is zero or one depending on whether \(T\) in \(B\) is an example of the concept. Our likelihood model assumes human subjects predict according to \(C\) with probability \((1-\epsilon)\) and pick randomly with probability \(\epsilon\), with a base rate \(\alpha\) of labeling an example as positive. Following [52; 53], we also model a simple memory decay process where the relative importance of earlier observations falls off according to a power law with parameter \(\beta\):
\[p(Y=1|B,T,C)=(1-\epsilon)\mathds{1}\left[(B,T)\in C\right]+\epsilon\alpha \tag{8}\]
\[\log p(X_{1:K}|C)=\sum_{(B_{k},T_{k},Y_{k})\in X_{1:K}}(1+K-k)^{-\beta}\log p(Y_{k}|B_{k},T_{k},C) \tag{9}\]
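A direct transcription of this likelihood into code might look as follows (the concept predicate and parameter values here are illustrative):

```python
import math

def response_prob(in_concept, eps, alpha):
    # Eq. (8): follow the concept w.p. (1 - eps); otherwise guess positive
    # with base rate alpha.
    return (1.0 - eps) * (1.0 if in_concept else 0.0) + eps * alpha

def log_likelihood(observations, concept, eps, alpha, beta):
    # observations: list of (batch, test_shape, label) triples, oldest first.
    # concept(batch, test) -> bool is an illustrative predicate.
    K = len(observations)
    total = 0.0
    for k, (batch, test, y) in enumerate(observations, start=1):
        p1 = response_prob(concept(batch, test), eps, alpha)
        p = p1 if y == 1 else 1.0 - p1
        # Eq. (9): earlier observations decay as a power law in recency.
        total += (1 + K - k) ** (-beta) * math.log(max(p, 1e-12))
    return total
```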
Figure 3: Concept learning experiment from [52] (illustration used with permission). On each batch, participants label which shapes they think belong to a new concept (called _wudsy_). Previous batches are shown with the ground truth positive examples surrounded by a square. From these examples, participants might infer a simple concept like “green triangles”, and select the second test object.
As before, we translate hypotheses from natural language into python using code-davinci-002, and evaluate the likelihood term above by running python code. We consider both pretrained and tuned priors. Our proposal distribution \(q\) comes from running GPT-4 on prompts that illustrate previous batches, either as truth tables (for propositional concepts) or as a raw list of previous observed batches (for higher-order concepts). We again place a learnable temperature on the posterior.
**Bayesian Program Learning Baseline (BPL).** We contrast with a strong BPL baseline. It uses a grammar over expressions in first-order logic, plus predicates for shape, color, and size, totaling 28 primitives that were selected by the creator of the logical concept learning dataset (A.3.3). The BPL baseline uses the same memory-decay likelihood model, and fits (tunes) its prior by estimating probabilities for each logical primitive. It is implemented in Fleet [54], the state-of-the-art in fast parallelized MCMC over grammatically structured hypothesis spaces.
Our model differs in two important ways. First, the baseline is given first-order primitives that were chosen specifically for this dataset. While our model can use natural language expressing first-order concepts (e.g., _the only_ for \(\exists\)), it can also express concepts like _objects with the least common color_ that are unrepresentable by the baseline, and which are not in the dataset.
The second difference is that our model supports efficient inference via bottom-up proposals, while this baseline performs a stochastic search over hypotheses (MCMC), requiring many more samples. It takes \(10^{6}\) Metropolis-Hastings proposals per batch, and per learning curve, totaling \(\approx 10^{9}\) proposals, which are deduplicated to yield \(\approx 45,000\) unique concepts, which provide the support of subsequent posterior inference. This means every BPL posterior is informed by the total \(\approx 10^{9}\) sampling moves.
**Results.** Our model's predictions generally align well with human judgments (Fig. 4). Using 100 proposals per batch, our model explains 81% of the variance in human responses (\(R^{2}\) = \(.81\)), which is much higher than GPT-4 on its own. The model is also more accurate at the actual task than GPT-4, and within 3% of human-level accuracy (Fig. 5C). The model also explains the human data somewhat better than MCMC over first-order logic, even when this BPL baseline is granted an implausibly large sampling budget. Tuning the prior proves very important for this domain, possibly because, relative to The Number Game, these are more complex concepts. Hence each concept can be expressed in a variety of syntactic forms. The pretrained model is highly sensitive to syntax and cannot learn to attend to semantic features, unlike the tuned model.
Although our model explains most of the variation in the human responses, nontrivial variation remains. One possible reason is that we are modeling the responses of many human subjects, and different subject groups per concept, so it is difficult to capture the full variability of the human responses. In this sense, building a model that accurately reflects the judgments of a population of humans may be much more difficult, and require a higher sampling budget, than building a model of a single person. It is also the case that our model slightly underperforms humans (Fig. 5C), and so a higher fidelity model might come from simply performing the task better. More fundamentally, these learning curves contain data from many successive trials, so they likely contain subtle phenomena such as anchoring [38], garden-pathing [55], and other memory and order effects that are probably not well accounted for by the pure probabilistic inference our model approximates, nor by our simple memory decay model.
Figure 4: Model fits on holdout data. Error bars: \(\pm\)SEM over 3 runs. (Error bars often close to zero)
At the same time, our model does account for many fine-grained details in the behavioral data, including predicting specific patterns of successes and failures (Fig. 5A). Because our approach is manifestly interpretable--it explicitly verbalizes its hypotheses in human language--we can inspect its maximum a posteriori concept at the end of each learning episode, and observe that its successes typically occur because its verbalization of the concept describes the correct first-order law. Conversely, when humans make highly selective failures, we can probe the model to suggest what alternative hypotheses humans may have incorrectly inferred (Fig. 5A, top right).
**Modeling new concepts.** To test the flexibility of our model, we created two new concepts for evaluating both the model and a new group of human participants. These two concepts were _shapes with the majority color_, and _shapes with the least common color_. 16 human subjects participated in an IRB-approved study (A.2). Our study finds that humans rapidly learn these concepts (Fig. 5D).
We also test our model on these new concepts, but using the prior estimated earlier on the data in [56]. Our model correctly predicts that humans will learn the concept of _majority color_ after just 2 batches of examples. It predicts learning _minority color_ after 4 batches, while humans need just 3, but the primary result--that this concept is very learnable--holds for both humans and the model.
Figure 5: **A.** Left and middle: learning concepts isomorphic to _bachelor_ and _valedictorian_. Dark ticks delimit batches. Above each plot is the ground-truth logical expression and the predicted natural language description. Right: the model can explain human mistakes by verbalizing what other concept humans might have been misled into learning. Both humans and the model seem to have learned _largest blue_ instead of _the largest and it also happens to be blue_. **B.** More examples showing somewhat worse fits, including getting the right answer despite having a slightly odd solution (rightmost). **C.** The model is close to human performance after extracting a prior from human judgments. **D.** New concepts run on new participants. All illustrations show results on holdout learning curves.
Although these two new concepts are easy to explain in words, and could be expressed in first-order logic with the right primitives--set cardinality and quantification over colors--neither concept is learnable by the BPL baseline. This is because both concepts are simply unrepresentable: despite being equipped with 28 primitives, those primitives were designed without anticipating these new concepts. This highlights the fact that it is difficult to preprogram a sufficiently broad set of primitives that can efficiently encode all the different concepts people might learn.
## 6 Discussion
**Putting humanlike inductive biases into machines.** Humans excel at rapidly mastering new tasks and understanding new concepts in part because they have a good inductive bias or prior [57; 58]. Imparting similar priors upon machines is therefore an important problem. Our work gives a recipe for training such a prior by fitting it directly to human judgments, marginalizing out the natural language, meaning we never need to elicit natural language from human subjects. Because our approach simply tunes a small network on top of a large open-source pretrained model, and because it can be fit to raw human judgments, we hope that it can serve as a broadly applicable engineering strategy for extracting and using human priors. Recent work has also explored complementary strategies for instilling a humanlike prior upon a machine learning system. For example, Kumar et al. 2022 [9] show that training neural networks with auxiliary linguistic tasks, and with auxiliary programming tasks, causes their representations to better align with human priors.
**Bayesian Program Learning (BPL).** Our model has similar motivations to BPL. Because we translate the natural language into Python to compute the likelihood, it is possible to see our approach as BPL with an unusual prior: \(p(\text{program})=\sum_{\text{NL}}p(\text{NL})p(\text{program}|\text{NL})\). In that sense, our work provides a new prior for BPL, together with the demonstration that it is possible and practical to do BPL over a Turing-complete language like Python. Relatedly, recent BPL modeling has found synergies between natural language and program representations for cognitive AI models [9; 59].
Beyond BPL, combining probabilistic reasoning with expressive symbolic representations has long been an appealing paradigm [60; 61; 62], although the expressivity of the symbolic language must be balanced against the tractability of inference [63; 64]. Guiding inference with a neural model is a natural choice, but this is hard because of the lack of natural training data (though synthetic data can help: [65; 66; 67]). Encoding knowledge in natural language allows pretrained neural models to guide inference, and it could be fruitful to examine statistical-relational AI [68] in light of that fact.
**Large Language Models.** Our work suggests that an out-of-the-box large language model is not an effective approach to inductive reasoning, at least on its own. Bayesian mechanics are needed to dampen the unpredictability of the language model. To the extent that being Bayesian is a normative account of rational behavior, our work offers a framework for enabling language models to draw more rational inferences, and ultimately generalize in more predictable and human-like ways by tuning their prior beliefs to match human data.
**The Language of Thought.** Our work connects to the Language of Thought Hypothesis [69], which says that human learning and thinking relies on an inner symbolic language. This has been a productive framework for computational modeling [14; 15; 17; 70]. In its most literal forms, language-of-thought models are afflicted by the _curse of a compositional mind_ (Spelke 2022 [71]): the free-form recombination of concepts yields a combinatorial explosion, which here we address by using pretrained knowledge from language to guide the learner. Whatever the true Language of Thought looks like, however, it must have a wide array of composable basic concepts in order to explain the breadth, flexibility, and generality of human thinking. Natural language, even if it is not actually the same as our inner mental language, acts as a vast reservoir of human concepts, and provides a flexible algebra for combining them. Therefore a reasonable near-term strategy for modeling human thinking may be to use natural language as a heuristic approximation to an inner Language of Thought.
**Rational Process Modeling.** As a rational process model, our account of concept learning bridges the computational and algorithmic levels of the Marr hierarchy [27; 28]: We commit to a Bayesian computational-level theory, and a particular Monte Carlo algorithm as a rational approximation to that theory. One important sense in which our account is inadequate is that we do not actually explain how the bottom-up process works, or how it came to be learned. We merely require that it is stochastic and unreliable, but occasionally correct enough to not need many millions of proposals. Given those
requirements, a modern large language model is a reasonable surrogate for this bottom-up process, even if its inner workings might differ greatly from human bottom-up proposal processes.
**Limitations.** Our model performs induction via discrete structure learning. Given the combinatorial difficulty of structure learning, it is unclear whether our approach can scale to inferring complex systems of symbolic rules. We believe recent work on iteratively refining language model outputs may be promising here [72, 73].
The present form of our model is also limited to processing discrete symbolic inputs and outputs. Actual human thinking connects with the messy perceptual world. It would be valuable to understand whether this limitation can be addressed using multimodal language models [74], or approaches that interface separate language and vision modules [75, 76]. It is worth noting that BPL models can straightforwardly interoperate with perceptual data [10, 15, 17], and that many outstanding challenge problems within AI have at their core a perceptual-reasoning process, such as Bongard problems [77] and the Abstraction and Reasoning Corpus [1].
Currently, the model relies on costly, energy-intensive closed-source models for its proposal distribution, a constraint that might be mitigated by open-source models and network compression [78].
Last, a strong justification for formal representations is that they allow specifying knowledge precisely and unambiguously [79]. Natural language is usually imprecise and ambiguous, which we deferred addressing by translating language into Python. It remains to be seen whether language models can be coaxed into producing sufficiently precise language to support representing knowledge solely in natural language, or if refinement into precise languages like Python offers the better scaling route.
**Acknowledgements.** We are grateful to Steven Piantadosi for providing the raw human data and Fleet results for the logical concepts, as well as Joshua Tenenbaum for providing his Number Game data, and Mathias Sable-Meyer for comments on the manuscript and work.
|
2308.05554 | The receding contact line cools down during dynamic wetting | When a contact line (CL) -- where a liquid-vapor interface meets a substrate
-- is put into motion, it is well known that the contact angle differs between
advancing and receding CLs. Using non-equilibrium molecular dynamics
simulations, we reveal another intriguing distinction between advancing and
receding CLs: while temperature increases at an advancing CL -- as expected
from viscous dissipation, we show that temperature can drop at a receding CL.
Detailed quantitative analysis based on the macroscopic energy balance around
the dynamic CL showed that the internal energy change of the fluid along the
pathline induced a remarkable temperature drop around the receding CL, in a
manner similar to latent heat upon phase changes. This result provides new
insights for modeling the dynamic CL, and the framework for heat transport
analysis introduced here can be applied to a wide range of nanofluidic systems. | Hiroki Kusudo, Takeshi Omori, Laurent Joly, Yasutaka Yamaguchi | 2023-08-10T13:09:21Z | http://arxiv.org/abs/2308.05554v1 | # The receding contact line cools down during dynamic wetting
###### Abstract
When a contact line (CL) --where a liquid-vapor interface meets a substrate-- is put into motion, it is well known that the contact angle differs between advancing and receding CLs. Using non-equilibrium molecular dynamics simulations, we reveal another intriguing distinction between advancing and receding CLs: while temperature increases at an advancing CL --as expected from viscous dissipation, we show that temperature can drop at a receding CL. Detailed quantitative analysis based on the macroscopic energy balance around the dynamic CL showed that the internal energy change of the fluid along the pathline induced a remarkable temperature drop around the receding CL, in a manner similar to latent heat upon phase changes. This result provides new insights for modeling the dynamic CL, and the framework for heat transport analysis introduced here can be applied to a wide range of nanofluidic systems.
## I Introduction
Wetting is ubiquitous in our daily life, in nature, and in various scientific and engineering fields. In particular, the behavior of the contact line (CL), where a liquid-vapor interface meets a solid surface, has long been a topic of interest because it plays a key role in wetting properties.[1; 2; 3; 4] For static wetting without CL motion, a common measure of wettability at the macroscopic scale is the contact angle (CA), described by Young's equation,[5] which was first proposed in 1805 based on a balance between solid-liquid, solid-vapor and liquid-vapor interfacial tensions. These interfacial tensions originate from the microscopic molecular interaction forces, and recent molecular simulation studies have provided significant advance in understanding static wetting.[6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24]
The situation is more complex at the dynamic CL (DCL) --appearing typically during droplet spreading or moving on a substrate, where the advancing and receding CAs are different. To model the CA difference, numerous theoretical, computational and experimental studies of the DCL have been carried out, indicating that this dynamic effect is induced by the viscosity and friction in the vicinity of the DCL;[1; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36; 37; 38; 39; 40; 41; 42; 43; 44; 45; 46; 47; 48] however, the governing principle of the DCL motion still remains unclear, mainly due to the lack of detailed information on the nanoscale thermal and flow fields around the DCL, and it is considered to be one of the long-standing unsolved problems of fluid dynamics.
In this article, we report a unique thermal phenomenon at DCLs: cooling as well as heating. To elucidate its mechanism, we analyze the heat flow field around the DCL using molecular dynamics (MD) simulations of a quasi-2D system with liquid-solid-vapor CLs, consisting of a Lennard-Jones (LJ) fluid between parallel solid walls moving in opposite directions as shown in the top panel of Fig. 1. To that aim, we have developed a heat transport analysis methodology applicable in multi-component MD systems.
## II Methodology
Prior to the analysis, we first need to construct a methodology to calculate heat flows based on the method of planes (MoP),[49; 50; 51; 52; 53; 54; 55] which defines surface-averaged field values on a finite control plane so that obtained values satisfy the continuum conservation laws described by the Reynolds transport theorem for arbitrary control volume (CV) surrounded by finite control planes.[56] Specifically, in this article, we extend the formulation proposed for single-component fluid systems by Todd and Daivis [51] to the heat flow in multi-component systems with a solid wall. Energy conservation in the presence of an external force writes
\[\frac{\partial\rho e}{\partial t}=-\nabla\cdot(\rho e\mathbf{u}+\mathbf{J}_{\rm Q}- \mathbf{\tau}\cdot\mathbf{u})+\rho\mathbf{F}^{\rm ext}\cdot\mathbf{u}, \tag{1}\]
where \(\rho\), \(\mathbf{u}\) and \(e\) denote the density, velocity and specific total energy of the fluid --defined by the sum of the specific internal energy and the specific convective kinetic energy \(\frac{1}{2}|\mathbf{u}|^{2}\), whereas \(\mathbf{J}_{\rm Q},\mathbf{\tau}\) and \(\mathbf{F}^{\rm ext}\) denote the heat flux, stress tensor and external force per unit mass, respectively. For this energy conservation law, we treat the fluid-fluid intermolecular interaction force as the stress while we treat the fluid-solid one as the external force.[57; 58] Equation (1) can be integrated for an arbitrary CV, and by applying Gauss' theorem to the advection and stress work terms on the right hand side (RHS), one obtains:
\[\begin{split}\int_{\rm CV}{\rm d}V\frac{\partial\rho e}{\partial t}=&-\int_{\rm CV}{\rm d}V\,\nabla\cdot\mathbf{J}_{\rm Q}+\int_{\rm CV}{\rm d}V\,\rho\mathbf{F}^{\rm ext}\cdot\mathbf{u}\\ &-\int_{\rm S}{\rm d}\mathbf{S}\cdot(\rho e\mathbf{u}-\mathbf{\tau}\cdot\mathbf{u}),\end{split} \tag{2}\]
meaning that the fluid energy change in the CV in the left hand side (LHS) balances the heat production/absorption, the
work of the external body force on the fluid in the CV, and the macroscopic energy advection and stress work through its surrounding surface in the RHS. The divergence of the heat flux term, which corresponds to the heat production/absorption value in the CV, can be rewritten as
\[\begin{split}\int_{\text{CV}}\text{d}V\,\nabla\cdot\mathbf{J}_{\text{Q}}=&-\int_{\text{CV}}\text{d}V\,\frac{\partial\rho e}{\partial t}+\int_{\text{CV}}\text{d}V\,\rho\mathbf{F}^{\text{ext}}\cdot\mathbf{u}^{\text{VA}}\\ &-\int_{\text{S}}\text{d}\mathbf{S}\cdot\rho e\mathbf{u}+\int_{\text{S}}\text{d}\mathbf{S}\cdot\mathbf{\tau}\cdot\mathbf{u},\end{split} \tag{3}\]
meaning that the heat flow from the CV is obtained by integrating the energy change in the CV and the energy advection and stress work on the surface of the CV --obtainable by the MoP, and by integrating the work by the external body force on the fluid in the CV. In this article, we calculated the first term in the RHS by integrating the energy flux through the whole surrounding surface of the CV, see details in the supplementary material (SM). Note that we adopted the volume-averaged fluid velocity \(\mathbf{u}^{\text{VA}}\) in the second term of the RHS of Eq. (3) because we take its inner product with the body force \(\rho\mathbf{F}^{\text{ext}}\) as the volume-averaged intermolecular force exerted on the fluid by the solid.
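Schematically, the control-volume bookkeeping of Eq. (3) can be written as below for one rectangular CV in the \(x\)-\(z\) plane; the array layouts (face-averaged fluxes indexed by bin) and names are our assumptions, not the authors' implementation:

```python
import numpy as np

def net_outflow(fx, fz, i0, i1, k0, k1, dx, dz):
    """Net outflow of a vector field through a rectangular CV boundary, per
    unit depth. fx[i, k]: x-component on the face between bins i-1 and i;
    fz[i, k]: z-component on the face between bins k-1 and k. The CV spans
    bins i0..i1-1 in x and k0..k1-1 in z."""
    out = (fx[i1, k0:k1].sum() - fx[i0, k0:k1].sum()) * dz
    out += (fz[i0:i1, k1].sum() - fz[i0:i1, k0].sum()) * dx
    return out

def cv_heat_production(adv_x, adv_z, work_x, work_z,
                       ext_work_density, dE_dt, i0, i1, k0, k1, dx, dz):
    # Eq. (3): heat production = -(energy change in CV) + external-force work
    #          - advection outflow + stress-work outflow through the boundary.
    ext = ext_work_density[i0:i1, k0:k1].sum() * dx * dz
    return (-dE_dt + ext
            - net_outflow(adv_x, adv_z, i0, i1, k0, k1, dx, dz)
            + net_outflow(work_x, work_z, i0, i1, k0, k1, dx, dz))
```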
## III System
The top panel of Fig. 1 shows the MD simulation system of a quasi-2D Couette-type flow, where the basic setups are the same as in our previous study [56]. The fluid-fluid and fluid-solid interactions were modeled by the 12-6 LJ potential \(\Phi^{\text{LJ}}(r_{ij})=4\epsilon_{ij}\left[\left(\frac{\sigma_{ij}}{r_{ij}}\right)^{12}-\left(\frac{\sigma_{ij}}{r_{ij}}\right)^{6}\right]\), where \(r_{ij}\) is the distance between the particles \(i\) and \(j\), while \(\epsilon_{ij}\) and \(\sigma_{ij}\) denote the LJ energy and length parameters, respectively. Quadratic functions were added to this LJ potential so that the potential and interaction force smoothly vanished at a cut-off distance of \(r_{\text{c}}=3.5\sigma\)[59]. We used the following parameters for fluid-fluid (ff) and fluid-solid (fs) interactions: \(\sigma_{\text{ff}}=0.340\,\text{nm}\), \(\epsilon_{\text{ff}}=1.67\times 10^{-21}\,\text{J}\), \(\sigma_{\text{fs}}=0.345\,\text{nm}\), \(\epsilon_{\text{fs}}=0.646\times 10^{-21}\,\text{J}\). The atomic masses of fluid and solid particles were \(m_{\text{f}}=39.95\,\text{u}\) and \(m_{\text{s}}=195.1\,\text{u}\), respectively. Finally, the equations of motion were integrated using the velocity-Verlet algorithm, with a time step \(\Delta t\) of 5 fs.
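For illustration, one standard way to realize such a quadratic correction is sketched below: constants \(c_{2}\) and \(c_{0}\) are chosen so that both the potential and the force vanish at \(r_{\text{c}}\) (the exact form used by the authors may differ; see their Ref. [59]):

```python
def lj(r, eps, sigma):
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6 ** 2 - sr6)

def dlj(r, eps, sigma):
    # Radial derivative dPhi/dr of the 12-6 LJ potential.
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (-12.0 * sr6 ** 2 + 6.0 * sr6) / r

def smoothed_lj(r, eps, sigma, rc):
    # Add c2*r**2 + c0 so that Phi_mod(rc) = 0 and dPhi_mod/dr(rc) = 0.
    if r >= rc:
        return 0.0
    c2 = -dlj(rc, eps, sigma) / (2.0 * rc)
    c0 = -lj(rc, eps, sigma) - c2 * rc ** 2
    return lj(r, eps, sigma) + c2 * r ** 2 + c0
```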
Periodic boundary conditions were set in the \(x\)- and \(y\)-directions, and 20,000 LJ particles were confined between two parallel solid walls (dimension of \(x\times y=39.2\times 3.92\,\text{nm}^{2}\)) at a distance of \(\sim 10.4\,\text{nm}\), so that the LJ fluid formed two quasi-2D menisci with CLs on the walls upon the preliminary equilibration at a control temperature \(T_{\text{w}}=85\,\text{K}\) without shear. The static CA on both top and bottom walls was \(\sim 57\,\text{deg}\). After the equilibration, further relaxation runs to achieve a steady shear flow with asymmetric menisci were carried out for \(10\,\text{ns}\) by moving the particles in the outmost layers of both walls with opposite velocities of \(\pm 10\,\text{m/s}\) in the \(x\)-direction.
After the relaxation run, the main calculation was conducted for an average time of \(400\,\text{ns}\). We calculated the external body force and volume averaged velocity in the RHS of Eq. (3) using cuboid bins of size \(\Delta x\times\Delta y\times\Delta z=0.150\times 3.92\times 0.149\,\text{nm}^{3}\), while we calculated the energy flux, velocity, stress and the specific energy in the RHS of Eq. (3) using the MoP with the faces of each local bin. Regarding the calculation of the energy flux and the specific energy, see details in SM.
## IV Results and Discussion
The middle panel of Fig. 1 shows the density distribution and velocity field obtained by the volume average. Due to the shear applied by the wall, a caterpillar-like flow was induced, and DCLs, _i.e._, advancing and receding CLs with different CAs, appeared. In addition to the CA difference, we showed the stress inhomogeneity in the bulk liquid induced by this flow in our previous study [56]. In the present study, we report a distinct thermal difference in the DCLs as shown in the temperature distribution in the bottom panel of Fig. 1: temperature rises around the advancing CLs (bottom right and top left), and temperature drops around the receding CLs (bottom left and top right).
Figure 1: Top: quasi-2D Couette-type flow system of a Lennard-Jones liquid confined between two solid walls. Middle: density and velocity distributions calculated by volume average; the macroscopic streaming velocity is denoted by black arrows. Bottom: temperature distribution of the fluid. (Partly from Kusudo, H., Omori, T., Yamaguchi, Y., J. Chem. Phys., 155 (2021), 184103; licensed under CC BY)
Quantitatively, in the bulk liquid away from the interfaces, the temperature is around 86.5 K, which is slightly higher than the control temperature of the wall due to viscous dissipation, whereas that around the advancing CLs is about 2 K higher and that around the receding CLs is about 2 K lower than the bulk, as shown in Fig. 1. The cooling at the receding CLs is especially intriguing, because viscous dissipation can only induce temperature rise through heat production.
To elucidate the mechanism of the heat production/absorption around the DCLs, we conducted a heat flux analysis. The top panel of Fig. 2 shows the heat flux field superimposed on the temperature distribution. Note that the heat flux field was depicted only for the fluid sufficiently away from the wall, where the effect of potential field from the wall on the fluid was negligibly small; the MoP methodology of the heat flux calculation is shown in SM. Heat flow from the high temperature area to the low temperature area can be observed, meaning that the heat produced around an advancing CL induces temperature rise there, and flows to the cold neighboring receding CL due to the heat absorption. To quantitatively evaluate the heat production/absorption around the DCLs and in the liquid area sufficiently away from the CLs, we set three control volumes (CVs) as shown with magenta rectangles in Fig. 2; a CV surrounding the receding CL, a CV surrounding the advancing CL, and a CV between them. We integrated the divergence of the heat flux in each CV and the corresponding values are shown inside the CVs with a unit of mW/m, calculated as the heat production/absorption rate divided by the system depth, in the top panel of Fig. 2. According to Fig. 2, heat is produced and absorbed at the CVs surrounding the advancing (97 mW/m) and receding (\(-89\) mW/m) CLs, respectively, and the absolute values are approximately twice as large as the heat production 42 mW/m in the middle CV even though its volume is twice as large as the others. From this, we see that viscous dissipation is not the main cause of heat production/absorption at the DCLs.
To elucidate this, one can rewrite the energy conservation in Eq. (1) as follows (see details in SM):
\[\mathbf{\nabla}\cdot\mathbf{J}_{\rm Q}=\mathbf{\tau}:\mathbf{\nabla}\mathbf{u}-\rho\,\frac{\text{ D}e_{\rm int}}{\text{D}t}, \tag{4}\]
where \(e_{\rm int}\) denotes the specific internal energy defined by
\[e_{\rm int}=e-\frac{1}{2}|\mathbf{u}|^{2}, \tag{5}\]
and \(\frac{\text{D}}{\text{D}t}\) and : denote the Lagrangian derivative and the inner product of a second order tensor, respectively. Note that the specific internal energy includes not only the fluid-fluid interaction potential but also the fluid-solid one. Equation (4) indicates that the heat production/absorption arises from two mechanisms: 1. the inner product of the stress tensor and the velocity gradient, which corresponds to the viscous dissipation in bulk; 2. the internal energy change along the pathline (identical to the streamline in a steady flow). The work done by the solid-fluid interaction force is absent from Eq. (4) because it does not contribute to the internal energy change but to the convective kinetic energy change, see details in SM.
Figure 2: Volume integral of (top) the heat flux divergence, (middle) the inner product of stress tensor and tensor derivative of velocity vector and (bottom) the Lagrange derivative of internal energy on three CVs; left: surrounding the receding CL, right: surrounding the advancing CL, middle: between the left and right CVs. The top panel also shows the distributions of temperature and heat flux field. The bottom panel also shows the internal energy distribution and the velocity field.
We rewrite the first term on the RHS of Eq. (4) as
\[\begin{split}\mathbf{\tau}:\mathbf{\nabla}\mathbf{u}&=\mathbf{\nabla}\cdot(\mathbf{\tau}\cdot\mathbf{u})-\mathbf{u}\cdot(\mathbf{\nabla}\cdot\mathbf{\tau})\\ &=\mathbf{\nabla}\cdot(\mathbf{\tau}\cdot\mathbf{u})-\frac{\partial}{\partial t}\frac{1}{2}\rho|\mathbf{u}|^{2}-\mathbf{\nabla}\cdot\frac{1}{2}\rho|\mathbf{u}|^{2}\mathbf{u}+\rho\mathbf{F}^{\text{ext}}\cdot\mathbf{u}\\ &=\mathbf{\nabla}\cdot(\mathbf{\tau}\cdot\mathbf{u})-\mathbf{\nabla}\cdot\frac{1}{2}\rho|\mathbf{u}|^{2}\mathbf{u}+\rho\mathbf{F}^{\text{ext}}\cdot\mathbf{u},\end{split} \tag{6}\]
where the momentum conservation with external force is applied for the second equality and a macroscopically steady-state system is assumed for the third equality. The rightmost-hand side (HS) can be directly integrated for the CV with MoP by applying Gauss' theorem, while numerical differentiation is essential to integrate the LHS. The middle panel of Fig. 2 shows the integral of this stress term by using Eq. (6), and this indicates that this stress work is the main cause of the heat production in the middle CV without CL, but is not remarkably large in the CVs surrounding the DCLs. Note that the stress term contains not only the viscous dissipation but also the work by the pressure or interfacial tensions so that it is not always positive, specifically around the advancing CL. The other factor of the heat production/absorption is the internal energy change \(\rho\frac{\text{D}e_{\text{int}}}{\text{D}t}\) in the RHS of Eq. (4), which we rewrite as
\[\begin{split}-\rho\frac{\text{D}e_{\text{int}}}{\text{D}t}&=-\frac{\partial\rho e_{\text{int}}}{\partial t}-\mathbf{\nabla}\cdot\rho e_{\text{int}}\mathbf{u}\\ &=-\frac{\partial\rho e}{\partial t}+\frac{\partial}{\partial t}\frac{1}{2}\rho|\mathbf{u}|^{2}-\mathbf{\nabla}\cdot\rho e\mathbf{u}+\mathbf{\nabla}\cdot\frac{1}{2}\rho|\mathbf{u}|^{2}\mathbf{u}\\ &=-\frac{\partial\rho e}{\partial t}-\mathbf{\nabla}\cdot\rho e\mathbf{u}+\mathbf{\nabla}\cdot\frac{1}{2}\rho|\mathbf{u}|^{2}\mathbf{u},\end{split} \tag{7}\]
where the mass conservation is applied for the first equality, Eq. (5) is applied for the second equality, and the convective kinetic energy is assumed to be constant over time. Note that the steady state is not assumed for the first term in the rightmost-HS because it depends on the microscopic configuration difference between the start and end time of the sampling interval, and it is not negligibly small especially around the CLs, which are microscopically fluctuating.[56] Also for this Eq. (7), the rightmost-HS can be directly integrated for the CV with the MoP by applying Gauss' theorem, and the integral of the first term in the rightmost-HS is obtained with the energy flux through the whole surrounding surface of the CV, see detail in SM. The integral values of this internal energy change for the CVs are shown in the bottom panel of Fig. 2. The large absolute values of \(-100.2\) and \(101.9\) mW/m in the two CVs indicate that this term is the main cause of the heat production/absorption around the DCLs, which is small in the middle CV without CL.
We also show the internal energy distribution and the velocity field as the background of bottom panel of Fig. 2, and one can observe that the internal energy changes along the streamline specifically near the DCLs, where the heat is produced/absorbed. At the advancing CL, heat is produced when the fluid flows from the solid-vapor and liquid-vapor regions to the solid-liquid region, whereas at the receding CL heat is absorbed when the fluid flows from the solid-liquid region to the solid-vapor and liquid-vapor regions. During these processes, the internal energy of the fluid changes due to the surrounding density change as well as due to the potential field induced by the solid surface, and it leads to the cooling and heating at the DCLs. This phenomenon is analogous to latent heat, which induces the heat production/absorption upon the phase change.
Therefore, it is expected that this cooling and heating effect should increase with the flow rate around the DCLs, _i.e._, with faster wall speed. Here, we additionally conducted the heat analysis for the CVs with various wall speeds: \(\mathbf{u}^{\text{w}}=1.0\), \(2.5\), \(5.0\), \(7.5\), and \(12.5\) m/s (the density and velocity fields and temperature distribution for each condition are shown in SM). The top, middle and bottom panels of Fig. 3 show the volume integral values of the heat flux divergence, the stress work term in Eq. (6) and the internal energy change along the streamline in Eq. (7), respectively. Blue and red bars denote the values for the CVs including the receding CL (RCL) and the advancing CL (ACL), respectively, and green denotes the CV between them. Note that the CV arrangement for all wall speeds is the same as that in Fig. 2. Similar to Fig. 2, the internal energy change is the main part of the heat production/absorption in the CVs including DCLs while the stress work term is dominant in the middle CV (referred to as "Bulk" in Fig. 3). The internal energy change appears to be proportional to the wall speed, implying that the spatial distributions of the density and the specific energy do not largely change with the wall speed. On the other hand, the stress work term appears to be proportional to the square of the wall velocity in the middle CV since the shear stress, _i.e._, viscous stress, is proportional to the shear rate in the bulk, where the shear rate can be roughly proportional to the wall velocity. Also in the CVs including the DCLs, the work done by the solid-fluid interaction force in Eq. (6) should largely depend on the wall speed because the frictional force is supposed to be proportional to the slip velocity. Under the present wall velocities, where steady-state caterpillar-like flows with DCLs are achieved, the internal energy change is always dominant over the stress work term in the CVs including DCLs, and thus the temperature rise/drop near the DCLs should always exist. In addition, we observed these cooling/heating phenomena at the DCLs, induced by the same mechanism, also on less wettable walls as shown in SM. Note that this quasi-latent heat around the DCLs is not a dissipation energy, meaning that it cannot be included in the dissipation terms of existing macroscale DCL models.[1; 41; 42] However, it indeed induces temperature changes in the vicinity of the DCLs, which should be included in the DCL models.
## V Conclusion
In this article, we have presented a heat transport analysis methodology applicable in multi-component MD systems, which we have used to investigate the heat transport features of the DCL. The heat analysis revealed that heat is not only produced but also absorbed around DCLs, mainly due to the quasi-latent heat induced by the internal energy change of fluid along the pathline, when the fluid moves among the interfaces, which is accompanied by a change in fluid-fluid and fluid-solid interaction energy. In addition, this latent heat is not a dissipation energy, thus almost the same heat is absorbed and produced at receding and advancing CLs, respectively, while heat is only produced in bulk liquid due to viscous dissipation. Overall, these results provide new insights into the molecular mechanisms controlling the dynamics of the CL. Moreover, the framework for analyzing the heat transport at the molecular scale should be useful for investigating various nanoscale systems such as the flow in carbon nanotubes or in nanoporous media.[60; 61; 62; 63; 64; 65; 66; 67]
## Supplementary Material
The supplementary material contains the calculation methods of the energy density, energy flux and heat flux by the method of plane and the derivation of the Lagrangian derivative of the internal energy in Eq. (4). We also show therein the density, velocity and temperature distributions with various wall speeds on the lyophilic walls corresponding to Fig. 3, and the density and temperature distributions around the DCL on the lyophobic walls.
###### Acknowledgements.
HK, TO and YY are supported by JSPS KAKENHI Grant Nos. JP23KJ0090, JP23H01346 and JP22H01400, respectively. YY is also supported by JST CREST Grant No. JPMJCR18I1, Japan. Numerical simulations were performed on the Supercomputer system "AFI-NITY" at the Advanced Fluid Information Research Center, Institute of Fluid Science, Tohoku University.
## Conflict of Interest Statement
The authors have no conflicts to disclose.
## Data Availability Statement
The data that support the findings of this study are available from the corresponding author upon reasonable request.
|
2306.13842 | Convergence of least energy sign-changing solutions for logarithmic
Schrödinger equations on locally finite graphs | In this paper, we study the following logarithmic Schr\"{o}dinger equation \[
-\Delta u+\lambda a(x)u=u\log u^2\ \ \ \ \mbox{ in }V \] on a connected locally
finite graph $G=(V,E)$, where $\Delta$ denotes the graph Laplacian, $\lambda >
0$ is a constant, and $a(x) \geq 0$ represents the potential. Using variational
techniques in combination with the Nehari manifold method based on directional
derivative, we can prove that, there exists a constant $\lambda_0>0$ such that
for all $\lambda\geq\lambda_0$, the above problem admits a least energy
sign-changing solution $u_{\lambda}$. Moreover, as $\lambda\to+\infty$, we
prove that the solution $u_{\lambda}$ converges to a least energy sign-changing
solution of the following Dirichlet problem \[\begin{cases} -\Delta u=u\log
u^2~~~&\mbox{ in }\Omega,\\ u(x)=0~~~&\mbox{ on }\partial\Omega, \end{cases}\]
where $\Omega=\{x\in V: a(x)=0\}$ is the potential well. | Xiaojun Chang, Vicenţiu D. Rădulescu, Ru Wang, Duokui Yan | 2023-06-24T02:33:38Z | http://arxiv.org/abs/2306.13842v1 | Convergence of least energy sign-changing solutions for logarithmic Schrodinger equations on locally finite graphs
###### Abstract
In this paper, we study the following logarithmic Schrodinger equation
\[-\Delta u+\lambda a(x)u=u\log u^{2}\quad\quad\text{in }V\]
on a connected locally finite graph \(G=(V,E)\), where \(\Delta\) denotes the graph Laplacian, \(\lambda>0\) is a constant, and \(a(x)\geq 0\) represents the potential. Using variational techniques in combination with the Nehari manifold method based on directional derivative, we can prove that, there exists a constant \(\lambda_{0}>0\) such that for all \(\lambda\geq\lambda_{0}\), the above problem admits a least energy sign-changing solution \(u_{\lambda}\). Moreover, as \(\lambda\to+\infty\), we prove that the solution \(u_{\lambda}\) converges to a least energy sign-changing solution of the following Dirichlet problem
\[\begin{cases}-\Delta u=u\log u^{2}&\quad\text{in }\Omega,\\ u(x)=0&\quad\text{on }\partial\Omega,\end{cases}\]
where \(\Omega=\{x\in V:a(x)=0\}\) is the potential well.
keywords: Least energy sign-changing solutions; Logarithmic Schrodinger equations; Locally finite graphs; Nehari manifold method. _Mathematics Subject Classification:_ 35A15, 35R02, 35Q55, 39A12.
## 1 Introduction and main results
The theory of networks (or graphs) has a wide range of applications in various fields such as signal processing, image processing, data clustering and machine learning (see, for example, [1; 2; 3]). A graph \(G=(V,E)\), where \(V\) denotes the vertex set and \(E\) denotes the edge set, is said to be locally finite if for any \(x\in V\), there are only finitely many \(y\in V\) such that \(xy\in E\). A graph is connected if any two vertices \(x\) and \(y\) can be connected via finitely many edges. For any \(xy\in E\), we assume that its weight \(\omega_{xy}>0\) and \(\omega_{xy}=\omega_{yx}\). The degree of \(x\in V\) is defined by \(deg(x)=\sum_{y\sim x}\omega_{xy}\), where we write \(y\sim x\) if \(xy\in E\). The distance \(d(x,y)\) of two vertices \(x,y\in V\) is defined by the minimal number of edges which connect these two vertices. The measure \(\mu:V\to\mathbb{R}^{+}\) is defined to be a finite positive function on \(G\).
In recent years, there have been many studies on the existence and multiplicity of solutions to nonlinear elliptic equations on discrete graphs, we refer to [4, 5, 6, 7, 8, 9, 10, 11] and their references. In [7], Grigor'yan, Lin and Yang studied nonlinear Schrodinger equations
\[-\Delta u+b(x)u=f(x,u)\quad\text{in }V \tag{1.1}\]
on a connected locally finite graph \(G\). By applying the mountain pass theorem, they established the existence of strictly positive solutions of (1.1) when \(f\) satisfies the so-called Ambrosetti-Rabinowitz ((AR) for short) condition, and the potential \(b:V\to\mathbb{R}^{+}\) has a positive lower bound and satisfies one of the following hypotheses:
\((B_{1})\quad\)\(b(x)\to+\infty\) as \(d(x,x_{0})\to+\infty\) for some fixed \(x_{0}\in V\);
\((B_{2})\quad\)\(1/b(x)\in L^{1}(V)\).
In [11], Zhang and Zhao established the existence and convergence (as \(\lambda\to+\infty\)) of ground state solutions for equation (1.1), when \(b(x)=\lambda a(x)+1\) and \(f(x,u)=|u|^{p-1}u\), where \(a(x)\geq 0\) satisfies \((B_{1})\) and the potential well \(\Omega=\{x\in V:a(x)=0\}\) is a non-empty connected and bounded domain in \(V\). Similar results for \(p\)-Laplacian equations and biharmonic equations on locally finite graphs can be found in [12, 13].
In this paper, we consider the following logarithmic Schrodinger equation
\[-\Delta u+\lambda a(x)u=u\log u^{2}\quad\text{ in }V \tag{1.2}\]
on a connected locally finite graph \(G=(V,E)\), where the parameter \(\lambda>0\). We recall that the logarithmic Schrodinger equation in Euclidean space
\[-\Delta u+\lambda b(x)u=u\log u^{2}\ \ \text{in }\mathbb{R}^{N} \tag{1.3}\]
has recently received much attention; see [14; 15; 16; 17; 18; 19; 20; 21; 22] and references therein. Logarithmic nonlinear problems have a wide range of applications in fields such as quantum mechanics, quantum optics, nuclear physics, transport and diffusion phenomena, Bose-Einstein condensation, etc. (see [23; 24; 25; 26; 27; 28]).
Different approaches have been developed to study the existence and multiplicity of solutions for nonlinear Schrodinger equations with logarithmic nonlinearities. Cazenave [14] worked in an Orlicz space endowed with a Luxemburg type norm in order to make the associated energy functional of equation (1.3) of class \(C^{1}\). Squassina and Szulkin [20] studied the existence of multiple solutions by using non-smooth critical point theory (see also [15; 16; 18]). Tanaka and Zhang [21] applied the penalization technique to study multi-bump solutions of equation (1.3). For the idea of penalization, see also [17; 29; 30]. In [22], Wang and Zhang proved that the ground state solutions of the power-law scalar field equations \(-\Delta u+\lambda u=|u|^{p-2}u\) converge, as \(p\downarrow 2\), to the ground state solution of the logarithmic-law equation \(-\Delta u=\lambda u\log u^{2}\). Recently, several results have been devoted to studying sign-changing solutions. Chen and Tang [31] established the existence of least energy sign-changing solutions of some logarithmic Schrodinger equation in bounded domains of \(\mathbb{R}^{N}\). Shuai [19] obtained the existence of least energy sign-changing solutions for equation (1.3) under different types of potentials. Zhang and Wang investigated, in [32], the existence and concentration behaviors of sign-changing solutions for logarithmic scalar field equations in the semiclassical setting. Ji [33] established the existence and multiplicity of multi-bump type nodal solutions for equation (1.3). For more studies on logarithmic nonlinear equations, one may refer to [14; 15; 16; 18; 20; 34; 35] and their references.
The goal of this work is to show the existence of least energy sign-changing solutions of (1.2) and their asymptotic behavior as \(\lambda\to+\infty\). To the best of our knowledge, there is no result on sign-changing solutions for logarithmic Schrodinger problems on locally finite graphs.
In the sequel of this paper, we make the assumption that there exists a constant \(\mu_{\min}>0\) such that the measure \(\mu(x)\geq\mu_{\min}>0\) for all \(x\in V\). As for the potential \(a=a(x)\), we assume that:
\((A_{1})\quad\)\(a(x)\geq 0\) and the potential well \(\Omega=\{x\in V:a(x)=0\}\) is a non-empty, connected and bounded domain in \(V\);
\((A_{2})\quad\) there exists \(M>0\) such that the volume of the set \(D_{M}\) is finite, namely,
\[Vol(D_{M})=\sum_{x\in D_{M}}\mu(x)<\infty,\]
where \(D_{M}=\{x\in V:a(x)<M\}\).
To explain our result, we first introduce some notations. For any function \(u:V\to\mathbb{R}\), the graph Laplacian of \(u\) is defined by
\[\Delta u(x)=\frac{1}{\mu(x)}\sum_{y\sim x}\omega_{xy}\left(u(y)-u(x)\right). \tag{1.4}\]
The integral of \(u\) over \(V\) is defined by \(\int_{V}ud\mu=\sum_{x\in V}\mu(x)u(x)\), and the gradient form of the two functions \(u,v\) on \(V\) is defined by
\[\Gamma(u,v)(x)=\frac{1}{2\mu(x)}\sum_{y\sim x}\omega_{xy}\left(u(y)-u(x) \right)\left(v(y)-v(x)\right). \tag{1.5}\]
Write \(\Gamma(u)=\Gamma(u,u)\), and sometimes we use \(\nabla u\nabla v\) to replace \(\Gamma(u,v)\). The length of the gradient of \(u\) is defined by
\[|\nabla u|(x)=\sqrt{\Gamma(u)(x)}=\left(\frac{1}{2\mu(x)}\sum_{y\sim x}\omega_ {xy}\left(u(y)-u(x)\right)^{2}\right)^{1/2}. \tag{1.6}\]
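For concreteness, the discrete operators (1.4)-(1.6) can be evaluated directly from the weights and measure; the dict-based storage in the toy sketch below is an illustrative choice, not part of the paper:

```python
import math

def laplacian(u, w, mu, x):
    # (1.4): Delta u(x) = (1/mu(x)) * sum_{y ~ x} w_xy * (u(y) - u(x)).
    # w[x] maps each neighbor y of x to the edge weight w_xy.
    return sum(wxy * (u[y] - u[x]) for y, wxy in w[x].items()) / mu[x]

def gradient_form(u, v, w, mu, x):
    # (1.5): Gamma(u, v)(x).
    return sum(wxy * (u[y] - u[x]) * (v[y] - v[x])
               for y, wxy in w[x].items()) / (2.0 * mu[x])

def grad_length(u, w, mu, x):
    # (1.6): |nabla u|(x) = sqrt(Gamma(u, u)(x)).
    return math.sqrt(gradient_form(u, u, w, mu, x))
```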
Denote by \(C_{c}(V)\) the set of all functions with compact support, and let \(H^{1}(V)\) be the completion of \(C_{c}(V)\) under the norm
\[\|u\|_{H^{1}(V)}=\left(\int_{V}\left(|\nabla u|^{2}+u^{2}\right)d\mu\right)^{1 /2}.\]
Then, \(H^{1}(V)\) is a Hilbert space with the inner product
\[\langle u,v\rangle=\int_{V}\left(\Gamma(u,v)+uv\right)d\mu,\ \ \ \ \forall u,\ v\in H^{1}(V).\]
We write \(\|u\|_{p}=\left(\int_{V}|u|^{p}d\mu\right)^{1/p}\) for \(p\in[1,+\infty)\) and \(\|u\|_{L^{\infty}}=\sup_{x\in V}|u(x)|\).
For each \(\lambda>0\) we introduce a space
\[\mathcal{H}_{\lambda}=\left\{u\in H^{1}(V):\int_{V}\lambda a(x)u^{2}d\mu<+ \infty\right\}\]
with norm
\[\|u\|_{\mathcal{H}_{\lambda}}^{2}\dot{=}\int_{V}\left(|\nabla u|^{2}+( \lambda a(x)+1)u^{2}\right)d\mu,\]
which is induced by
\[\langle u,v\rangle_{\mathcal{H}_{\lambda}}=\int_{V}\left(\Gamma(u,v)+( \lambda a(x)+1)uv\right)d\mu,\ \forall u,\ v\in\mathcal{H}_{\lambda}.\]
Clearly, \(\mathcal{H}_{\lambda}\) is also a Hilbert space.
Note that equation (1.2) is formally associated with the energy functional \(J_{\lambda}:\ H^{1}(V)\to\mathbb{R}\cup\{+\infty\}\) given by
\[J_{\lambda}(u)=\frac{1}{2}\int_{V}\left(|\nabla u|^{2}+(\lambda a(x)+1)u^{2} \right)d\mu-\frac{1}{2}\int_{V}u^{2}\log u^{2}d\mu. \tag{1.7}\]
Clearly, \(J_{\lambda}\) fails to be \(C^{1}\) in \(H^{1}(V)\). In fact, for some \(G=(V,E)\) with suitable measure \(\mu\), there exists \(u\in H^{1}(V)\) with \(\int_{V}u^{2}\log u^{2}d\mu=-\infty\) (see [37]).
In this paper, we consider the functional \(J_{\lambda}\) in (1.7) on the set
\[\mathcal{D}_{\lambda}=\left\{u\in\mathcal{H}_{\lambda}:\int_{V}u^{2}|\log u^{2}|d \mu<\infty\right\}.\]
That is,
\[J_{\lambda}(u)=\frac{1}{2}\|u\|^{2}_{\mathcal{H}_{\lambda}}-\frac{1}{2}\int_{V} u^{2}\log u^{2}d\mu,\quad\forall u\in\mathcal{D}_{\lambda}.\]
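On a finite subgraph, \(J_{\lambda}\) can be evaluated directly from this formula; the sketch below adopts the convention \(t\log t\to 0\) as \(t\to 0^{+}\) and dict-based graph storage (an illustrative choice, not from the paper):

```python
import math

def J_lambda(u, w, mu, a, lam):
    # u: dict vertex -> value (finite support); w, mu, a must cover the
    # support of u and all of its neighbors.
    vertices = set(u) | {y for x in u for y in w[x]}
    total = 0.0
    for x in vertices:
        ux = u.get(x, 0.0)
        # Gamma(u)(x) = (1 / 2 mu(x)) * sum_{y ~ x} w_xy (u(y) - u(x))^2
        grad2 = sum(wxy * (u.get(y, 0.0) - ux) ** 2
                    for y, wxy in w[x].items()) / (2.0 * mu[x])
        log_term = ux ** 2 * math.log(ux ** 2) if ux != 0.0 else 0.0
        total += mu[x] * (grad2 + (lam * a[x] + 1.0) * ux ** 2 - log_term)
    return 0.5 * total
```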
Define the Nehari manifold and sign-changing Nehari set respectively by
\[\mathcal{N}_{\lambda}=\left\{u\in\mathcal{D}_{\lambda}\setminus\left\{0 \right\}:J^{\prime}_{\lambda}(u)\cdot u=0\right\},\]
\[\mathcal{M}_{\lambda}=\left\{u\in\mathcal{D}_{\lambda}:u^{\pm}\neq 0\text{ and }J^{ \prime}_{\lambda}(u)\cdot u^{+}=J^{\prime}_{\lambda}(u)\cdot u^{-}=0\right\},\]
where \(u^{+}=\max\{u,\ 0\}\) and \(u^{-}=\min\{u,\ 0\}\). Clearly, \(\mathcal{N}_{\lambda}\) contains all the nontrivial solutions of equation (1.2) and \(\mathcal{M}_{\lambda}\) contains all the sign-changing solutions. Set
\[c_{\lambda}=\inf_{u\in\mathcal{N}_{\lambda}}J_{\lambda}(u),\quad m_{\lambda}= \inf_{u\in\mathcal{M}_{\lambda}}J_{\lambda}(u).\]
Our first result is as follows.
**Theorem 1.1**.: _Suppose that \(G=(V,E)\) is a connected locally finite graph and the potential \(a:V\to\mathbb{R}\) satisfies \((A_{1})\) and \((A_{2})\). Then, there exists a constant \(\lambda_{0}>0\) such that for all \(\lambda\geq\lambda_{0}\), equation (1.2) admits a least energy sign-changing solution \(u_{\lambda}\in\mathcal{D}_{\lambda}\) such that \(J_{\lambda}(u_{\lambda})=m_{\lambda}\). Moreover, \(m_{\lambda}>2c_{\lambda}\)._
We recall that \(D\subset V\) is a bounded domain if the distance \(d(x,y)\) between any \(x,y\in D\) is uniformly bounded. The boundary of \(D\) is defined by
\[\partial D\dot{=}\{y\not\in D:\text{there exists }x\in D\text{ such that }xy\in E\}\]
and the interior of \(D\) is denoted by \(D^{\circ}\). Obviously, \(D^{\circ}=D\). Set \(\Omega=\{x\in V:a(x)=0\}\). Let \(H^{1}_{0}(\Omega)\) be the completion of \(C_{c}(\Omega)\) under the norm
\[\|u\|_{H^{1}_{0}(\Omega)}=\left(\int_{\Omega\cup\partial\Omega}|\nabla u|^{2} d\mu+\int_{\Omega}u^{2}d\mu\right)^{1/2}.\]
Then, \(H^{1}_{0}(\Omega)\) is a Hilbert space.
We consider the following Dirichlet problem
\[\begin{cases}-\Delta u=u\log u^{2}&\text{ in }\Omega,\\ u(x)=0&\text{ on }\partial\Omega.\end{cases} \tag{1.8}\]
The energy functional \(J_{\Omega}:H^{1}_{0}(\Omega)\to\mathbb{R}\) associated with (1.8) is given by
\[J_{\Omega}(u)\dot{=}\frac{1}{2}\|u\|^{2}_{H^{1}_{0}(\Omega)}-\frac{1}{2}\int_ {\Omega}u^{2}\log u^{2}d\mu,\quad\forall u\in H^{1}_{0}(\Omega).\]
Define
\[\mathcal{N}_{\Omega}=\left\{u\in H^{1}_{0}(\Omega)\setminus\left\{0\right\}:J ^{\prime}_{\Omega}(u)\cdot u=0\right\},\]
\[\mathcal{M}_{\Omega}=\left\{u\in H^{1}_{0}(\Omega):u^{\pm}\neq 0\text{ and }J^{ \prime}_{\Omega}(u)\cdot u^{+}=J^{\prime}_{\Omega}(u)\cdot u^{-}=0\right\}.\]
Set
\[c_{\Omega}=\inf_{u\in\mathcal{N}_{\Omega}}J_{\Omega}(u),\quad m_{\Omega}=\inf_{u\in\mathcal{M}_{\Omega}}J_{\Omega}(u).\]
Similar to Theorem 1.1, problem (1.8) also has a least energy sign-changing solution.
**Theorem 1.2**.: _Let \(G=(V,E)\) be a connected locally finite graph. Assume \(\Omega=\{x\in V:a(x)=0\}\) is a non-empty, connected and bounded domain in V. Then problem (1.8) admits a least energy sign-changing solution \(u_{\Omega}\in H^{1}_{0}(\Omega)\) such that \(J_{\Omega}(u_{\Omega})=m_{\Omega}\). Moreover, \(m_{\Omega}>2c_{\Omega}\)._
Finally, we show the convergence of \(u_{\lambda}\) as \(\lambda\to+\infty\).
**Theorem 1.3**.: _Under the assumptions of Theorem 1.1, we conclude that for any sequence \(\lambda_{k}\to+\infty\), up to a subsequence, the corresponding least energy sign-changing solution \(u_{\lambda_{k}}\) of equation (1.2) converges in \(H^{1}(V)\) to a least energy sign-changing solution of problem (1.8)._
One of the main challenges in proving Theorems 1.1-1.3 is to deal with the logarithmic term in equation (1.2). In Euclidean space, the logarithmic Sobolev inequality is important in studying the logarithmic Schrodinger equation (see [19; 20; 30] etc.). However, on discrete graphs, the logarithmic Sobolev inequality is only available under a positive curvature condition, which requires the measure \(\mu\) to be finite (see [36] for details). In our case, the measure \(\mu\) has a uniform positive lower bound, which violates the positive curvature condition. To overcome this difficulty, we develop new and delicate arguments that do not rely on the logarithmic Sobolev inequality.
In addition, since the energy functional associated with (1.2) is not well-defined on all of \(\mathcal{H}_{\lambda}\), inspired by ideas in [19; 22], we restrict to functions with \(u^{2}\log u^{2}\in L^{1}(V)\). A new challenge arises since the techniques in [19; 22] are not applicable here because the graph Laplacian operator is non-local. To be precise, in [19], denoting by \(I\) the corresponding energy functional, the following decomposition
\[I(u)=I(u^{+})+I(u^{-}),\quad\left\langle I^{\prime}(u),u\right\rangle=\left \langle I^{\prime}(u^{+}),u^{+}\right\rangle+\left\langle I^{\prime}(u^{-}),u ^{-}\right\rangle, \tag{1.9}\]
plays a key role in studying nodal solutions. However, in our case, such a decomposition does not hold. Actually, by a direct computation, it follows that for each \(u\in\mathcal{D}_{\lambda}\setminus\{0\}\),
\[J_{\lambda}(u)=J_{\lambda}(u^{+})+J_{\lambda}(u^{-})-\frac{1}{2} K_{V}(u),\] \[J^{\prime}_{\lambda}(u)\cdot u^{\pm}=J^{\prime}_{\lambda}(u^{\pm })\cdot u^{\pm}-\frac{1}{2}K_{V}(u),\]
where \(K_{V}(u)=\sum\limits_{x\in V}\sum\limits_{y\sim x}\omega_{xy}\left[u^{+}(x)u^{-}(y)+u^{-}(x)u^{+}(y)\right]<0\); see Section 2 for details. Clearly, \(J_{\lambda}(u)\neq J_{\lambda}(u^{+})+J_{\lambda}(u^{-})\) and \(J^{\prime}_{\lambda}(u)\cdot u\neq J^{\prime}_{\lambda}(u^{+})\cdot u^{+}+J^{\prime}_{\lambda}(u^{-})\cdot u^{-}\), which imply that (1.9) fails. Motivated by [38; 39], we will develop new variational arguments for this nonlocal operator, based on directional derivatives, to study (1.2).
The paper is organized as follows. In Section 2, we introduce some notations, definitions and preliminary lemmas. In Section 3, we apply the Nehari manifold method to prove the existence of least energy sign-changing solution of equation (1.2) and the Dirichlet problem (1.8). In Section 4, we give the proof of Theorem 1.3.
## 2 Some preliminary results
### Some definitions
To prove Theorem 1.1, we introduce the following definitions.
**Definition 2.1**.: _Given \(u\in\mathcal{D}_{\lambda}\) and \(\phi\in C_{c}(V)\), the derivative of \(J_{\lambda}\) in the direction \(\phi\) at \(u\), denoted by \(J^{\prime}_{\lambda}(u)\cdot\phi\), is defined as \(\lim_{t\to 0^{+}}\frac{1}{t}\left[J_{\lambda}(u+t\phi)-J_{\lambda}(u)\right]\)._
**Definition 2.2**.:
1. _For_ \(u,v\in\mathcal{D}_{\lambda}\)_, we define_ \[J^{\prime}_{\lambda}(u)\cdot v:=\int_{V}\left(\Gamma(u,v)+\lambda a(x)uv \right)d\mu-\int_{V}uv\log u^{2}d\mu.\] _Clearly,_ \(\int_{V}uv\log u^{2}d\mu\) _is well-defined for_ \(u,v\in\mathcal{D}_{\lambda}\)
2. _We say that_ \(u\in\mathcal{H}_{\lambda}\) _is a critical point of_ \(J_{\lambda}\) _if_ \(u\in\mathcal{D}_{\lambda}\) _and_ \(J_{\lambda}^{\prime}(u)\cdot v=0\) _for all_ \(v\in\mathcal{D}_{\lambda}\)_. We also say that_ \(d_{\lambda}\in\mathbb{R}\) _is a critical value for_ \(J_{\lambda}\) _if there exists a critical point_ \(u\in\mathcal{H}_{\lambda}\) _such that_ \(J_{\lambda}(u)=d_{\lambda}\)_._
It is easily seen that \(u\) is a weak solution to equation (1.2) if and only if \(u\) is a critical point of \(J_{\lambda}\).
Note that, for any \(0<\varepsilon<1\), there exists \(C_{\varepsilon}>0\) such that
\[|u^{2}\log u^{2}|\leq C_{\varepsilon}(|u|^{2-\varepsilon}+|u|^{2+\varepsilon}).\]
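This elementary inequality can be checked directly by treating \(|u|\leq 1\) and \(|u|>1\) separately: for \(0<s\leq 1\),
\[s^{2}|\log s^{2}|=2s^{2-\varepsilon}\cdot s^{\varepsilon}|\log s|\leq\frac{2}{e\varepsilon}\,s^{2-\varepsilon},\]
since \(\sup_{0<s\leq 1}s^{\varepsilon}|\log s|=\frac{1}{e\varepsilon}\), while for \(s>1\) the bound \(\log s\leq\frac{s^{\varepsilon}}{\varepsilon}\) gives \(s^{2}\log s^{2}\leq\frac{2}{\varepsilon}\,s^{2+\varepsilon}\); hence one may take \(C_{\varepsilon}=\frac{2}{\varepsilon}\).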
Since \(H^{1}(\Omega)\hookrightarrow L^{p}(\Omega)\) is compact for \(p\in[1,+\infty]\), we have \(J_{\Omega}\in C^{1}(H^{1}_{0}(\Omega),\mathbb{R})\) and
\[J_{\Omega}^{\prime}(u)\cdot v=\int_{\Omega\cup\partial\Omega}\nabla u\nabla vd \mu-\int_{\Omega}uv\log u^{2}d\mu,\forall u,v\in H^{1}_{0}(\Omega).\]
Clearly, \(u\) is a weak solution to problem (1.8) if and only if \(u\) is a critical point of \(J_{\Omega}\).
**Lemma 2.3**.: _If \(u\in\mathcal{D}_{\lambda}\) is a weak solution of (1.2), then \(u\) is a point-wise solution of (1.2)._
_Proof._ If \(u\in\mathcal{D}_{\lambda}\) is a weak solution of (1.2), then for any \(\varphi\in\mathcal{D}_{\lambda}\), there holds
\[\int_{V}\left(\Gamma(u,\varphi)+\lambda a(x)u\varphi\right)d\mu=\int_{V}u \varphi\log u^{2}d\mu.\]
Using the density of \(C_{c}(V)\) in \(\mathcal{D}_{\lambda}\) and the symmetry of \(\omega_{xy}\), for any \(\varphi\in C_{c}(V)\), integration by parts gives
\[\int_{V}\Gamma(u,\varphi)d\mu= \frac{1}{2}\sum_{x\in V}\sum_{y\sim x}\omega_{xy}\left(u(y)-u(x) \right)\left(\varphi(y)-\varphi(x)\right)\] \[= -\frac{1}{2}\sum_{y\in V}\sum_{x\sim y}\omega_{xy}\left(u(y)-u(x) \right)\varphi(x)-\frac{1}{2}\sum_{x\in V}\sum_{y\sim x}\omega_{xy}\left(u(y) -u(x)\right)\varphi(x)\] \[= -\sum_{x\in V}\sum_{y\sim x}\omega_{xy}\left(u(y)-u(x)\right) \varphi(x)\] \[= -\int_{V}\Delta u\varphi d\mu,\]
which gives
\[\int_{V}\left(-\Delta u+\lambda a(x)u\right)\varphi d\mu=\int_{V}u\varphi \log u^{2}d\mu,\ \ \forall\varphi\in C_{c}(V). \tag{2.1}\]
For any fixed \(y\in V\), take a test function \(\varphi:V\to\mathbb{R}\) in (2.1) with
\[\varphi(x)=\begin{cases}1,&x=y,\\ 0,&x\neq y.\end{cases}\]
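Substituting this \(\varphi\) into (2.1), both integrals reduce to the single vertex \(y\):
\[\int_{V}\left(-\Delta u+\lambda a(x)u\right)\varphi\,d\mu=\mu(y)\left(-\Delta u(y)+\lambda a(y)u(y)\right),\qquad\int_{V}u\varphi\log u^{2}\,d\mu=\mu(y)u(y)\log\left(u(y)\right)^{2},\]
and dividing by \(\mu(y)>0\) yields the pointwise identity at \(y\).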
Clearly, \(\varphi\in\mathcal{D}_{\lambda}\) and \(-\Delta u(y)+\lambda a(y)u(y)-u(y)\log\left(u(y)\right)^{2}=0\). Since \(y\) is arbitrary, we conclude that \(u\) is a point-wise solution of (1.2).
Similarly, we obtain
**Lemma 2.4**.: _If \(u\in H^{1}_{0}(\Omega)\) is a weak solution of (1.8), then \(u\) is a point-wise solution of (1.8)._
Next, we have the following observations:
\[\int_{V}\Gamma(u^{+}+u^{-})d\mu=\int_{V}\Gamma(u^{+})d\mu+\int_{V }\Gamma(u^{-})d\mu-K_{V}(u), \tag{2.2}\] \[\int_{V}\Gamma(u^{+}+u^{-},u^{+})d\mu=\int_{V}\Gamma(u^{+})d\mu- \frac{1}{2}K_{V}(u),\] (2.3) \[\int_{V}\Gamma(u^{+}+u^{-},u^{-})d\mu=\int_{V}\Gamma(u^{-})d\mu -\frac{1}{2}K_{V}(u). \tag{2.4}\]
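These identities follow from the pointwise orthogonality \(u^{+}(x)u^{-}(x)=0\); for instance, for (2.2), expanding the cross term gives
\[2\int_{V}\Gamma(u^{+},u^{-})\,d\mu=\sum_{x\in V}\sum_{y\sim x}\omega_{xy}\left(u^{+}(y)-u^{+}(x)\right)\left(u^{-}(y)-u^{-}(x)\right)=-K_{V}(u),\]
since the products \(u^{+}(y)u^{-}(y)\) and \(u^{+}(x)u^{-}(x)\) vanish; (2.3) and (2.4) follow in the same way.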
Then, for each \(u\in\mathcal{D}_{\lambda}\), we have
\[J_{\lambda}(u)=J_{\lambda}(u^{+})+J_{\lambda}(u^{-})-\frac{1}{2}K_{V }(u),\] \[J^{\prime}_{\lambda}(u)\cdot u^{\pm}=J^{\prime}_{\lambda}(u^{\pm} )\cdot u^{\pm}-\frac{1}{2}K_{V}(u),\]
and for each \(u\in H^{1}_{0}(\Omega)\),
\[J_{\Omega}(u)=J_{\Omega}(u^{+})+J_{\Omega}(u^{-})-\frac{1}{2}K_{ \Omega}(u),\] \[J^{\prime}_{\Omega}(u)\cdot u^{\pm}=J^{\prime}_{\Omega}(u^{\pm} )\cdot u^{\pm}-\frac{1}{2}K_{\Omega}(u),\]
where \(K_{\Omega}(u):=\sum\limits_{x\in\Omega\cup\partial\Omega}\sum\limits_{y\sim x }\omega_{xy}\left[u^{+}(x)u^{-}(y)+u^{-}(x)u^{+}(y)\right]\).
### Sobolev embedding
In this subsection, we establish a Sobolev embedding result.
**Lemma 2.5**.: _If \(\mu(x)\geq\mu_{\min}>0\) and \(a(x)\) satisfies \((A_{1})-(A_{2})\), then there exists a constant \(\lambda_{0}>0\) such that, for all \(\lambda\geq\lambda_{0}\), the space \(\mathcal{H}_{\lambda}\) is compactly embedded into \(L^{p}(V)\) for all \(2\leq p\leq+\infty\)._
_Proof._ For all \(\lambda>0\), at any vertex \(x_{0}\in V\), by \((A_{1})\) we have
\[\|u\|^{2}_{\mathcal{H}_{\lambda}}= \int_{V}\left(|\nabla u|^{2}+(\lambda a(x)+1)u^{2}\right)d\mu\geq \int_{V}u^{2}d\mu\geq\mu_{\min}u^{2}(x_{0}),\]
which implies that \(|u(x_{0})|\leq\sqrt{\frac{1}{\mu_{\min}}}\|u\|_{\mathcal{H}_{\lambda}}\). Thus \(\mathcal{H}_{\lambda}\hookrightarrow L^{\infty}(V)\) continuously, and interpolation then gives that \(\mathcal{H}_{\lambda}\hookrightarrow L^{p}(V)\) continuously for all \(2\leq p\leq\infty\). Now assume that \(\{u_{k}\}\) is bounded in \(\mathcal{H}_{\lambda}\); then, up to a subsequence, \(u_{k}\rightharpoonup u\) in \(\mathcal{H}_{\lambda}\). In particular, \(\{u_{k}\}\subset\mathcal{H}_{\lambda}\) is also bounded in \(L^{2}(V)\), and by the weak convergence in \(L^{2}(V)\) it follows that, for any \(\varphi\in L^{2}(V)\),
\[\lim_{k\to\infty}\int_{V}(u_{k}-u)\varphi d\mu=\lim_{k\to\infty}\sum_{x\in V} \mu(x)\left(u_{k}(x)-u(x)\right)\varphi(x)=0. \tag{2.5}\]
Take any \(x_{0}\in V\) and let
\[\varphi_{0}(x)=\begin{cases}1,\ x=x_{0},\\ 0,\ x\neq x_{0}.\end{cases}\]
Obviously, \(\varphi_{0}(x)\in L^{2}(V)\). By substituting \(\varphi_{0}\) into (2.5), we get \(\lim_{k\to\infty}u_{k}(x)=u(x)\) for any fixed \(x\in V\).
Since \(u_{k}\) is bounded in \(\mathcal{H}_{\lambda}\) and \(u\in\mathcal{H}_{\lambda}\), there exists \(C_{1}>0\) such that
\[\lambda\int_{V}a(x)(u_{k}-u)^{2}d\mu\leq C_{1}.\]
We claim that, up to a subsequence,
\[\lim_{k\to+\infty}\int_{V}(u_{k}-u)^{2}d\mu=0.\]
In fact, since \(a(x)\) satisfies \((A_{2})\), there exists some \(M>0\) such that
\[\int_{V}(u_{k}-u)^{2}d\mu= \int_{D_{M}}(u_{k}-u)^{2}d\mu+\int_{V\setminus D_{M}}(u_{k}-u)^{2 }d\mu\] \[\leq \int_{D_{M}}(u_{k}-u)^{2}d\mu+\int_{V\setminus D_{M}}\frac{1}{ \lambda M}\lambda a(x)(u_{k}-u)^{2}d\mu\] \[\leq \int_{D_{M}}(u_{k}-u)^{2}d\mu+\frac{C_{1}}{\lambda M}.\]
Then, for all \(\varepsilon>0\), there exists \(\lambda_{0}>0\) such that when \(\lambda>\lambda_{0}\), we have \(\frac{C_{1}}{\lambda M}<\varepsilon\). Moreover, up to a subsequence, we have
\[\lim_{k\to+\infty}\int_{D_{M}}(u_{k}-u)^{2}d\mu=0.\]
Hence the claim holds. In view of \(\|u_{k}-u\|_{\infty}^{2}\leq\frac{1}{\mu_{\min}}\int_{V}|u_{k}-u|^{2}d\mu\), for any \(2<p<\infty\), we deduce
\[\int_{V}|u_{k}-u|^{p}d\mu\leq\left(\frac{1}{\mu_{\min}}\right)^{\frac{p-2}{2}} \left(\int_{V}|u_{k}-u|^{2}d\mu\right)^{\frac{p}{2}}.\]
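Indeed, the last inequality is obtained by writing \(|u_{k}-u|^{p}\leq\|u_{k}-u\|_{\infty}^{p-2}|u_{k}-u|^{2}\) and applying the sup bound above:
\[\int_{V}|u_{k}-u|^{p}d\mu\leq\|u_{k}-u\|_{\infty}^{p-2}\int_{V}|u_{k}-u|^{2}d\mu\leq\left(\frac{1}{\mu_{\min}}\int_{V}|u_{k}-u|^{2}d\mu\right)^{\frac{p-2}{2}}\int_{V}|u_{k}-u|^{2}d\mu.\]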
Therefore, up to a subsequence, \(u_{k}\to u\) in \(L^{p}(V)\) for all \(2\leq p\leq+\infty\).
## 3 Existence of least energy sign-changing solutions
This section is devoted to proving that equation (1.2), as well as (1.8), admits a least energy sign-changing solution.
The following result will be useful.
**Lemma 3.1**.: _For all \(u\in\mathcal{M}_{\lambda}\) and \(s,t>0\), there holds_
\[J_{\lambda}(u)\geq J_{\lambda}(su^{+}+tu^{-}).\]
_The "=" holds if and only if \(s=t=1\)._
Proof.: For any \(u\in\mathcal{M}_{\lambda}\),
\[J_{\lambda}(u)= J_{\lambda}(u)-\frac{1}{2}J_{\lambda}^{\prime}(u)\cdot u^{+}-\frac{1}{2}J_{\lambda}^{\prime}(u)\cdot u^{-}\] \[= \left(\frac{1}{2}\|u^{+}\|_{\mathcal{H}_{\lambda}}^{2}-\frac{1}{2}\int_{V}|u^{+}|^{2}\log|u^{+}|^{2}d\mu\right)-\left(\frac{1}{2}\|u^{+}\|_{\mathcal{H}_{\lambda}}^{2}-\frac{1}{2}\int_{V}|u^{+}|^{2}\log|u^{+}|^{2}d\mu-\frac{1}{2}\|u^{+}\|_{2}^{2}\right)\] \[+\left(\frac{1}{2}\|u^{-}\|_{\mathcal{H}_{\lambda}}^{2}-\frac{1}{2}\int_{V}|u^{-}|^{2}\log|u^{-}|^{2}d\mu\right)-\left(\frac{1}{2}\|u^{-}\|_{\mathcal{H}_{\lambda}}^{2}-\frac{1}{2}\int_{V}|u^{-}|^{2}\log|u^{-}|^{2}d\mu-\frac{1}{2}\|u^{-}\|_{2}^{2}\right)\] \[= \frac{1}{2}\|u^{+}\|_{2}^{2}+\frac{1}{2}\|u^{-}\|_{2}^{2}.\]
For \(s,t>0\), by (2.2) we obtain
\[\int_{V}\Gamma(su^{+}+tu^{-})d\mu=\int_{V}\Gamma(su^{+})d\mu+\int_{V}\Gamma(tu ^{-})-stK_{V}(u).\]
Hence,
\[J_{\lambda}(su^{+}+tu^{-})\] \[= J_{\lambda}(su^{+})+J_{\lambda}(tu^{-})-\frac{st}{2}K_{V}(u)\] \[= s^{2}J_{\lambda}(u^{+})-\frac{1}{2}s^{2}\log s^{2}\|u^{+}\|_{2}^ {2}+t^{2}J_{\lambda}(u^{-})-\frac{1}{2}t^{2}\log t^{2}\|u^{-}\|_{2}^{2}-\frac{ st}{2}K_{V}(u)\] \[= s^{2}\left[J_{\lambda}(u^{+})-\frac{1}{2}J_{\lambda}^{\prime}(u )\cdot u^{+}\right]-\frac{1}{2}s^{2}\log s^{2}\|u^{+}\|_{2}^{2}+t^{2}\left[J_{ \lambda}(u^{-})-\frac{1}{2}J_{\lambda}^{\prime}(u)\cdot u^{-}\right]\] \[-\frac{1}{2}t^{2}\log t^{2}\|u^{-}\|_{2}^{2}-\frac{st}{2}K_{V}(u)\] \[= s^{2}\left[J_{\lambda}(u^{+})-\frac{1}{2}J_{\lambda}^{\prime}(u ^{+})\cdot u^{+}+\frac{1}{4}K_{V}(u)\right]-\frac{1}{2}s^{2}\log s^{2}\|u^{+}\| _{2}^{2}\] \[+t^{2}\left[J_{\lambda}(u^{-})-\frac{1}{2}J_{\lambda}^{\prime}(u^ {-})\cdot u^{-}+\frac{1}{4}K_{V}(u)\right]-\frac{1}{2}t^{2}\log t^{2}\|u^{-}\| _{2}^{2}-\frac{st}{2}K_{V}(u)\] \[= \frac{1}{2}(s^{2}-s^{2}\log s^{2})\|u^{+}\|_{2}^{2}+\frac{1}{2}(t ^{2}-t^{2}\log t^{2})\|u^{-}\|_{2}^{2}+\frac{(s-t)^{2}}{4}K_{V}(u).\]
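The second equality above uses the scaling identity for the logarithmic nonlinearity: recalling that \(J_{\lambda}(v)=\frac{1}{2}\|v\|_{\mathcal{H}_{\lambda}}^{2}-\frac{1}{2}\int_{V}v^{2}\log v^{2}d\mu\), for every \(s>0\),
\[J_{\lambda}(sv)=\frac{s^{2}}{2}\|v\|_{\mathcal{H}_{\lambda}}^{2}-\frac{s^{2}}{2}\int_{V}v^{2}\left(\log v^{2}+\log s^{2}\right)d\mu=s^{2}J_{\lambda}(v)-\frac{1}{2}s^{2}\log s^{2}\|v\|_{2}^{2}.\]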
Therefore, defining \(f(\tau)=\tau^{2}-\tau^{2}\log\tau^{2}-1\) for any \(\tau\geq 0\), we have
\[J_{\lambda}(su^{+}+tu^{-})-J_{\lambda}(u)\] \[= \frac{1}{2}(s^{2}-s^{2}\log s^{2}-1)\|u^{+}\|_{2}^{2}+\frac{1}{2}( t^{2}-t^{2}\log t^{2}-1)\|u^{-}\|_{2}^{2}+\frac{(s-t)^{2}}{4}K_{V}(u)\] \[= \frac{1}{2}f(s)\|u^{+}\|_{2}^{2}+\frac{1}{2}f(t)\|u^{-}\|_{2}^{2}+ \frac{(s-t)^{2}}{4}K_{V}(u).\]
Since \(f(0)=-1\), \(f(1)=0\) and \(f(\tau)<0\) for every \(\tau\geq 0\) with \(\tau\neq 1\), while \(\frac{(s-t)^{2}}{4}K_{V}(u)<0\) whenever \(s\neq t\), the conclusions follow.
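Indeed, a direct computation gives
\[f^{\prime}(\tau)=2\tau-\left(2\tau\log\tau^{2}+2\tau\right)=-2\tau\log\tau^{2},\]
so \(f\) is strictly increasing on \((0,1)\) and strictly decreasing on \((1,+\infty)\), attaining its maximum \(f(1)=0\) only at \(\tau=1\).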
Next we show \(\mathcal{M}_{\lambda}\neq\emptyset\).
**Lemma 3.2**.: _If \(u\in\mathcal{D}_{\lambda}\setminus\{0\}\) with \(u^{\pm}\neq 0\), then there exists a unique positive number pair \((s_{u},t_{u})\) satisfying \(s_{u}u^{+}+t_{u}u^{-}\in\mathcal{M}_{\lambda}\)._
_Proof._ For \(s,t>0\), by (2.3) and (2.4), we get
\[\int_{V}\Gamma(su^{+}+tu^{-},su^{+})d\mu=\int_{V}\Gamma(su^{+})d\mu-\frac{st}{ 2}K_{V}(u)\]
and
\[\int_{V}\Gamma(su^{+}+tu^{-},tu^{-})d\mu=\int_{V}\Gamma(tu^{-})d\mu-\frac{st}{ 2}K_{V}(u).\]
Let
\[g_{1}(s,t)\doteq J_{\lambda}^{\prime}(su^{+}+tu^{-})\cdot(su^{+})\] \[= s^{2}\|u^{+}\|_{\mathcal{H}_{\lambda}}^{2}-s^{2}\int_{V}|u^{+}|^ {2}\log|u^{+}|^{2}d\mu-s^{2}\log s^{2}\|u^{+}\|_{2}^{2}-s^{2}\|u^{+}\|_{2}^{2}- \frac{st}{2}K_{V}(u)\]
and
\[g_{2}(s,t)\doteq J_{\lambda}^{\prime}(su^{+}+tu^{-})\cdot(tu^{-})\] \[= t^{2}\|u^{-}\|_{\mathcal{H}_{\lambda}}^{2}-t^{2}\int_{V}|u^{-}|^ {2}\log|u^{-}|^{2}d\mu-t^{2}\log t^{2}\|u^{-}\|_{2}^{2}-t^{2}\|u^{-}\|_{2}^{2} -\frac{st}{2}K_{V}(u).\]
We can see that there exist \(r_{1}>0\) small enough and \(R_{1}>0\) large enough such that
\[g_{1}(s,s)>0,\ g_{2}(s,s)>0\ \text{for all}\ s\in(0,r_{1}),\] \[g_{1}(s,s)<0,\ g_{2}(s,s)<0\ \text{for all}\ s\in(R_{1},+\infty).\]
Hence, there exist \(0<r<R\) such that
\[g_{1}(r,t)>0,\ g_{1}(R,t)<0\ \text{for all}\ t\in[r,R],\] \[g_{2}(s,r)>0,\ g_{2}(s,R)<0\ \text{for all}\ s\in[r,R].\]
Applying Miranda's theorem [40], there exist some \(s_{u},t_{u}\in[r,R]\) such that \(g_{1}(s_{u},t_{u})=g_{2}(s_{u},t_{u})=0\), which implies that \(s_{u}u^{+}+t_{u}u^{-}\in\mathcal{M}_{\lambda}\).
In what follows, we prove the uniqueness of \((s_{u},t_{u})\). If \(u\in\mathcal{M}_{\lambda}\), then
\[0=J_{\lambda}^{\prime}(u)\cdot u^{+}=J_{\lambda}^{\prime}(u^{+})\cdot u^{+}- \frac{1}{2}K_{V}(u) \tag{3.1}\]
and
\[0=J_{\lambda}^{\prime}(u)\cdot u^{-}=J_{\lambda}^{\prime}(u^{-})\cdot u^{-}- \frac{1}{2}K_{V}(u). \tag{3.2}\]
We claim that \((s_{u},t_{u})=(1,1)\) is the unique pair such that \(s_{u}u^{+}+t_{u}u^{-}\in\mathcal{M}_{\lambda}\). Indeed, let \((s_{u},t_{u})\) be any positive pair satisfying \(s_{u}u^{+}+t_{u}u^{-}\in\mathcal{M}_{\lambda}\); without loss of generality, we assume that \(0<s_{u}\leq t_{u}\). Then
\[\begin{split} 0=& J^{\prime}_{\lambda}(s_{u}u^{+}+t_{u}u^{- })\cdot(s_{u}u^{+})\\ =& s_{u}^{2}J^{\prime}_{\lambda}(u^{+})\cdot u^{+}-s _{u}^{2}\log s_{u}^{2}\|u^{+}\|_{2}^{2}-\frac{s_{u}t_{u}}{2}K_{V}(u)\\ \geq& s_{u}^{2}J^{\prime}_{\lambda}(u^{+})\cdot u^{+ }-s_{u}^{2}\log s_{u}^{2}\|u^{+}\|_{2}^{2}-\frac{s_{u}^{2}}{2}K_{V}(u)\end{split} \tag{3.3}\]
and
\[\begin{split} 0=& J^{\prime}_{\lambda}(s_{u}u^{+}+t_{u}u^ {-})\cdot(t_{u}u^{-})\\ =& t_{u}^{2}J^{\prime}_{\lambda}(u^{-})\cdot u^{-}- t_{u}^{2}\log t_{u}^{2}\|u^{-}\|_{2}^{2}-\frac{s_{u}t_{u}}{2}K_{V}(u)\\ \leq& t_{u}^{2}J^{\prime}_{\lambda}(u^{-})\cdot u^{- }-t_{u}^{2}\log t_{u}^{2}\|u^{-}\|_{2}^{2}-\frac{t_{u}^{2}}{2}K_{V}(u).\end{split} \tag{3.4}\]
Together with (3.1) and (3.3), we get
\[s_{u}^{2}\log s_{u}^{2}\int_{V}|u^{+}|^{2}d\mu\geq 0.\]
Similarly, by (3.2) and (3.4), we can deduce that
\[t_{u}^{2}\log t_{u}^{2}\int_{V}|u^{-}|^{2}d\mu\leq 0,\]
which implies that \(s_{u}\geq 1\) and \(t_{u}\leq 1\). In view of \(0<s_{u}\leq t_{u}\), it follows that \(s_{u}=t_{u}=1\).
If \(u\not\in\mathcal{M}_{\lambda}\), let \((s_{1},t_{1})\) and \((s_{2},t_{2})\) be two different positive pairs such that \(v_{i}:=s_{i}u^{+}+t_{i}u^{-}\in\mathcal{M}_{\lambda},i=1,2\), which shows that
\[\frac{s_{2}}{s_{1}}v_{1}^{+}+\frac{t_{2}}{t_{1}}v_{1}^{-}=v_{2}\in\mathcal{M} _{\lambda}.\]
By similar analysis as above, we obtain \(\frac{s_{2}}{s_{1}}=\frac{t_{2}}{t_{1}}=1\). This implies that \((s_{1},t_{1})=(s_{2},t_{2})\).
**Lemma 3.3**.: _Let \(u\in\mathcal{D}_{\lambda}\) with \(u^{\pm}\neq 0\) such that \(J^{\prime}_{\lambda}(u)\cdot u^{\pm}\leq 0\). Then the unique pair \((s_{u},t_{u})\) obtained in Lemma 3.2 satisfies \(s_{u},t_{u}\in(0,1]\). In particular, the "=" holds if and only if \(s_{u}=t_{u}=1\)._
_Proof._ Without loss of generality, we assume that \(0<t_{u}\leq s_{u}\). Since \(s_{u}u^{+}+t_{u}u^{-}\in\mathcal{M}_{\lambda}\), we have
\[0=J^{\prime}_{\lambda}(s_{u}u^{+}+t_{u}u^{-})\cdot(s_{u}u^{+})=s_{u}^{2}J^{ \prime}_{\lambda}(u^{+})\cdot u^{+}-s_{u}^{2}\log s_{u}^{2}\|u^{+}\|_{2}^{2}- \frac{s_{u}t_{u}}{2}K_{V}(u). \tag{3.5}\]
Note that \(K_{V}(u)<0\); using \(J^{\prime}_{\lambda}(u)\cdot u^{+}\leq 0\) and (3.5), we can deduce that
\[\begin{split} 0\leq& s_{u}^{2}\left(J^{\prime}_{\lambda}(u^{+})\cdot u^{+}-\frac{1}{2}K_{V}(u)\right)-s_{u}^{2}\log s_{u}^{2}\|u^{+}\|_{2}^{2}\\ =& s_{u}^{2}J^{\prime}_{\lambda}(u)\cdot u^{+}-s_{u}^{2}\log s_{u}^{2}\|u^{+}\|_{2}^{2}\\ \leq&-s_{u}^{2}\log s_{u}^{2}\|u^{+}\|_{2}^{2},\end{split}\]
which implies that \(0<s_{u}\leq 1\). Therefore, \(0<t_{u}\leq s_{u}\leq 1\).
Similarly, we have
**Lemma 3.4**.: _If \(u\in H^{1}_{0}(\Omega)\setminus\{0\}\) with \(u^{\pm}\neq 0\), then there exists a unique positive number pair \((s_{u},t_{u})\) satisfying \(s_{u}u^{+}+t_{u}u^{-}\in\mathcal{M}_{\Omega}\)._
**Lemma 3.5**.: _Let \(u\in H^{1}_{0}(\Omega)\) with \(u^{\pm}\neq 0\) such that \(J^{\prime}_{\Omega}(u)\cdot u^{\pm}\leq 0\). Then the unique pair \((s_{u},t_{u})\) obtained in Lemma 3.4 satisfies \(s_{u},t_{u}\in(0,1]\). In particular, the "\(=\)" holds if and only if \(s_{u}=t_{u}=1\)._
Now we prove that the minimizer of \(J_{\lambda}\) on \(\mathcal{M}_{\lambda}\) is achieved.
**Lemma 3.6**.: _Suppose \((A_{1})\) and \((A_{2})\) hold. Then \(m_{\lambda}>0\) is achieved._
_Proof._ Taking a minimizing sequence \(\{u_{k}\}\subset\mathcal{M}_{\lambda}\) of \(J_{\lambda}\) yields
\[\begin{split}\lim\limits_{k\to+\infty}J_{\lambda}(u_{k})&=\lim\limits_{k\to+\infty}\left[J_{\lambda}(u_{k})-\frac{1}{2}J^{\prime}_{\lambda}(u_{k})\cdot u_{k}^{+}-\frac{1}{2}J^{\prime}_{\lambda}(u_{k})\cdot u_{k}^{-}\right]\\ &=\lim\limits_{k\to+\infty}\left[J_{\lambda}(u_{k}^{+})-\frac{1}{2}J^{\prime}_{\lambda}(u_{k}^{+})\cdot u_{k}^{+}+J_{\lambda}(u_{k}^{-})-\frac{1}{2}J^{\prime}_{\lambda}(u_{k}^{-})\cdot u_{k}^{-}\right]\\ &=\lim\limits_{k\to+\infty}\left(\frac{1}{2}\|u_{k}^{+}\|_{2}^{2}+\frac{1}{2}\|u_{k}^{-}\|_{2}^{2}\right)=m_{\lambda}.\end{split} \tag{3.6}\]
By Lemma 2.5, Hölder's inequality and Young's inequality, for any \(\varepsilon\in(0,1)\), there exist \(C_{\varepsilon},C^{\prime}_{\varepsilon},C^{\prime\prime}_{\varepsilon}>0\) such that
\[\begin{split}\int_{V}|u_{k}^{\pm}|^{2}\log|u_{k}^{\pm}|^{2}d\mu \leq&\int_{V}(|u_{k}^{\pm}|^{2}\log|u_{k}^{\pm}|^{2})^{+}d\mu \leq C_{\varepsilon}\int_{V}|u_{k}^{\pm}|^{2+\varepsilon}d\mu\\ \leq& C_{\varepsilon}\left(\int_{V}|u_{k}^{\pm}|^{2} d\mu\right)^{\frac{1}{2}}\left(\int_{V}|u_{k}^{\pm}|^{2(1+\varepsilon)}d\mu \right)^{\frac{1}{2}}\\ \leq& C^{\prime}_{\varepsilon}\|u_{k}^{\pm}\|_{2}\|u_ {k}^{\pm}\|_{\mathcal{H}_{\lambda}}^{1+\varepsilon}\\ \leq&\frac{1}{2}\|u_{k}^{\pm}\|_{\mathcal{H}_{ \lambda}}^{2}+C^{\prime\prime}_{\varepsilon}\|u_{k}^{\pm}\|_{2}^{\frac{2}{1- \varepsilon}}.\end{split}\]
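In the last step, Young's inequality is used in the weighted form \(ab\leq\theta a^{p}+C_{\theta}b^{q}\), with conjugate exponents \(p=\frac{2}{1+\varepsilon}\) and \(q=\frac{2}{1-\varepsilon}\), \(a=\|u_{k}^{\pm}\|_{\mathcal{H}_{\lambda}}^{1+\varepsilon}\), \(b=C^{\prime}_{\varepsilon}\|u_{k}^{\pm}\|_{2}\) and \(\theta=\frac{1}{2}\); note that
\[\frac{1}{p}+\frac{1}{q}=\frac{1+\varepsilon}{2}+\frac{1-\varepsilon}{2}=1\quad\text{and}\quad a^{p}=\|u_{k}^{\pm}\|_{\mathcal{H}_{\lambda}}^{2},\;b^{q}=\left(C^{\prime}_{\varepsilon}\|u_{k}^{\pm}\|_{2}\right)^{\frac{2}{1-\varepsilon}}.\]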
Since \(\{u_{k}\}\subset\mathcal{M}_{\lambda}\), we deduce that
\[\|u_{k}^{\pm}\|_{\mathcal{H}_{\lambda}}^{2}-\frac{1}{2}K_{V}^{k}(u_{k})\leq\frac{1}{2}\|u_{k}^{\pm}\|_{\mathcal{H}_{\lambda}}^{2}+C^{\prime\prime}_{\varepsilon}\|u_{k}^{\pm}\|_{2}^{\frac{2}{1-\varepsilon}}+\|u_{k}^{\pm}\|_{2}^{2}, \tag{3.7}\]
where \(K_{V}^{k}(u_{k})=\sum\limits_{x\in V}\sum\limits_{y\sim x}\omega_{xy}\left[u_{k}^{+}(x)u_{k}^{-}(y)+u_{k}^{-}(x)u_{k}^{+}(y)\right]\). This together with (3.6) implies that \(\{u_{k}^{\pm}\}\) is bounded in \(\mathcal{H}_{\lambda}\), and hence \(\{u_{k}\}\) is also bounded in \(\mathcal{H}_{\lambda}\). Then, for \(\lambda\geq\lambda_{0}\) with \(\lambda_{0}\) given by Lemma 2.5, there exists \(u_{\lambda}\in\mathcal{H}_{\lambda}\) such that
\[\begin{cases}u_{k}\rightharpoonup u_{\lambda}&\text{weakly in }\mathcal{H}_{ \lambda},\\ u_{k}\to u_{\lambda}&\text{point-wisely in }V,\\ u_{k}\to u_{\lambda}&\text{strongly in }L^{p}(V)\text{ for }p\in[2,+\infty].\end{cases}\]
Thus, together with the weak-lower semi-continuity of norm and Fatou's lemma, we get
\[\begin{split}&\int_{V}\left(\Gamma(u_{\lambda}^{+})+(\lambda a (x)+1)\,|u_{\lambda}^{+}|^{2}\right)d\mu-\int_{V}(|u_{\lambda}^{+}|^{2}\log|u_ {\lambda}^{+}|^{2})^{-}d\mu-\frac{1}{2}K_{V}^{\lambda}(u)\\ \leq&\liminf\limits_{k\to+\infty}\left[\int_{V}\left( \Gamma(u_{k}^{+})+(\lambda a(x)+1)\,|u_{k}^{+}|^{2}\right)d\mu-\int_{V}(|u_{k}^ {+}|^{2}\log|u_{k}^{+}|^{2})^{-}d\mu-\frac{1}{2}K_{V}^{k}(u)\right]\\ =&\liminf\limits_{k\to+\infty}\int_{V}\left(|u_{k}^{+}|^{2}+(|u_{k}^{+}| ^{2}\log|u_{k}^{+}|^{2})^{+}\right)d\mu\\ =&\int_{V}|u_{\lambda}^{+}|^{2}d\mu+\int_{V}(|u_{\lambda}^{+}| ^{2}\log|u_{\lambda}^{+}|^{2})^{+}d\mu,\end{split}\]
where \(K_{V}^{\lambda}(u)=\sum\limits_{x\in V}\sum\limits_{y\sim x}\omega_{xy}\left[u_{\lambda}^{+}(x)u_{\lambda}^{-}(y)+u_{\lambda}^{-}(x)u_{\lambda}^{+}(y)\right]\). It follows that
\[J^{\prime}_{\lambda}(u_{\lambda})\cdot u_{\lambda}^{+}=\int_{V}\left(\Gamma(u_{ \lambda}^{+})+\lambda a(x)|u_{\lambda}^{+}|^{2}\right)d\mu-\int_{V}|u_{\lambda}^ {+}|^{2}\log|u_{\lambda}^{+}|^{2}d\mu-\frac{1}{2}K_{V}^{\lambda}(u)\leq 0.\]
Similarly, it holds that
\[J^{\prime}_{\lambda}(u_{\lambda})\cdot u_{\lambda}^{-}=\int_{V}\left(\Gamma(u_{ \lambda}^{-})+\lambda a(x)|u_{\lambda}^{-}|^{2}\right)d\mu-\int_{V}|u_{\lambda}^ {-}|^{2}\log|u_{\lambda}^{-}|^{2}d\mu-\frac{1}{2}K_{V}^{\lambda}(u)\leq 0.\]
In view of Lemma 3.2 and Lemma 3.3, there exist \(s,t\in(0,1]\) such that \(\widetilde{u}=su_{\lambda}^{+}+tu_{\lambda}^{-}\in\mathcal{M}_{\lambda}\). Then
\[m_{\lambda}\leq J_{\lambda}(\widetilde{u})=J_{\lambda}(\widetilde{u})-\frac{1}{2 }J^{\prime}_{\lambda}(\widetilde{u})\cdot(su_{\lambda}^{+})-\frac{1}{2}J^{ \prime}_{\lambda}(\widetilde{u})\cdot(tu_{\lambda}^{-})\] \[= \frac{s^{2}}{2}\|u_{\lambda}^{+}\|_{2}^{2}+\frac{t^{2}}{2}\|u_{ \lambda}^{-}\|_{2}^{2}\] \[\leq \liminf_{k\to+\infty}\left[\frac{1}{2}\|u_{k}^{+}\|_{2}^{2}+\frac {1}{2}\|u_{k}^{-}\|_{2}^{2}\right]\] \[= \liminf_{k\to+\infty}\left[J_{\lambda}(u_{k})-\frac{1}{2}J^{ \prime}_{\lambda}(u_{k})\cdot u_{k}^{+}-\frac{1}{2}J^{\prime}_{\lambda}(u_{k} )\cdot u_{k}^{-}\right]\] \[= \liminf_{k\to+\infty}J_{\lambda}(u_{k})=m_{\lambda}.\]
This implies that \(s=t=1\), i.e., \(u_{\lambda}\in\mathcal{M}_{\lambda}\) satisfying \(J_{\lambda}(u_{\lambda})=m_{\lambda}\).
We claim that \(m_{\lambda}>0\). In fact, if \(m_{\lambda}=0\), we have
\[0=J_{\lambda}(u_{\lambda})-\frac{1}{2}J^{\prime}(u_{\lambda})\cdot u_{\lambda }^{+}-\frac{1}{2}J^{\prime}(u_{\lambda})\cdot u_{\lambda}^{-}=\frac{1}{2}\|u_{ \lambda}^{+}\|_{2}^{2}+\frac{1}{2}\|u_{\lambda}^{-}\|_{2}^{2}.\]
Then, by similar arguments as in (3.7), it follows that \(\|u_{\lambda}^{\pm}\|_{\mathcal{H}_{\lambda}}=0\). However, by Lemma 2.5, for any \(q>2\), there exists \(C_{q}>0\) such that
\[\|u_{\lambda}^{\pm}\|_{\mathcal{H}_{\lambda}}^{2}<\int_{V}|u_{\lambda}^{\pm}|^ {2}\log|u_{\lambda}^{\pm}|^{2}d\mu\leq\int_{V}(|u_{\lambda}^{\pm}|^{2}\log|u_{ \lambda}^{\pm}|^{2})^{+}d\mu\leq C_{q}\int_{V}|u_{\lambda}^{\pm}|^{q}d\mu\leq C \|u_{\lambda}^{\pm}\|_{\mathcal{H}_{\lambda}}^{q},\]
which provides a contradiction. Hence the claim holds.
The following lemma completes the proof of Theorem 1.1.
**Lemma 3.7**.: _If \(u\in\mathcal{M}_{\lambda}\) with \(J_{\lambda}(u)=m_{\lambda}\), then \(u\) is a sign-changing solution of equation (1.2). Moreover, \(m_{\lambda}>2c_{\lambda}\)._
_Proof._ We assume by contradiction that \(u\in\mathcal{M}_{\lambda}\) with \(J_{\lambda}(u)=m_{\lambda}\), but \(u\) is not a solution of equation (1.2). Then we can find a function \(\phi\in C_{c}(V)\) such that
\[\int_{V}\left(\nabla u\nabla\phi+\lambda a(x)u\phi\right)d\mu-\int_{V}u\phi \log u^{2}d\mu\leq-1,\]
which implies that, for some \(\varepsilon>0\) small enough,
\[J^{\prime}_{\lambda}(su^{+}+tu^{-}+\sigma\phi)\cdot\phi\leq-\frac{1}{2}\quad\text{for all }|s-1|+|t-1|+|\sigma|\leq\varepsilon.\]
In what follows, we estimate \(\sup_{s,t}J_{\lambda}\left(su^{+}+tu^{-}+\varepsilon\eta(s,t)\phi\right)\), where \(\eta\) is a cut-off function such that
\[\eta(s,t)=\begin{cases}1&\text{if }|s-1|\leq\frac{1}{2}\varepsilon\text{ and }|t-1|\leq\frac{1}{2}\varepsilon,\\ 0&\text{if }|s-1|\geq\varepsilon\text{ or }|t-1|\geq\varepsilon.\end{cases}\]
In the case of \(|s-1|\leq\varepsilon\) and \(|t-1|\leq\varepsilon\), we have
\[J_{\lambda}\left(su^{+}+tu^{-}+\varepsilon\eta(s,t)\phi\right)= J_{\lambda}\left(su^{+}+tu^{-}+\varepsilon\eta(s,t)\phi\right)-J_{ \lambda}(su^{+}+tu^{-})+J_{\lambda}(su^{+}+tu^{-})\] \[= J_{\lambda}(su^{+}+tu^{-})+\int_{0}^{1}J_{\lambda}^{\prime} \left(su^{+}+tu^{-}+\sigma\varepsilon\eta(s,t)\phi\right)\cdot\left(\varepsilon \eta(s,t)\phi\right)d\sigma\] \[= J_{\lambda}(su^{+}+tu^{-})+\varepsilon\eta(s,t)\int_{0}^{1}J_{ \lambda}^{\prime}\left(su^{+}+tu^{-}+\sigma\varepsilon\eta(s,t)\phi\right) \cdot\phi d\sigma\] \[\leq J_{\lambda}(su^{+}+tu^{-})-\frac{1}{2}\varepsilon\eta(s,t).\]
For the other case, that is \(|s-1|\geq\varepsilon\) or \(|t-1|\geq\varepsilon\), \(\eta(s,t)=0\), the above estimate is obvious. Now since \(u\in\mathcal{M}_{\lambda}\), for \((s,t)\neq(1,1)\), by Lemma 3.1, we have \(J_{\lambda}(su^{+}+tu^{-})<J_{\lambda}(u)\). Hence
\[J_{\lambda}\left(su^{+}+tu^{-}+\varepsilon\eta(s,t)\phi\right)\leq J_{\lambda} (su^{+}+tu^{-})<J_{\lambda}(u)\text{ for all }(s,t)\neq(1,1).\]
For \((s,t)=(1,1)\),
\[J_{\lambda}\left(su^{+}+tu^{-}+\varepsilon\eta(s,t)\phi\right)\leq J_{\lambda} (su^{+}+tu^{-})-\frac{1}{2}\varepsilon\eta(1,1)=J_{\lambda}(u)-\frac{1}{2}\varepsilon.\]
In any case, we have \(J_{\lambda}\left(su^{+}+tu^{-}+\varepsilon\eta(s,t)\phi\right)<J_{\lambda}(u)=m_{\lambda}\). In particular, for \(0<\varepsilon<\frac{1}{2}\),
\[\sup_{\varepsilon\leq s,t\leq 2-\varepsilon}J_{\lambda}\left(su^{+}+tu^{-}+ \varepsilon\eta(s,t)\phi\right)=\widetilde{m}_{\lambda}<m_{\lambda}.\]
Set \(v=su^{+}+tu^{-}+\varepsilon\eta(s,t)\phi\) and define
\[H(s,t)=\left(F_{1}(s,t),F_{2}(s,t)\right)\dot{=}\left(J_{\lambda}^{\prime}(v) \cdot v^{+},J_{\lambda}^{\prime}(v)\cdot v^{-}\right).\]
By the definition of \(\eta\), when \(s=\varepsilon\), \(t\in(\varepsilon,2-\varepsilon)\), we have \(\eta(s,t)=0\) and \(s<t\). Hence
\[F_{1}(\varepsilon,t) \dot{=}J_{\lambda}^{\prime}(su^{+}+tu^{-})\cdot(su^{+})\Big{|}_{s=\varepsilon}\] \[= \left[s^{2}J_{\lambda}^{\prime}(u^{+})\cdot u^{+}-\frac{st}{2}K_{V}(u)-s^{2}\log s^{2}\|u^{+}\|_{2}^{2}\right]_{s=\varepsilon}\] \[> \left[s^{2}\left(J_{\lambda}^{\prime}(u^{+})\cdot u^{+}-\frac{1}{2}K_{V}(u)\right)-s^{2}\log s^{2}\|u^{+}\|_{2}^{2}\right]_{s=\varepsilon}\] \[= -\varepsilon^{2}\log\varepsilon^{2}\|u^{+}\|_{2}^{2}\] \[> 0.\]
When \(s=2-\varepsilon\), \(t\in(\varepsilon,2-\varepsilon)\), we have \(\eta(s,t)=0\) and \(s>t\). Therefore,
\[F_{1}(2-\varepsilon,t) \dot{=}J_{\lambda}^{\prime}(su^{+}+tu^{-})\cdot(su^{+})\Big{|}_{s=2-\varepsilon}\] \[= \left[s^{2}J_{\lambda}^{\prime}(u^{+})\cdot u^{+}-\frac{st}{2}K_{V}(u)-s^{2}\log s^{2}\|u^{+}\|_{2}^{2}\right]_{s=2-\varepsilon}\] \[< \left[s^{2}\left(J_{\lambda}^{\prime}(u^{+})\cdot u^{+}-\frac{1}{2}K_{V}(u)\right)-s^{2}\log s^{2}\|u^{+}\|_{2}^{2}\right]_{s=2-\varepsilon}\] \[= -(2-\varepsilon)^{2}\log(2-\varepsilon)^{2}\|u^{+}\|_{2}^{2}\] \[< 0.\]
That is
\[F_{1}(\varepsilon,t)>0,\ F_{1}(2-\varepsilon,t)<0\text{ for all }t\in(\varepsilon,2-\varepsilon).\]
Similarly, we have
\[F_{2}(s,\varepsilon)>0,\ F_{2}(s,2-\varepsilon)<0\ \text{for all}\ s\in( \varepsilon,2-\varepsilon).\]
Thus, applying Miranda's theorem [40], there exists \((s_{0},t_{0})\in(\varepsilon,2-\varepsilon)\times(\varepsilon,2-\varepsilon)\) such that \(\widetilde{u}=s_{0}u^{+}+t_{0}u^{-}+\varepsilon\eta(s_{0},t_{0})\phi\in\mathcal{M}_{\lambda}\) and \(J_{\lambda}(\widetilde{u})<m_{\lambda}\). This gives a contradiction to the definition of \(m_{\lambda}\).
Next, we prove that \(m_{\lambda}>2c_{\lambda}\). Assume that \(u\in\mathcal{M}_{\lambda}\) such that \(J_{\lambda}(u)=m_{\lambda}\). Then \(u^{\pm}\neq 0\). Similar to the proof of Lemma 3.2 and Lemma 3.3, we can deduce that there exists a unique \(s_{u^{\pm}}\in(0,1]\) such that \(s_{u^{+}}u^{+}\in\mathcal{N}_{\lambda}\), and a unique \(t_{u^{-}}\in(0,1]\) such that \(t_{u^{-}}u^{-}\in\mathcal{N}_{\lambda}\). Similar to the proofs of Lemma 3.6 and Lemma 3.7, we can deduce that \(c_{\lambda}>0\) can be achieved. Furthermore, if \(u\in\mathcal{N}_{\lambda}\) with \(J_{\lambda}(u)=c_{\lambda}\), then \(u\) is a least energy solution.
By the definition of \(J_{\lambda}\) and \(K_{V}(u)<0\), we have
\[J_{\lambda}(s_{u^{+}}u^{+}+t_{u^{-}}u^{-})= J_{\lambda}(s_{u^{+}}u^{+})+J_{\lambda}(t_{u^{-}}u^{-})-\frac{s_{u^{+}} t_{u^{-}}}{2}K_{V}(u)\] \[> J_{\lambda}(s_{u^{+}}u^{+})+J_{\lambda}(t_{u^{-}}u^{-}).\]
By Lemma 3.1, we deduce that
\[m_{\lambda}=J_{\lambda}(u^{+}+u^{-})\geq J_{\lambda}(s_{u^{+}}u^{+}+t_{u^{-}}u^{-})>J_{\lambda}(s_{u^{+}}u^{+}) +J_{\lambda}(t_{u^{-}}u^{-})\geq 2c_{\lambda}.\]
This completes the proof.
Proof of Theorem 1.2.: The proof can be obtained by arguments similar to those in the proof of Theorem 1.1.
## 4 Convergence of the least energy sign-changing solution \(u_{\lambda}\)
In this section, we shall study the asymptotic behavior of \(u_{\lambda}\) as \(\lambda\to+\infty\). Firstly, we show that \(\{u_{\lambda}\}\) is uniformly bounded above and below away from zero.
**Lemma 4.1**.: _There exists \(\sigma>0\) (independent of \(\lambda\)) such that \(\|u\|_{\mathcal{H}_{\lambda}}\geq\|u\|_{H^{1}(V)}\geq\sigma\) for all \(u\in\mathcal{M}_{\lambda}\)._
Proof.: Note that for all \(\varepsilon>0\), if \(s\geq e^{-\frac{1}{2}}\), then
\[e^{\frac{\varepsilon}{2}}s^{2+\varepsilon}\geq s^{2}. \tag{4.1}\]
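Inequality (4.1) can be verified directly: dividing both sides by \(s^{2}>0\),
\[e^{\frac{\varepsilon}{2}}s^{2+\varepsilon}\geq s^{2}\;\Longleftrightarrow\;s^{\varepsilon}\geq e^{-\frac{\varepsilon}{2}}\;\Longleftrightarrow\;s\geq e^{-\frac{1}{2}}.\]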
Since \(u\in\mathcal{M}_{\lambda}\), by Lemma 2.5 and (4.1), we have
\[0= J_{\lambda}^{\prime}(u)\cdot u^{+}=J_{\lambda}^{\prime}(u^{+}) \cdot u^{+}-\frac{1}{2}K_{V}(u)\] \[\geq \int_{V}\left(\Gamma(u^{+})+(\lambda a(x)+1)|u^{+}|^{2}\right)d \mu-\int_{V}|u^{+}|^{2}d\mu-\int_{V}|u^{+}|^{2}\log|u^{+}|^{2}d\mu\] \[= \|u^{+}\|_{\mathcal{H}_{\lambda}}^{2}-\int_{|u^{+}|<e^{-\frac{1}{ 2}}}\left(|u^{+}|^{2}+|u^{+}|^{2}\log|u^{+}|^{2}\right)d\mu-\int_{|u^{+}|\geq e ^{-\frac{1}{2}}}|u^{+}|^{2}d\mu\] \[-\int_{e^{-\frac{1}{2}}\leq|u^{+}|\leq 1}|u^{+}|^{2}\log|u^{+}|^{2 }d\mu-\int_{|u^{+}|>1}|u^{+}|^{2}\log|u^{+}|^{2}d\mu\] \[\geq \|u^{+}\|_{\mathcal{H}_{\lambda}}^{2}-e^{\frac{\varepsilon}{2}} \int_{|u^{+}|\geq e^{-\frac{1}{2}}}|u^{+}|^{2+\varepsilon}d\mu-C_{\varepsilon }\int_{|u^{+}|>1}|u^{+}|^{2+\varepsilon}d\mu\] \[\geq \|u^{+}\|_{\mathcal{H}_{\lambda}}^{2}-C_{\varepsilon}^{\prime} \int_{V}|u^{+}|^{2+\varepsilon}d\mu\] \[\geq \|u^{+}\|_{H^{1}(V)}^{2}-C_{\varepsilon}^{\prime\prime}\|u^{+} \|_{H^{1}(V)}^{2+\varepsilon}.\]
Then
\[\|u^{+}\|_{\mathcal{H}_{\lambda}}\geq\|u^{+}\|_{H^{1}(V)}\geq(C_{\varepsilon}^ {\prime\prime})^{-\frac{1}{\varepsilon}}>0.\]
Similarly, we get
\[\|u^{-}\|_{\mathcal{H}_{\lambda}}\geq\|u^{-}\|_{H^{1}(V)}\geq(C_{\varepsilon}^{ \prime\prime})^{-\frac{1}{\varepsilon}}>0.\]
Hence,
\[\|u\|_{\mathcal{H}_{\lambda}}^{2}\geq\|u\|_{H^{1}(V)}^{2}=\|u^{+}\|_{H^{1}(V)}^{ 2}+\|u^{-}\|_{H^{1}(V)}^{2}-K_{V}(u)>\|u^{+}\|_{H^{1}(V)}^{2}+\|u^{-}\|_{H^{1}(V )}^{2}\geq 2(C_{\varepsilon}^{\prime\prime})^{-\frac{2}{\varepsilon}}.\]
Thus we can choose \(\sigma=\sqrt{2}(C_{\varepsilon}^{\prime\prime})^{-\frac{1}{\varepsilon}}\) such that \(\|u\|_{\mathcal{H}_{\lambda}}\geq\|u\|_{H^{1}(V)}\geq\sigma\).
**Lemma 4.2**.: _There exists \(c_{0}>0\) (independent of \(\lambda\)) such that, if \(\{u_{k}\}\subset\mathcal{M}_{\lambda}\) is a minimizing sequence of \(J_{\lambda}\) with \(\lim_{k\to\infty}J_{\lambda}(u_{k})=m_{\lambda}\), then \(\|u_{k}\|_{\mathcal{H}_{\lambda}}\leq c_{0}\) for all \(k\) large enough._
_Proof._ Since \(\mathcal{M}_{\Omega}\subset\mathcal{M}_{\lambda}\), it is easily seen that \(m_{\lambda}\leq m_{\Omega}\) for any \(\lambda>0\). Since \(\{u_{k}\}\subset\mathcal{M}_{\lambda}\) and \(\lim_{k\to\infty}J_{\lambda}(u_{k})=m_{\lambda}\), we have
\[\begin{split}\lim_{k\to+\infty}J_{\lambda}(u_{k})&= \lim_{k\to+\infty}\left[J_{\lambda}(u_{k})-\frac{1}{2}J_{\lambda}^{\prime}(u_ {k})\cdot u_{k}^{+}-\frac{1}{2}J_{\lambda}^{\prime}(u_{k})\cdot u_{k}^{-} \right]\\ &=\lim_{k\to+\infty}\left(\frac{1}{2}\|u_{k}^{+}\|_{2}^{2}+\frac{ 1}{2}\|u_{k}^{-}\|_{2}^{2}\right)=m_{\lambda}\leq m_{\Omega}.\end{split} \tag{4.2}\]
By Lemma 2.5, Hölder's inequality and Young's inequality, for any \(\varepsilon\in(0,1)\), there exist \(C_{\varepsilon},C_{\varepsilon}^{\prime},C_{\varepsilon}^{\prime\prime}>0\) such that
\[\begin{split}\int_{V}|u_{k}^{\pm}|^{2}\log|u_{k}^{\pm}|^{2}d\mu \leq&\int_{V}(|u_{k}^{\pm}|^{2}\log|u_{k}^{\pm}|^{2})^{+}d\mu \leq C_{\varepsilon}\int_{V}|u_{k}^{\pm}|^{2+\varepsilon}d\mu\\ \leq& C_{\varepsilon}\left(\int_{V}|u_{k}^{\pm}|^{2} d\mu\right)^{\frac{1}{2}}\left(\int_{V}|u_{k}^{\pm}|^{2(1+\varepsilon)}d\mu \right)^{\frac{1}{2}}\\ \leq& C_{\varepsilon}^{\prime}\|u_{k}^{\pm}\|_{2}\|u_ {k}^{\pm}\|_{\mathcal{H}_{\lambda}}^{1+\varepsilon}\\ \leq&\frac{1}{2}\|u_{k}^{\pm}\|_{\mathcal{H}_{ \lambda}}^{2}+C_{\varepsilon}^{\prime\prime}\|u_{k}^{\pm}\|_{2}^{\frac{2}{1- \varepsilon}}.\end{split}\]
Since \(\{u_{k}\}\subset\mathcal{M}_{\lambda}\), we deduce that
\[\|u_{k}^{\pm}\|_{\mathcal{H}_{\lambda}}^{2}-\frac{1}{2}K_{V}^{k}(u_{k})\leq\frac{1}{2}\|u_{k}^{\pm}\|_{\mathcal{H}_{\lambda}}^{2}+C_{\varepsilon}^{\prime\prime}\|u_{k}^{\pm}\|_{2}^{\frac{2}{1-\varepsilon}}+\|u_{k}^{\pm}\|_{2}^{2}.\]
This together with (4.2) implies
\[\lim_{k\to+\infty}\left(\|u_{k}^{\pm}\|_{\mathcal{H}_{\lambda}}^{2}-\frac{1}{2}K_{V}^{k}(u_{k})\right)\leq C_{\varepsilon}^{\prime\prime\prime}\left(m_{\Omega}^{\frac{1}{1-\varepsilon}}+m_{\Omega}\right).\]
From Lemma 3.6 we know \(m_{\lambda}>0\) and \(m_{\Omega}>0\). Therefore it suffices to choose \(c_{0}=C_{\varepsilon}^{\prime\prime\prime}\left(m_{\Omega}^{\frac{1}{1- \varepsilon}}+m_{\Omega}\right)\).
Secondly, we establish the following relation between \(m_{\lambda}\) and \(m_{\Omega}\).
**Lemma 4.3**.: \(m_{\lambda}\to m_{\Omega}\) _as \(\lambda\to+\infty\)._
_Proof._ By \(m_{\lambda}\leq m_{\Omega}\) for any \(\lambda>0\), passing to subsequence if necessary, we may take a sequence \(\lambda_{k}\to+\infty\) such that
\[\lim_{k\to\infty}m_{\lambda_{k}}=\eta\leq m_{\Omega}, \tag{4.3}\]
where \(m_{\lambda_{k}}=\inf_{u\in\mathcal{M}_{\lambda_{k}}}J_{\lambda_{k}}(u)\). Then, combining Lemma 4.1 and (3.7), it is easy to get \(\eta>0\). Let \(u_{\lambda_{k}}\in\mathcal{M}_{\lambda_{k}}\) be the minimizer obtained in Section 3, i.e. \(J_{\lambda_{k}}(u_{\lambda_{k}})=m_{\lambda_{k}}\). By Lemma 4.2, we have that \(\{u_{\lambda_{k}}\}\) is uniformly bounded in \(\mathcal{H}_{\lambda_{k}}\). Consequently, \(\{u_{\lambda_{k}}\}\) is also bounded in \(H^{1}(V)\) and thus, up to a subsequence, there exists some \(u_{0}\in H^{1}(V)\) such that
\[\begin{cases}u_{\lambda_{k}}\rightharpoonup u_{0}&\text{weakly in }H^{1}(V),\\ u_{\lambda_{k}}\to u_{0}&\text{point-wisely in }V,\\ u_{\lambda_{k}}\to u_{0}&\text{strongly in }L^{p}(V)\text{ for }p\in[2,+\infty].\end{cases} \tag{4.4}\]
We claim that \(u_{0}\mid_{\Omega^{c}}=0\). In fact, suppose that there exists a vertex \(x_{0}\in\Omega^{c}\) such that \(u_{0}(x_{0})\neq 0\). Since \(u_{\lambda_{k}}\in\mathcal{M}_{\lambda_{k}}\), we have
\[J_{\lambda_{k}}(u_{\lambda_{k}}) =\frac{1}{2}\|u_{\lambda_{k}}\|_{\mathcal{H}_{\lambda_{k}}}^{2}- \frac{1}{2}\int_{V}u_{\lambda_{k}}^{2}\log u_{\lambda_{k}}^{2}d\mu\] \[\geq\frac{\lambda_{k}}{2}\int_{V}a(x)u_{\lambda_{k}}^{2}d\mu- \frac{1}{2}\int_{V}(u_{\lambda_{k}}^{2}\log u_{\lambda_{k}}^{2})^{+}d\mu\] \[\geq\frac{\lambda_{k}}{2}\int_{V}a(x)u_{\lambda_{k}}^{2}d\mu- \frac{C_{\varepsilon}}{2}\int_{V}|u_{\lambda_{k}}|^{2+\varepsilon}d\mu\] \[\geq\frac{\lambda_{k}}{2}\sum_{x\in V}\mu(x)a(x)u_{\lambda_{k}}^{ 2}(x)-C_{\varepsilon}^{\prime}\|u_{\lambda_{k}}\|_{H^{1}(V)}^{2+\varepsilon}\] \[\geq\frac{\lambda_{k}}{2}\mu_{\min}a(x_{0})u_{\lambda_{k}}^{2}(x _{0})-C_{\varepsilon}^{\prime\prime}.\]
Since \(a(x_{0})>0\), \(u_{\lambda_{k}}(x_{0})\to u_{0}(x_{0})\neq 0\) and \(\lambda_{k}\to+\infty\), we get
\[\lim_{k\to+\infty}J_{\lambda_{k}}(u_{\lambda_{k}})=+\infty,\]
which is in contradiction with (4.3). Hence the claim holds.
Since \(u_{0}\mid_{\Omega^{c}}=0\), by the weak lower semi-continuity of the norm \(\|\cdot\|_{H^{1}(V)}\) and Fatou's lemma, taking \(u_{\lambda_{k}}^{+}\) as test function in equation (1.2), we get
\[\int_{\Omega\cup\partial\Omega}\Gamma(u_{0}^{+})d\mu+\int_{ \Omega}|u_{0}^{+}|^{2}d\mu-\int_{\{\Omega:|u_{0}^{+}|\leq 1\}}|u_{0}^{+}|^{2} \log|u_{0}^{+}|^{2}d\mu-\frac{1}{2}K_{\Omega}^{0}(u)\] \[\leq \int_{V}\left(\Gamma(u_{0}^{+})+|u_{0}^{+}|^{2}\right)d\mu-\int_{ \{V:|u_{0}^{+}|\leq 1\}}|u_{0}^{+}|^{2}\log|u_{0}^{+}|^{2}d\mu-\frac{1}{2}K_{V}^{0} (u)\] \[\leq \liminf_{k\to+\infty}\left[\int_{V}\left(\Gamma(u_{\lambda_{k}}^{ +})+\left(\lambda_{k}a(x)+1\right)|u_{\lambda_{k}}^{+}|^{2}\right)d\mu-\int_{\{ V:|u_{\lambda_{k}}^{+}|\leq 1\}}|u_{\lambda_{k}}^{+}|^{2}\log|u_{\lambda_{k}}^{+}|^{2 }d\mu-\frac{1}{2}K_{V}^{\lambda_{k}}(u)\right]\] \[= \liminf_{k\to+\infty}\left[\int_{V}|u_{\lambda_{k}}^{+}|^{2}d\mu +\int_{\{V:|u_{\lambda_{k}}^{+}|>1\}}|u_{\lambda_{k}}^{+}|^{2}\log|u_{\lambda _{k}}^{+}|^{2}d\mu\right]\] \[= \int_{V}|u_{0}^{+}|^{2}d\mu+\int_{\{V:|u_{0}^{+}|>1\}}|u_{0}^{+} |^{2}\log|u_{0}^{+}|^{2}d\mu\] \[= \int_{\Omega}|u_{0}^{+}|^{2}d\mu+\int_{\{\Omega:|u_{0}^{+}|>1\}} |u_{0}^{+}|^{2}\log|u_{0}^{+}|^{2}d\mu,\]
where
\[K_{\Omega}^{0}(u)=\sum_{x\in\Omega\cup\partial\Omega}\sum_{y\sim x}\omega_{xy}\left[u_{0}^{+}(x)u_{0}^{-}(y)+u_{0}^{-}(x)u_{0}^{+}(y)\right],\] \[K_{V}^{0}(u)=\sum_{x\in V}\sum_{y\sim x}\omega_{xy}\left[u_{0}^{+}(x)u_{0}^{-}(y)+u_{0}^{-}(x)u_{0}^{+}(y)\right],\] \[K_{V}^{\lambda_{k}}(u)=\sum_{x\in V}\sum_{y\sim x}\omega_{xy}\left[u_{\lambda_{k}}^{+}(x)u_{\lambda_{k}}^{-}(y)+u_{\lambda_{k}}^{-}(x)u_{\lambda_{k}}^{+}(y)\right].\]
Then
\[J_{\Omega}^{\prime}(u_{0})\cdot u_{0}^{+}=\int_{\Omega\cup\partial\Omega} \Gamma(u_{0}^{+})d\mu-\int_{\Omega}|u_{0}^{+}|^{2}\log|u_{0}^{+}|^{2}d\mu-\frac {1}{2}K_{\Omega}^{0}(u)\leq 0.\]
Similarly, it holds that
\[J_{\Omega}^{\prime}(u_{0})\cdot u_{0}^{-}=\int_{\Omega\cup\partial\Omega} \Gamma(u_{0}^{-})d\mu-\int_{\Omega}|u_{0}^{-}|^{2}\log|u_{0}^{-}|^{2}d\mu-\frac{1 }{2}K_{\Omega}^{0}(u)\leq 0.\]
In view of Lemma 3.4 and Lemma 3.5, there exist \(s,t\in(0,1]\) such that \(\widetilde{u}_{0}=su_{0}^{+}+tu_{0}^{-}\in\mathcal{M}_{\Omega}\). Then
\[m_{\Omega}\leq J_{\Omega}(\widetilde{u}_{0})=J_{\Omega}(\widetilde{u}_{0})-\frac{1 }{2}J_{\Omega}^{\prime}(\widetilde{u}_{0})\cdot(su_{0}^{+})-\frac{1}{2}J_{ \Omega}^{\prime}(\widetilde{u}_{0})\cdot(tu_{0}^{-})\] \[= \frac{s^{2}}{2}\|u_{0}^{+}\|_{L^{2}(\Omega)}^{2}+\frac{t^{2}}{2}\| u_{0}^{-}\|_{L^{2}(\Omega)}^{2}\] \[\leq \liminf_{k\to+\infty}\left[\frac{1}{2}\|u_{\lambda_{k}}^{+}\|_{2 }^{2}+\frac{1}{2}\|u_{\lambda_{k}}^{-}\|_{2}^{2}\right]\] \[= \liminf_{k\to\infty}\left[J_{\lambda_{k}}(u_{\lambda_{k}})-\frac {1}{2}J_{\lambda_{k}}^{\prime}(u_{\lambda_{k}})\cdot u_{\lambda_{k}}^{+}-\frac {1}{2}J_{\lambda_{k}}^{\prime}(u_{\lambda_{k}})\cdot u_{\lambda_{k}}^{-}\right]\] \[= \liminf_{k\to+\infty}J_{\lambda_{k}}(u_{\lambda_{k}})=\eta\leq m _{\Omega}.\]
Hence, \(\lim_{\lambda\to+\infty}m_{\lambda}=m_{\Omega}\). This completes the proof.
Proof of Theorem 1.3.: Assume that \(u_{\lambda_{k}}\in\mathcal{M}_{\lambda_{k}}\) satisfies \(J_{\lambda_{k}}(u_{\lambda_{k}})=m_{\lambda_{k}}\). We shall prove that \(u_{\lambda_{k}}\) converges in \(H^{1}(V)\) to a least energy sign-changing solution \(u_{0}\) of equation (1.8) along a subsequence.
Lemma 4.2 gives that \(\{u_{\lambda_{k}}\}\) is uniformly bounded in \(\mathcal{H}_{\lambda_{k}}\). Consequently, \(\{u_{\lambda_{k}}\}\) is also bounded in \(H^{1}(V)\). Therefore, we can assume that, for any \(p\in[2,\infty)\), \(u_{\lambda_{k}}\to u_{0}\) in \(L^{p}(V)\) and \(u_{\lambda_{k}}\rightharpoonup u_{0}\) in \(H^{1}(V)\). Moreover, from Lemma 4.1 and the strong convergence in \(L^{2}(V)\) we get that \(u_{0}\not\equiv 0\), and, arguing as in the proof of Lemma 4.3, \(u_{0}\mid_{\Omega^{c}}=0\). Then it suffices to show that, as \(k\to+\infty\), we have \(\lambda_{k}\int_{V}a(x)|u_{\lambda_{k}}^{\pm}|^{2}d\mu\to 0\) and \(\int_{V}\Gamma(u_{\lambda_{k}}^{\pm})d\mu\to\int_{V}\Gamma(u_{0}^{\pm})d\mu\). If not, we may assume that
\[\lim_{k\to+\infty}\lambda_{k}\int_{V}a(x)|u_{\lambda_{k}}^{\pm}|^{2}d\mu=\delta >0.\]
Since \(u_{0}\mid_{\Omega^{c}}=0\), by weak lower semi-continuity of the norm \(\|\cdot\|_{H^{1}(V)}\) and Fatou's lemma, taking \(u_{\lambda_{k}}^{+}\) as test function in equation (1.2), we get
\[\int_{\Omega\cup\partial\Omega}\Gamma(u_{0}^{+})d\mu+\int_{ \Omega}|u_{0}^{+}|^{2}d\mu-\int_{\{\Omega:|u_{0}^{+}|\leq 1\}}|u_{0}^{+}|^{2} \log|u_{0}^{+}|^{2}d\mu-\frac{1}{2}K_{\Omega}^{0}(u)\] \[< \int_{V}\left(\Gamma(u_{0}^{+})+|u_{0}^{+}|^{2}\right)d\mu+\delta -\int_{\{V:|u_{0}^{+}|\leq 1\}}|u_{0}^{+}|^{2}\log|u_{0}^{+}|^{2}d\mu-\frac{1}{2}K_{V} ^{0}(u)\] \[\leq \liminf_{k\to+\infty}\left[\int_{V}\left(\Gamma(u_{\lambda_{k}}^{ +})+(\lambda_{k}a(x)+1)\,|u_{\lambda_{k}}^{+}|^{2}\right)d\mu-\int_{\{V:|u_{ \lambda_{k}}^{+}|\leq 1\}}|u_{\lambda_{k}}^{+}|^{2}\log|u_{\lambda_{k}}^{+}|^{2}d\mu- \frac{1}{2}K_{V}^{\lambda_{k}}(u)\right]\] \[= \liminf_{k\to+\infty}\left[\int_{V}|u_{\lambda_{k}}^{+}|^{2}d\mu +\int_{\{V:|u_{\lambda_{k}}^{+}|>1\}}|u_{\lambda_{k}}^{+}|^{2}\log|u_{\lambda_ {k}}^{+}|^{2}d\mu\right]\] \[= \int_{V}|u_{0}^{+}|^{2}d\mu+\int_{\{V:|u_{0}^{+}|>1\}}|u_{0}^{+}| ^{2}\log|u_{0}^{+}|^{2}d\mu\] \[= \int_{\Omega}|u_{0}^{+}|^{2}d\mu+\int_{\{\Omega:|u_{0}^{+}|>1\}}| u_{0}^{+}|^{2}\log|u_{0}^{+}|^{2}d\mu,\]
which implies that
\[J_{\Omega}^{\prime}(u_{0})\cdot u_{0}^{+}=\int_{\Omega\cup\partial\Omega}\Gamma(u_ {0}^{+})d\mu-\int_{\Omega}|u_{0}^{+}|^{2}\log|u_{0}^{+}|^{2}d\mu-\frac{1}{2}K_{ \Omega}^{0}(u)<0. \tag{4.5}\]
Similarly, it holds that
\[J_{\Omega}^{\prime}(u_{0})\cdot u_{0}^{-}=\int_{\Omega\cup\partial\Omega}\Gamma(u _{0}^{-})d\mu-\int_{\Omega}|u_{0}^{-}|^{2}\log|u_{0}^{-}|^{2}d\mu-\frac{1}{2}K_{ \Omega}^{0}(u)<0. \tag{4.6}\]
By similar arguments as above, if
\[\lim_{k\rightarrow+\infty}\int_{V}\Gamma(u_{\lambda_{k}}^{\pm})d\mu>\int_{V} \Gamma(u_{0}^{\pm})d\mu,\]
we also have (4.5) and (4.6).
In view of Lemma 3.4 and Lemma 3.5, there exist two constants \(s,t\in(0,1)\) such that \(\widetilde{u}_{0}=su_{0}^{+}+tu_{0}^{-}\in\mathcal{M}_{\Omega}\). Consequently, we have
\[m_{\Omega}\leq J_{\Omega}(\widetilde{u}_{0})=J_{\Omega}(\widetilde{u}_{0})- \frac{1}{2}J_{\Omega}^{\prime}(\widetilde{u}_{0})\cdot(su_{0}^{+})-\frac{1}{2} J_{\Omega}^{\prime}(\widetilde{u}_{0})\cdot(tu_{0}^{-})\] \[= \frac{s^{2}}{2}\|u_{0}^{+}\|_{L^{2}(\Omega)}^{2}+\frac{t^{2}}{2} \|u_{0}^{-}\|_{L^{2}(\Omega)}^{2}\] \[< \frac{1}{2}\|u_{0}^{+}\|_{2}^{2}+\frac{1}{2}\|u_{0}^{-}\|_{2}^{2}\] \[\leq \liminf_{k\rightarrow+\infty}\left[\frac{1}{2}\|u_{\lambda_{k}}^ {+}\|_{2}^{2}+\frac{1}{2}\|u_{\lambda_{k}}^{-}\|_{2}^{2}\right]\] \[= \liminf_{k\rightarrow+\infty}\left[J_{\lambda_{k}}(u_{\lambda_{k} })-\frac{1}{2}J_{\lambda_{k}}^{\prime}(u_{\lambda_{k}})\cdot u_{\lambda_{k}}^ {+}-\frac{1}{2}J_{\lambda_{k}}^{\prime}(u_{\lambda_{k}})\cdot u_{\lambda_{k}}^ {-}\right]\] \[= \liminf_{k\rightarrow+\infty}J_{\lambda_{k}}(u_{\lambda_{k}})\] \[= \liminf_{k\rightarrow+\infty}m_{\lambda_{k}}=m_{\Omega},\]
which leads to a contradiction. Hence, we obtain that \(u_{\lambda_{k}}\to u_{0}\) in \(H^{1}(V)\) and \(u_{0}\) is a least energy sign-changing solution of problem (1.8).
## Declaration of competing interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
## Data availability
No data was used for the research described in the article.
## Acknowledgements
The research of Xiaojun Chang was supported by National Natural Science Foundation of China (Grant No.11971095), while Duokui Yan was supported by National Natural Science Foundation of China (Grant No.11871086).
|
2310.10475 | The monotone-light factorization for n-categories via n-preorders | Starting with a symmetric monoidal adjunction with certain properties, one derives another symmetric monoidal adjunction with the same properties between the respective categories of all V-categories. If one begins with a reflection of a full replete subcategory, the derived adjunction is also a reflection of the same kind. Semi-left-exactness (also called admissibility in categorical Galois theory) or the stronger stable units property is inherited by the derived reflection. Applying these results, one concludes that the reflection of the category of all n-categories into the category of n-preorders has stable units. Then, it is also shown that this reflection determines a monotone-light factorization system on n-categories, n>=1, and that the light morphisms are precisely the n-functors faithful with respect to n-cells. In order to achieve such results, it was also shown that n-functors surjective both on vertically composable triples of horizontally composable pairs of n-cells, and on horizontally composable triples of vertically composable pairs of n-cells are effective descent morphisms in the category of all n-categories nCat, n>=1. | João J. Xarez | 2023-10-16T14:57:37Z | http://arxiv.org/abs/2310.10475v2 |
###### Abstract.
Starting with a symmetric monoidal adjunction with certain properties, one derives another symmetric monoidal adjunction with the same properties between the respective categories of all \(\mathcal{V}\)-categories. If one begins with a reflection of a full replete subcategory, the derived adjunction is also a reflection of the same kind. Semi-left-exactness (also called admissibility in categorical Galois theory) or the stronger stable units property is inherited by the derived reflection. Applying these results, one concludes that the reflection of the category of all \(n\)-categories into the category of \(n\)-preorders has stable units. Then, it is also shown that this reflection determines a monotone-light factorization system on \(n\)-categories, \(n\geq 1\), and that the light morphisms are precisely the \(n\)-functors faithful with respect to \(n\)-cells. In order to achieve such results, it was also shown that \(n\)-functors surjective both on vertically composable triples of horizontally composable pairs of \(n\)-cells, and on horizontally composable triples of vertically composable pairs of \(n\)-cells are effective descent morphisms in the category of all n-categories \(nCat\), \(n\geq 1\).
Key words and phrases: symmetric monoidal categories, categorical Galois theory, monotone-light factorization, n-categories. 2020 Mathematics Subject Classification: 18M05, 18D20, 18A32, 18E50, 18N10.
## 1. Introduction
It will be shown that the reflection \(nCat\to nPreord\) of the category of all n-categories into the category of n-preorders determines a monotone-light factorization system on \(nCat\), for every positive integer \(n\). In order to achieve such a result it was also proved that the reflection \(nCat\to nPreord\) has stable units, a stronger condition than admissibility in categorical Galois theory, and that the n-functors surjective both on vertically composable triples of horizontally composable pairs of n-cells, and on horizontally composable triples of vertically composable pairs of n-cells (cf. Proposition 10.1), are effective descent morphisms in \(nCat\). This generalizes the results in our previous papers [9] and [10].
The characterization of effective descent morphisms is a generalization of what was done in [10], considering a convenient description of
the category of all n-categories as a full subcategory of a presheaf category, and using some results in [4].
In order to prove that the reflection of n-categories into n-preorders has stable units in the sense of [2] (see [1]), we took a very different path from the one followed in [10]. In fact, we considered the category of all \(n\)-categories in the context of \(\mathcal{V}\)-categories (see [6]).
Beginning with a symmetric monoidal adjunction \(\mathbb{C}\to\mathbb{X}\) with certain properties, one derives another symmetric monoidal adjunction \(\mathbb{C}-Cat\to\mathbb{X}-Cat\) with the same properties between categories of all \(\mathcal{V}\)-categories (\(\mathcal{V}\in\{\mathbb{C},\ \mathbb{X}\}\)). In this context, if one begins with a reflection such that \(\mathbb{X}\) is a full replete subcategory of \(\mathbb{C}\), the derived adjunction is also a reflection in which \(\mathbb{X}-Cat\) is a full replete subcategory of \(\mathbb{C}-Cat\). It follows trivially, according to the way limits are calculated in \(\mathbb{C}-Cat\), provided they exist in \(\mathbb{C}\), that semi-left-exactness (also called admissibility in categorical Galois theory) or the stable units property is inherited by the derived reflection. For instance, if one begins with the stable units reflection from the category of all categories into the category of all preorders, then iterating one ends with a stable units reflection from \(nCat\) into \(nPreord\).
In more detail, consider _a base monoidal adjunction_, that is, an adjunction
\[(F,G,\eta,\varepsilon):(\mathbb{C},\otimes,E,\alpha,\gamma,\rho)\to(\mathbb{X},\Diamond,I,\mathfrak{a},t,r)\]
such that:
* \(F\) and \(G\) are (strict) morphisms of (symmetric) monoidal categories,
* the counit morphism of \(I\) equals the identity morphism of \(I\), \(\varepsilon_{I}=1_{I}:FG(I)=I\to I\), and
* for every pair of objects \(X,Y\in\mathbb{X}\), the counit of the tensor product equals the tensor product of the counits, \(\varepsilon_{X}\Diamond\varepsilon_{Y}=\varepsilon_{X\Diamond Y}:FG(X\Diamond Y )\to X\Diamond Y\).
Starting from any base monoidal adjunction as defined (cf. section 2), we will show how to obtain another derived base monoidal adjunction (cf. section 5) between the \(\mathcal{V}\)-categories \(\mathbb{C}-Cat\) and \(\mathbb{X}-Cat\), the category of all \(\mathbb{C}\)-categories and the category of all \(\mathbb{X}\)-categories, respectively,
\[(\mathbb{F},\mathbb{G},\Theta,\Upsilon):(\mathbb{C}-Cat,\bigcirc,\mathfrak{E},\wedge,\Gamma,\Diamond)\to(\mathbb{X}-Cat,\nabla,\mathcal{I},\vee,\top, \Re).\]
This is given and proved in detail in sections 2 to 6.
In section 7, it is shown that limits in \(\mathbb{C}-Cat\) may be calculated "hom-componentwise" in \(\mathbb{C}\). It follows that the derived monoidal structure is cartesian if it is so for the starting monoidal structure (cf. Remark 7.1).
In subsection 4.2, in order to apply the above results to categorical Galois theory (cf. [3]), when \(\mathbb{X}\) is a full replete subcategory of \(\mathbb{C}\) and \(G:\mathbb{X}\subseteq\mathbb{C}\) is the inclusion functor, we show that it is a special case of the base monoidal adjunction, which we call _base monoidal reflection_, and that using the same process of deriving a new base monoidal adjunction one gets a new base monoidal reflection.
Furthermore, if the initial base monoidal reflection is semi-left-exact, or has the stronger property of having stable units (notions introduced in [2]), the same is valid for the derived monoidal reflection, as shown in section 8.
Hence, the process can be iterated any number of times.
The second part of the paper consists in applying the previous results to the base monoidal reflection \(H\vdash I:Cat\to Preord\), which has stable units, to conclude that there is a reflection with stable units \(nCat\to nPreord\), from the category of all \(n\)-categories (with cartesian monoidal structure), \(n\in\mathbb{N}\). We also conclude that there is a monotone-light factorization associated to every reflection \(nCat\to nPreord\), \(n\in\mathbb{N}\), in a similar way to that used in [10] to obtain the same results for the special case \(n=2\) (cf. sections 9, 10 and 11).
A needed characterization of a class of effective descent morphisms in \(nCat\) is also given (cf. section 10), together with characterizations of the classes of vertical and stably-vertical n-functors, trivial coverings and coverings (cf. section 12, and sections 13 and 14, respectively), for every \(n\in\mathbb{N}\).
## 2. The base monoidal adjunction
Consider an adjunction \((A)\)
\[(F,G,\eta,\varepsilon):\mathbb{C}\to\mathbb{X},\]
such that
\((B)\) both the left-adjoint \(F\) and the right-adjoint \(G\) are (strict) _morphisms of symmetric monoidal categories_, with respect to the _monoidal categories_
\[(\mathbb{C},\otimes,E,\alpha,\gamma,\rho)\;and\;(\mathbb{X},\Diamond,I, \mathfrak{a},t,r),\]
meaning that:
\(E\in\mathbb{C}\), \(I\in\mathbb{X}\), \(F(E)=I\) and \(G(I)=E\);
\(\otimes:\mathbb{C}\times\mathbb{C}\to\mathbb{C}\) and \(\Diamond:\mathbb{X}\times\mathbb{X}\to\mathbb{X}\) are (bi)functors;
\(F\)_preserves_\(\otimes\) and \(G\)_preserves_\(\Diamond\), i.e.,
\[F(A\otimes B\overset{f\otimes g}{\longrightarrow}C\otimes D)=F(A)\Diamond F (B)\overset{Ff\Diamond Fg}{\longrightarrow}F(C)\Diamond F(D),\]
\[G(X\Diamond Y\overset{u\Diamond v}{\longrightarrow}Z\Diamond W)=G(X)\otimes G(Y)\overset{Gu\otimes Gv}{\longrightarrow}G(Z)\otimes G(W),\]
for all morphisms \(f:A\to C\) and \(g:B\to D\) in \(\mathbb{C}\), \(u:X\to Z\) and \(v:Y\to W\) in \(\mathbb{X}\);
\[\alpha_{A,B,C}:(A\otimes B)\otimes C\to A\otimes(B\otimes C),\]
\[\mathfrak{a}_{X,Y,Z}:(X\lozenge Y)\loz Z\to X\lozenge(Y\loz Z),\]
\[\gamma_{A,B}:A\otimes B\to B\otimes A,\ \rho_{A}:A\otimes E\to A,\]
\[t_{X,Y}:X\lozenge Y\to Y\loz X\ and\ r_{X}:X\lozenge I\to X\]
are natural isomorphisms subject to the _coherence axioms1_ expressing the commutativity of the following diagrams \((A,B,C,D\in\mathbb{C}\); \(X,Y,Z,W\in\mathbb{X})\),
Footnote 1: Cf. [7, §VII.2] for more information about the consequences of these axioms.
\[\begin{CD}((D\otimes A)\otimes B)\otimes C\smash{\mathop{\longrightarrow} \limits^{\alpha_{D\otimes A,B,C}}}(D\otimes A)\otimes(B\otimes C)\smash{ \mathop{\longrightarrow}\limits^{\alpha_{D,A,B\otimes C}}}D\otimes(A\otimes(B \otimes C))\\ @V{}V{\alpha_{D,A,B}\otimes 1_{C}}V@V{}V{1_{D}\otimes\alpha_{A,B,C}}V\\ (D\otimes(A\otimes B))\otimes C\smash{\mathop{\longrightarrow}\limits^{\alpha_{D,A\otimes B,C}}}D\otimes((A\otimes B)\otimes C),\end{CD}\]
\[\begin{CD}(A\otimes E)\otimes B@>{\alpha_{A,E,B}}>>A\otimes(E\otimes B)\\@V{\rho_{A}\otimes 1_{B}}VV@VV{1_{A}\otimes(\rho_{B}\circ\gamma_{E,B})}V\\A\otimes B@=A\otimes B\end{CD}\]
and
\[\begin{CD}(A\otimes B)\otimes C\smash{\mathop{\longrightarrow}\limits^{\alpha_{A,B,C}}}A \otimes(B\otimes C)\smash{\mathop{\longrightarrow}\limits^{\gamma_{A,B\otimes C }}}(B\otimes C)\otimes A\\ @V{}V{\gamma_{A,B}\otimes 1_{C}}V@V{}V{\alpha_{B,A,C}}V@V{}V{(\text{in }\mathbb{C})}V\\ (B\otimes A)\otimes C\smash{\mathop{\longrightarrow}\limits^{\alpha_{B,A,C}}}B \otimes(A\otimes C)\smash{\mathop{\longrightarrow}\limits^{\alpha_{B,A,C}}}B \otimes(C\otimes A)\\ ((W\lozenge X)\lozenge Y)\loz Z\smash{\mathop{\longrightarrow}\limits^{ \mathfrak{a}_{W\loz X,Y,Z}}}(W\lozenge X)\lozenge(Y\loz Z)\smash{\mathop{ \longrightarrow}\limits^{\mathfrak{a}_{W,X,Y\loz Z}}}W\lozenge(X\lozenge(Y \loz Z))\\ @V{}V{\mathfrak{a}_{W,X,Y\lozenge}}V@V{}V{1_{W}\lozenge\mathfrak{a}_{X,Y,Z}}V\\ (W\lozenge(X\lozenge Y))\loz Z\smash{\mathop{\longrightarrow}\limits^{ \mathfrak{a}_{W,X\lozenge Y,Z}}}W\lozenge((X\lozenge Y)\loz Z),\end{CD}\]
\[\begin{CD}(X\Diamond I)\Diamond Y@>{\mathfrak{a}_{X,I,Y}}>>X\Diamond(I\Diamond Y)\\@V{r_{X}\Diamond 1_{Y}}VV@VV{1_{X}\Diamond(r_{Y}\circ t_{I,Y})}V\\X\Diamond Y@=X\Diamond Y\end{CD}\]
and
\[\begin{CD}(X\Diamond Y)\Diamond Z@>{\mathfrak{a}_{X,Y,Z}}>>X\Diamond(Y\Diamond Z)@>{t_{X,Y\Diamond Z}}>>(Y\Diamond Z)\Diamond X\\@V{t_{X,Y}\Diamond 1_{Z}}VV@.@VV{\mathfrak{a}_{Y,Z,X}}V\\(Y\Diamond X)\Diamond Z@>{\mathfrak{a}_{Y,X,Z}}>>Y\Diamond(X\Diamond Z)@>{1_{Y}\Diamond t_{X,Z}}>>Y\Diamond(Z\Diamond X)\end{CD}\quad(\text{in }\mathbb{X}).\]
Moreover, as stated in the Introduction, the adjunction is required to satisfy the following two conditions:
* the counit morphism of \(I\) equals the identity morphism of \(I\), \(\varepsilon_{I}=1_{I}:FG(I)=I\to I\), and
* for every pair of objects \(X,Y\in\mathbb{X}\), the counit of the tensor product equals the tensor product of the counits, \(\varepsilon_{X}\Diamond\varepsilon_{Y}=\varepsilon_{X\Diamond Y}:FG(X\Diamond Y)\to X\Diamond Y\).
These conditions can also be read on the unit \(\eta\):
\((i)\) under the second condition, \(\eta_{A\otimes B}=\eta_{A}\otimes\eta_{B}\) for all \(A,B\in\mathbb{C}\); indeed, the equalities
\[(\varepsilon_{F(A)}\Diamond\varepsilon_{F(B)})\circ F(\eta_{A}\otimes\eta_{B})\overset{\text{hypothesis}}{=}\varepsilon_{F(A)\Diamond F(B)}\circ F(\eta_{A}\otimes\eta_{B})\overset{(B)}{=}\varepsilon_{F(A\otimes B)}\circ F(\eta_{A}\otimes\eta_{B})\]
and
\[(\varepsilon_{F(A)}\Diamond\varepsilon_{F(B)})\circ F(\eta_{A}\otimes\eta_{B})\overset{(B)}{=}(\varepsilon_{F(A)}\circ F\eta_{A})\Diamond(\varepsilon_{F(B)}\circ F\eta_{B})=1_{F(A)}\Diamond 1_{F(B)}=1_{F(A\otimes B)}\]
imply that \(\eta_{A\otimes B}=\eta_{A}\otimes\eta_{B}\), because
\(\varepsilon_{F(A\otimes B)}\circ F\eta_{A\otimes B}=\varepsilon_{F(A\otimes B )}\circ F(\eta_{A}\otimes\eta_{B})\) and \(\varepsilon_{F(A\otimes B)}\) is a counit morphism in the adjunction \((A)\).
\((ii)\)
\((\Rightarrow)\)\(\varepsilon_{F(E)}\circ F\eta_{E}=1_{F(E)}\), because \((F,G,\eta,\varepsilon)\) is an adjunction (A),
\(\Leftrightarrow\varepsilon_{I}\circ F\eta_{E}=1_{I}\), because \(F(E)=I\) by (B),
\(\Leftrightarrow\varepsilon_{I}=1_{I}\), because \(\eta_{E}=1_{E}\) by hypothesis;
\((\Leftarrow)\)\(G\varepsilon_{I}\circ\eta_{G(I)}=1_{G(I)}\), because \((F,G,\eta,\varepsilon)\) is an adjunction (A),
\(\Leftrightarrow G\varepsilon_{I}\circ\eta_{E}=1_{E}\), because \(G(I)=E\) by (B),
\(\Leftrightarrow\eta_{E}=1_{E}\), because \(\varepsilon_{I}=1_{I}\) by hypothesis.
The data in this section 2 will be called _the base monoidal adjunction_.
## 3. Categories of all \(\mathcal{V}\)-categories (\(\mathcal{V}=\mathbb{C},\mathbb{X}\))
A \(\mathbb{C}\)-category2\(\mathcal{A}\) consists of a set of objects \(ob(\mathcal{A})\), a _hom-object_\(\mathcal{A}(a,b)\in\mathbb{C}\) for each ordered pair \((a,b)\) of objects in \(\mathcal{A}\), a _composition law_\(M^{\mathcal{A}}_{a,b,c}:\mathcal{A}(b,c)\otimes\mathcal{A}(a,b)\to\mathcal{A}(a,c)\) for each triple of objects, and an _identity element_\(j^{\mathcal{A}}_{a}:E\to\mathcal{A}(a,a)\) for each object, subject to the _associativity_ and _unit axioms_ expressed by the commutativity of the following two diagrams
Footnote 2: Usually called \(\mathcal{V}\)-category, cf. [6, §1.2]. We will make \(\mathcal{V}=\mathbb{C}\) or \(\mathcal{V}=\mathbb{X}\) in the present paper.
\[\begin{CD}(\mathcal{A}(c,d)\otimes\mathcal{A}(b,c))\otimes\mathcal{A}(a,b)@>{\alpha}>>\mathcal{A}(c,d)\otimes(\mathcal{A}(b,c)\otimes\mathcal{A}(a,b))@>{1\otimes M^{\mathcal{A}}_{a,b,c}}>>\mathcal{A}(c,d)\otimes\mathcal{A}(a,c)\\@V{M^{\mathcal{A}}_{b,c,d}\otimes 1}VV@.@VV{M^{\mathcal{A}}_{a,c,d}}V\\\mathcal{A}(b,d)\otimes\mathcal{A}(a,b)@>{M^{\mathcal{A}}_{a,b,d}}>>\mathcal{A}(a,d)@=\mathcal{A}(a,d)\end{CD}\]
and
\[\begin{CD}E\otimes\mathcal{A}(a,b)@>{j^{\mathcal{A}}_{b}\otimes 1}>>\mathcal{A}(b,b)\otimes\mathcal{A}(a,b)\\@V{\rho\circ\gamma}VV@VV{M^{\mathcal{A}}_{a,b,b}}V\\\mathcal{A}(a,b)@=\mathcal{A}(a,b)\end{CD}\qquad\begin{CD}\mathcal{A}(a,b)\otimes E@>{1\otimes j^{\mathcal{A}}_{a}}>>\mathcal{A}(a,b)\otimes\mathcal{A}(a,a)\\@V{\rho}VV@VV{M^{\mathcal{A}}_{a,a,b}}V\\\mathcal{A}(a,b)@=\mathcal{A}(a,b)\end{CD}\]
A \(\mathbb{C}\)-functor \(T:\mathcal{A}\to\mathcal{B}\) between \(\mathbb{C}\)-categories consists of a function \(ob(T):ob(\mathcal{A})\to ob(\mathcal{B})\) together with, for each pair \(a,b\in ob(\mathcal{A})\), a morphism \(T_{a,b}:\mathcal{A}(a,b)\to\mathcal{B}(T(a),T(b))\) in \(\mathbb{C}\),
subject to the _compatibility with composition and with the identities_ expressed by the commutativity of the following two diagrams,
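For intuition, the definition can be executed in the simplest base: the following is a minimal Haskell sketch, assuming \(\mathbb{C}\) is the Cartesian monoidal category of types \((Type,(,),())\); the names `CCat`, `comp` and `ident` are ours, not the paper's nor any library's.

```haskell
-- A category enriched in (Type, (,), ()): hom-objects are types `hom a b`;
-- the composition law M_{a,b,c} and the identity element j_a are functions.
class CCat hom where
  comp  :: (hom b c, hom a b) -> hom a c  -- M_{a,b,c}
  ident :: () -> hom a a                  -- j_a : E -> A(a,a), with E = ()

-- The prototypical instance: hom-objects are ordinary function types; the
-- associativity and unit axioms reduce to the usual category laws.
instance CCat (->) where
  comp (g, f) = g . f
  ident ()    = id
```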
It is easy to verify that \(\mathbb{C}\)-categories and \(\mathbb{C}\)-functors constitute the category \(\mathbb{C}-Cat\) of all \(\mathbb{C}\)-categories:
the composition \(S\circ T\) of two \(\mathbb{C}\)-functors \(S:\mathcal{B}\to\mathcal{C}\) and \(T:\mathcal{A}\to\mathcal{B}\) is defined by
\[ob(S\circ T):ob(\mathcal{A})\to ob(\mathcal{C}),\ ob(S\circ T)(a)=obS(obT(a)),\]
and
\[(S\circ T)_{a,b}=S_{T(a),T(b)}\circ T_{a,b}\]
in \(\mathbb{C}\), for every pair \(a,b\in ob(\mathcal{A})\); the following two commutative diagrams show that \(S\circ T\) as defined is a \(\mathbb{C}\)-functor provided \(S\) and \(T\) also are,
that is, the equalities
\[(S\circ T)_{a,c}\circ M^{\mathcal{A}}_{a,b,c}=S_{T(a),T(c)}\circ T_{a,c}\circ M^{\mathcal{A}}_{a,b,c}=S_{T(a),T(c)}\circ M^{\mathcal{B}}_{T(a),T(b),T(c)}\circ(T_{b,c}\otimes T_{a,b})=\]
\[=M^{\mathcal{C}}_{S\circ T(a),S\circ T(b),S\circ T(c)}\circ(S_{T(b),T(c)}\otimes S_{T(a),T(b)})\circ(T_{b,c}\otimes T_{a,b})=M^{\mathcal{C}}_{S\circ T(a),S\circ T(b),S\circ T(c)}\circ((S\circ T)_{b,c}\otimes(S\circ T)_{a,b})\]
and
\[(S\circ T)_{a,a}\circ j^{\mathcal{A}}_{a}=S_{T(a),T(a)}\circ T_{a,a}\circ j^{\mathcal{A}}_{a}=S_{T(a),T(a)}\circ j^{\mathcal{B}}_{T(a)}=j^{\mathcal{C}}_{S\circ T(a)}.\]
The unit \(\mathbb{C}\)-functor \(1_{\mathcal{A}}:\mathcal{A}\to\mathcal{A}\) is given by \(ob1_{\mathcal{A}}=1_{ob(\mathcal{A})}\) and \((1_{\mathcal{A}})_{a,b}=1_{\mathcal{A}(a,b)}\), for every pair \(a,b\in ob(\mathcal{A})\);
the two diagrams expressing its compatibility with composition and with the identities, \((1_{\mathcal{A}})_{a,c}\circ M^{\mathcal{A}}_{a,b,c}=M^{\mathcal{A}}_{a,b,c}\circ((1_{\mathcal{A}})_{b,c}\otimes(1_{\mathcal{A}})_{a,b})\) and \((1_{\mathcal{A}})_{a,a}\circ j^{\mathcal{A}}_{a}=j^{\mathcal{A}}_{a}\), commute trivially.

**Proposition 3.1**.: _A \(\mathbb{C}\)-functor \(T:\mathcal{A}\to\mathcal{B}\) is an isomorphism in \(\mathbb{C}-Cat\) if and only if \(obT\) is a bijection and each \(T_{a,b}\) is an isomorphism in \(\mathbb{C}\)._

Proof.: If \(obT\) is a bijection and each \(T_{a,b}\) is an isomorphism, define \(S:\mathcal{B}\to\mathcal{A}\) by \(obS=(obT)^{-1}\) and \(S_{c,d}=T^{-1}_{T^{-1}(c),T^{-1}(d)}\), for every pair \(c,d\in ob(\mathcal{B})\); then \(S\) is a \(\mathbb{C}\)-functor inverse to \(T\), since, for every \(c,d,e\in ob(\mathcal{B})\),
\(T_{T^{-1}(c),T^{-1}(e)}\circ M^{\mathcal{A}}_{T^{-1}(c),T^{-1}(d),T^{-1}(e)}=M^{\mathcal{B}}_{c,d,e}\circ(T_{T^{-1}(d),T^{-1}(e)}\otimes T_{T^{-1}(c),T^{-1}(d)})\)
\(\Leftrightarrow M^{\mathcal{A}}_{T^{-1}(c),T^{-1}(d),T^{-1}(e)}\circ(S_{d,e} \otimes S_{c,d})=S_{c,e}\circ M^{\mathcal{B}}_{c,d,e}\);
\(S_{T(a),T(a)}\circ j^{\mathcal{B}}_{T(a)}=j^{\mathcal{A}}_{a}\Leftrightarrow j ^{\mathcal{A}}_{a}=T^{-1}_{a,a}\circ j^{\mathcal{B}}_{T(a)}\Leftrightarrow T_{ a,a}\circ j^{\mathcal{A}}_{a}=j^{\mathcal{B}}_{T(a)}\), for every \(a\in ob(\mathcal{A})\).
In exactly the same way \(\mathbb{X}\)-categories and \(\mathbb{X}\)-functors constitute the category \(\mathbb{X}-Cat\) of all \(\mathbb{X}\)-categories, with analogous unit morphisms and isomorphisms.
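Continuing the Cartesian sketch above (all names illustrative), a \(\mathbb{C}\)-functor can be packaged as an object map given by a type constructor together with one polymorphic hom-map; composition of \(\mathbb{C}\)-functors is then composition of these maps, matching \((S\circ T)_{a,b}=S_{T(a),T(b)}\circ T_{a,b}\).

```haskell
{-# LANGUAGE RankNTypes #-}

-- The hom-maps T_{a,b} of a C-functor whose object map is the type
-- constructor `t`; compatibility with composition and with identities
-- takes the shape of the usual functor laws.
type CFunctor t hom hom' = forall a b. hom a b -> hom' (t a) (t b)

-- Any Haskell Functor induces a C-functor from (->) to (->):
fmapCF :: Functor t => CFunctor t (->) (->)
fmapCF = fmap
```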
## 4. The derived monoidal adjunction
We will now derive a new monoidal adjunction, from the base monoidal adjunction presented in section 2.
In the following subsection 4.1 the derived adjunction is presented; its monoidal structure will be given in sections 5 and 6.
### Derived adjunction for the category of all \(\mathbb{C}\)-categories
A new adjunction is going to be defined,
\[(\mathbb{F},\mathbb{G},\Theta,\Upsilon):\mathbb{C}-Cat\rightarrow\mathbb{X}-Cat,\]
from the category of all \(\mathbb{C}\)-categories into the category of all \(\mathbb{X}\)-categories.
\(\mathbb{F}(\mathcal{A})\) is the object of \(\mathbb{X}-Cat\) such that \(ob\mathbb{F}(\mathcal{A})=ob(\mathcal{A})\), that is, the objects of \(\mathbb{F}(\mathcal{A})\) are exactly the objects of \(\mathcal{A}\), for every object \(\mathcal{A}\in\mathbb{C}-Cat\);
the hom-object \(\mathbb{F}(\mathcal{A})(a,b)=F(\mathcal{A}(a,b))\in\mathbb{X}\), for every pair \(a,b\in ob(\mathcal{A})\);
the composition law \(M^{\mathbb{F}(\mathcal{A})}_{a,b,c}=FM^{\mathcal{A}}_{a,b,c}\) in \(\mathbb{X}\), for every triple \(a,b,c\in ob(\mathcal{A})\);
the identity element \(j^{\mathbb{F}(\mathcal{A})}_{a}=Fj^{\mathcal{A}}_{a}\), for every \(a\in ob(\mathcal{A})\).
\(\mathbb{F}T:\mathbb{F}(\mathcal{A})\rightarrow\mathbb{F}(\mathcal{B})\) is an \(\mathbb{X}\)-functor from the \(\mathbb{X}\)-category \(\mathbb{F}(\mathcal{A})\) to the \(\mathbb{X}\)-category \(\mathbb{F}(\mathcal{B})\), for every \(\mathbb{C}\)-functor \(T:\mathcal{A}\rightarrow\mathcal{B}\):
the function \(ob\mathbb{F}T:ob\mathbb{F}(\mathcal{A})\to ob\mathbb{F}(\mathcal{B})\) is \(obT:ob(\mathcal{A})\to ob(\mathcal{B})\), that is, \(ob\mathbb{F}T=obT\);
the map \((\mathbb{F}T)_{a,b}:\mathbb{F}(\mathcal{A})(a,b)\rightarrow\mathbb{F}( \mathcal{B})(T(a),T(b))\) is defined by
\[(\mathbb{F}T)_{a,b}=FT_{a,b},\]
for each \(a,b\in ob(\mathcal{A})\).
(1) First, let's check that \(\mathbb{F}\) as defined is really a functor from \(\mathbb{C}-Cat\) into \(\mathbb{X}-Cat\).
(1.1) \(\mathbb{F}(\mathcal{A})\in\mathbb{X}-Cat\), for any \(\mathcal{A}\in\mathbb{C}-Cat\): consider the commutative diagrams which express associativity and unit axioms for \(\mathcal{A}\) (cf. section 3); their \(F\)-image shows that \(\mathbb{F}(\mathcal{A})\) as defined is an \(\mathbb{X}\)-category, since the functor \(F\) is a monoidal morphism.
(1.2) For every \(T:\mathcal{A}\rightarrow\mathcal{B}\) in \(\mathbb{C}-Cat\), \(\mathbb{F}T\) as defined above is an \(\mathbb{X}\)-functor: consider the commutative diagrams which express the
compatibility with composition and with the identities of the \(\mathbb{C}\)-functor \(T\) (cf. section 3); their \(F\)-image shows that \(\mathbb{F}T\) as defined is an \(\mathbb{X}\)-functor, since the functor \(F\) is a monoidal morphism.
(1.3) \(\mathbb{F}\) is a functor if \(\mathbb{F}1_{\mathcal{A}}=1_{\mathbb{F}(\mathcal{A})}\) and \(\mathbb{F}(T^{\prime}\circ T)=\mathbb{F}T^{\prime}\circ\mathbb{F}T\), for every \(\mathcal{A}\in\mathbb{C}-Cat\) and every pair \(T,T^{\prime}\) of composable \(\mathbb{C}\)-functors.
The unit \(\mathcal{V}\)-functor was characterized in general in section 3 just before Proposition 3.1, so that \(ob1_{\mathbb{F}(\mathcal{A})}=1_{ob\mathbb{F}(\mathcal{A})}\), the identity function, and \((1_{\mathbb{F}(\mathcal{A})})_{a,b}=1_{\mathbb{F}(\mathcal{A})(a,b)}\), the unit morphism in \(\mathbb{X}\), for every pair \(a,b\in ob\mathbb{F}(\mathcal{A})\). Hence, \(\mathbb{F}1_{\mathcal{A}}=1_{\mathbb{F}(\mathcal{A})}\), the unit \(\mathbb{X}\)-functor, because \(ob\mathbb{F}1_{\mathcal{A}}=ob1_{\mathcal{A}}=1_{ob(\mathcal{A})}=1_{ob \mathbb{F}(\mathcal{A})}\) and \((\mathbb{F}1_{\mathcal{A}})_{a,b}=F(1_{\mathcal{A}})_{a,b}=F1_{\mathcal{A}(a, b)}=1_{F(\mathcal{A}(a,b))}\).
\(\mathbb{F}(T^{\prime}\circ T)=\mathbb{F}T^{\prime}\circ\mathbb{F}T\), for every pair \(T,T^{\prime}\) of composable \(\mathbb{C}\)-functors, because4\(ob\mathbb{F}(T^{\prime}\circ T)=ob(T^{\prime}\circ T)=obT^{\prime}\circ obT=ob \mathbb{F}T^{\prime}\circ ob\mathbb{F}T\) and \((\mathbb{F}(T^{\prime}\circ T))_{a,b}=F(T^{\prime}\circ T)_{a,b}=F(T^{\prime} _{T(a),T(b)}\circ T_{a,b})=FT^{\prime}_{T(a),T(b)}\circ FT_{a,b}=(\mathbb{F}T^ {\prime})_{T(a),T(b)}\circ(\mathbb{F}T)_{a,b}=(\mathbb{F}T^{\prime}\circ \mathbb{F}T)_{a,b}\).
Footnote 4: Notice that “\(\circ\)” in this paragraph means either the composition of morphisms in \(\mathbb{C}-Cat\), \(\mathbb{X}-Cat\), \(\mathbb{C}\) or \(\mathbb{X}\), either the composition of maps in \(Set\). We leave to the reader the easy understanding of the meaning of each “\(\circ\)” in what follows.
(2) The definition of \(\mathbb{G}:\mathbb{X}-Cat\to\mathbb{C}-Cat\) is completely analogous to that of \(\mathbb{F}\). Therefore, the proof that \(\mathbb{G}\) is really a functor from \(\mathbb{X}-Cat\) into \(\mathbb{C}-Cat\) is similar to the one done for \(\mathbb{F}\) just above in (1).
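In the running Cartesian sketch, \(\mathbb{F}\) (and, symmetrically, \(\mathbb{G}\)) is literally "apply the monoidal functor hom-wise"; `MonoidalF`, `munit`, `mpair` and `Base` are hypothetical names standing in for the base adjunction's data.

```haskell
-- A monoidal endofunctor on (Type, (,), ()) playing the role of F:
-- munit and mpair are the comparison maps for the unit and the tensor.
class Functor f => MonoidalF f where
  munit :: f ()
  mpair :: (f x, f y) -> f (x, y)

-- Change of base: same objects, hom-objects F(A(a,b)), with composition
-- and identities given by the F-images, as in the definition of F above.
newtype Base f hom a b = Base (f (hom a b))

instance (MonoidalF f, CCat hom) => CCat (Base f hom) where
  comp (Base g, Base h) = Base (fmap comp (mpair (g, h)))
  ident ()              = Base (fmap ident munit)
```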
(3) Let \(\mathcal{A}\) be a \(\mathbb{C}\)-category; the unit morphism \(\Theta_{\mathcal{A}}:\mathcal{A}\to\mathbb{G}\mathbb{F}(\mathcal{A})\) is the one such that \(ob\Theta_{\mathcal{A}}\) is the identity function
\[1_{ob(\mathcal{A})}:ob(\mathcal{A})\to ob\mathbb{G}\mathbb{F}(\mathcal{A})= ob\mathbb{F}(\mathcal{A})=ob(\mathcal{A}),\]
and, for every \(a,b\in ob(\mathcal{A})\),
\[(\Theta_{\mathcal{A}})_{a,b}=\eta_{\mathcal{A}(a,b)}:\mathcal{A}(a,b)\to GF (\mathcal{A}(a,b))\]
is the unit morphism for \(\mathcal{A}(a,b)\in\mathbb{C}\) in the base adjunction \(F\dashv G\).
\(\Theta_{\mathcal{A}}\) is a \(\mathbb{C}\)-functor (morphism in \(\mathbb{C}-Cat\)):
(3.1) it is compatible with composition, since
\((\Theta_{\mathcal{A}})_{a,c}\circ M^{\mathcal{A}}_{a,b,c}=\eta_{\mathcal{A}(a,c )}\circ M^{\mathcal{A}}_{a,b,c}\), by definition of \(\Theta\)
\(=GF(M^{\mathcal{A}}_{a,b,c})\circ\eta_{\mathcal{A}(b,c)\otimes\mathcal{A}(a,b)}\), because \(\eta:1_{\mathbb{C}}\to GF\) is natural
\(=M^{\mathbb{GF}(\mathcal{A})}_{a,b,c}\circ\eta_{\mathcal{A}(b,c)\otimes\mathcal{ A}(a,b)}\), by definition of \(\mathbb{F}\) and \(\mathbb{G}\)
\(=M^{\mathbb{GF}(\mathcal{A})}_{a,b,c}\circ(\eta_{\mathcal{A}(b,c)}\otimes\eta _{\mathcal{A}(a,b)})\), by assumption \((C)\) (cf. section 2)
\(=M^{\mathbb{GF}(\mathcal{A})}_{a,b,c}\circ((\Theta_{\mathcal{A}})_{b,c}\otimes( \Theta_{\mathcal{A}})_{a,b})\), by definition of \(\Theta\);
(3.2) it is compatible with the identities, since
\((\Theta_{\mathcal{A}})_{a,a}\circ j^{\mathcal{A}}_{a}=\eta_{\mathcal{A}(a,a)} \circ j^{\mathcal{A}}_{a}\), by definition of \(\Theta\)
\(=GF(j^{\mathcal{A}}_{a})\circ\eta_{E}\), because \(\eta:1_{\mathbb{C}}\to GF\) is natural
\(=j^{\mathbb{GF}(\mathcal{A})}_{a}\circ\eta_{E}\), by definition of \(\mathbb{F}\) and \(\mathbb{G}\)
\(=j_{a}^{\mathbb{GF}(\mathcal{A})}\circ 1_{E}=j_{a}^{\mathbb{GF}(\mathcal{A})}\), by assumption \((D)\) (cf. section 2).
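In the sketch, assuming the unit of the base adjunction is a `pure`-like map whose monoidal compatibilities play the roles of \((C)\) and \((D)\), the hom-components of \(\Theta_{\mathcal{A}}\) are that unit applied hom-wise:

```haskell
-- Theta on a hom-object A(a,b): the component eta_{A(a,b)}.  We borrow
-- Applicative's `pure` as the unit; the analogues of (C) and (D) read
-- mpair (pure x, pure y) == pure (x, y)  and  munit == pure ().
thetaHom :: Applicative f => hom a b -> Base f hom a b
thetaHom = Base . pure
```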
(4) Let \(\mathcal{X}\) be an \(\mathbb{X}\)-category; the counit morphism \(\Upsilon_{\mathcal{X}}:\mathbb{FG}(\mathcal{X})\to\mathcal{X}\) is the one such that \(ob\Upsilon_{\mathcal{X}}\) is the identity function
\[1_{ob(\mathcal{X})}:ob\mathbb{FG}(\mathcal{X})=ob\mathbb{G}(\mathcal{X})=ob( \mathcal{X})\to ob(\mathcal{X}),\]
and, for every \(x,y\in ob(\mathcal{X})\),
\[(\Upsilon_{\mathcal{X}})_{x,y}=\varepsilon_{\mathcal{X}(x,y)}:FG(\mathcal{X}( x,y))\to\mathcal{X}(x,y)\]
is the counit morphism for \(\mathcal{X}(x,y)\in\mathbb{X}\) in the base adjunction \(F\dashv G\).
The proof that \(\Upsilon_{\mathcal{X}}\) is an \(\mathbb{X}\)-functor (morphism in \(\mathbb{X}-Cat\)) is completely analogous to the one for \(\Theta_{\mathcal{A}}\) given just above in (3).
(5) It will now be shown that \(\Theta_{\mathcal{A}}:\mathcal{A}\to\mathbb{GF}(\mathcal{A})\) is a universal arrow from \(\mathcal{A}\) to \(\mathbb{G}\), for every \(\mathcal{A}\in\mathbb{C}-Cat\) (cf. the preceding footnote on the meaning of "\(\circ\)"):
(5.1) consider any \(\mathbb{C}\)-functor \(T:\mathcal{A}\to\mathbb{G}(\mathcal{X})\); if there is an \(\mathbb{X}\)-functor \(U:\mathbb{F}(\mathcal{A})\to\mathcal{X}\) such that \(\mathbb{G}U\circ\Theta_{\mathcal{A}}=T\), then it must be unique, since \(obT=ob(\mathbb{G}U\circ\Theta_{\mathcal{A}})\)
\(=ob\mathbb{G}U\circ ob\Theta_{\mathcal{A}}\), by definition of composition in \(\mathbb{C}-Cat\)
\(=obU\circ 1_{ob(\mathcal{A})}\), by the definitions of \(\mathbb{G}\) and \(\Theta\) (cf. (3))
\(=obU\),
and \(T_{a,b}=GU_{a,b}\circ\eta_{\mathcal{A}(a,b)}\) implies that \(U_{a,b}\) is unique, for \(\eta_{\mathcal{A}(a,b)}\) is a universal arrow from \(\mathcal{A}(a,b)\) to \(G\);
(5.2) let's check finally that \(U\) given in (5.1) is an \(\mathbb{X}\)-functor (a morphism in \(\mathbb{X}-Cat\));
(5.2.1) \(U\) is compatible with composition:
\(G(U_{a,c}\circ FM_{a,b,c}^{\mathcal{A}})\circ\eta_{\mathcal{A}(b,c)\otimes \mathcal{A}(a,b)}=GU_{a,c}\circ(\eta_{\mathcal{A}(a,c)}\circ M_{a,b,c}^{ \mathcal{A}})\), because \(\eta:1_{\mathbb{C}}\to GF\) is natural
\(=T_{a,c}\circ M_{a,b,c}^{\mathcal{A}}\), because \(T=\mathbb{G}U\circ\Theta_{\mathcal{A}}\)
\(=GM_{T(a),T(b),T(c)}^{\mathcal{X}}\circ(T_{b,c}\otimes T_{a,b})\), since \(T\) is a \(\mathbb{C}\)-functor
\(=GM_{T(a),T(b),T(c)}^{\mathcal{X}}\circ((GU_{b,c}\circ\eta_{\mathcal{A}(b,c )})\otimes(GU_{a,b}\circ\eta_{\mathcal{A}(a,b)}))\), because \(T=\mathbb{G}U\circ\Theta_{\mathcal{A}}\)
\(=G(M_{T(a),T(b),T(c)}^{\mathcal{X}}\circ(U_{b,c}\Di U_{a,b}))\circ(\eta_{ \mathcal{A}(b,c)}\otimes\eta_{\mathcal{A}(a,b)})\), because \(\otimes\) is a bifunctor
\(=G(M_{T(a),T(b),T(c)}^{\mathcal{X}}\circ(U_{b,c}\Di U_{a,b}))\circ\eta_{ \mathcal{A}(b,c)\otimes\mathcal{A}(a,b)}\), by assumption (C) (cf. section 2)
\(\Rightarrow\)\(U_{a,c}\circ FM_{a,b,c}^{\mathcal{A}}=M_{T(a),T(b),T(c)}^{\mathcal{X}}\circ(U_{b,c} \Di U_{a,b})\), since \(\eta_{\mathcal{A}(b,c)\otimes\mathcal{A}(a,b)}\) is universal from \(\mathcal{A}(b,c)\otimes\mathcal{A}(a,b)\) to \(G\).
(5.2.2) \(U\) is compatible with the identities:
\(Gj_{T(a)}^{\mathcal{X}}\circ\eta_{E}=T_{a,a}\circ j_{a}^{\mathcal{A}}\circ \eta_{E}\), because \(T\) is a \(\mathbb{C}\)-functor and \(GF(E)=G(I)=E\)
\(=GU_{a,a}\circ(\eta_{\mathcal{A}(a,a)}\circ j_{a}^{\mathcal{A}})\circ\eta_{E}\)
\(=GU_{a,a}\circ(Gj^{\mathbb{F}(\mathcal{A})}_{a}\circ\eta_{E})\circ\eta_{E}\), because \(\eta:1_{\mathbb{C}}\to GF\) is natural
\(=GU_{a,a}\circ Gj^{\mathbb{F}(\mathcal{A})}_{a}\circ\eta_{E}\), because \(\eta_{E}=1_{E}\) by assumption (D) (cf. section 2)
\(\Rightarrow j^{\mathcal{X}}_{T(a)}=U_{a,a}\circ j^{\mathbb{F}(\mathcal{A})}_{a}\), since \(\eta_{E}\) is a universal arrow from \(E\) to \(G\).
(6) It will now be shown that \(\Upsilon_{\mathcal{X}}:\mathbb{F}\mathbb{G}(\mathcal{X})\rightarrow\mathcal{X}\) is a universal arrow from \(\mathbb{F}\) to \(\mathcal{X}\), for every \(\mathcal{X}\in\mathbb{X}-Cat\):
(6.1) consider any \(\mathbb{X}\)-functor \(T:\mathbb{F}(\mathcal{A})\rightarrow\mathcal{X}\); if there is a \(\mathbb{C}\)-functor \(V:\mathcal{A}\rightarrow\mathbb{G}(\mathcal{X})\) such that \(\Upsilon_{\mathcal{X}}\circ\mathbb{F}V=T\), it must be unique, since
\(obT=ob(\Upsilon_{\mathcal{X}}\circ\mathbb{F}V)=ob\Upsilon_{\mathcal{X}}\circ ob \mathbb{F}V\), by definition of composition in \(\mathbb{X}-Cat\)
\(=obV\), by the definitions of \(\mathbb{F}\) and \(\Upsilon\) (cf. (4)),
and \(T_{a,b}=\varepsilon_{\mathcal{X}(T(a),T(b))}\circ FV_{a,b}\) implies that \(V_{a,b}\) is unique, for \(\varepsilon_{\mathcal{X}(T(a),T(b))}\) is a universal arrow from \(F\) to \(\mathcal{X}(T(a),T(b))\);
(6.2) let's check finally that \(V\) given in (6.1) is an \(\mathbb{X}\)-functor (a morphism in \(\mathbb{X}-Cat\));
(6.2.1) \(V\) is compatible with composition:
\(\varepsilon_{\mathcal{X}(T(a),T(c))}\circ F(V_{a,c}\circ M^{\mathcal{A}}_{a,b,c})=T_{a,c}\circ FM^{\mathcal{A}}_{a,b,c}\), because \(T=\Upsilon_{\mathcal{X}}\circ\mathbb{F}V\)
\(=M^{\mathcal{X}}_{T(a),T(b),T(c)}\circ(T_{b,c}\Diamond T_{a,b})\), because \(T\) is an \(\mathbb{X}\)-functor
\(=M^{\mathcal{X}}_{T(a),T(b),T(c)}\circ((\varepsilon_{\mathcal{X}(T(b),T(c))} \circ FV_{b,c})\Diamond(\varepsilon_{\mathcal{X}(T(a),T(b))}\circ FV_{a,b}))\), because \(T=\Upsilon_{\mathcal{X}}\circ\mathbb{F}V\)
\(=M^{\mathcal{X}}_{T(a),T(b),T(c)}\circ((\varepsilon_{\mathcal{X}(T(b),T(c))} \Diamond\varepsilon_{\mathcal{X}(T(a),T(b))})\circ(FV_{b,c}\Diamond FV_{a,b}))\), because \(\Diamond\) is a bifunctor
\(=M^{\mathcal{X}}_{T(a),T(b),T(c)}\circ(\varepsilon_{\mathcal{X}(T(b),T(c)) \Diamond\mathcal{X}(T(a),T(b))}\circ F(V_{b,c}\Diamond V_{a,b}))\), by assumption (C) and Proposition 2.1
\(=\varepsilon_{\mathcal{X}(T(a),T(c))}\circ FG(M^{\mathcal{X}}_{T(a),T(b),T(c)} )\circ F(V_{b,c}\Diamond V_{a,b})\), because \(\varepsilon:FG\to 1_{\mathbb{X}}\) is natural
\(=\varepsilon_{\mathcal{X}(T(a),T(c))}\circ F(GM^{\mathcal{X}}_{T(a),T(b),T(c)} \circ(V_{b,c}\Diamond V_{a,b}))\)
\(\Rightarrow\)\(V_{a,c}\circ M^{\mathcal{A}}_{a,b,c}\)\(=GM^{\mathcal{X}}_{T(a),T(b),T(c)}\circ(V_{b,c}\Diamond V_{a,b})\), since \(\varepsilon_{\mathcal{X}(T(a),T(c))}\) is universal from \(F\) to \(\mathcal{X}(T(a),T(c))\).
(6.2.2) \(V\) is compatible with the identities:
\(\varepsilon_{\mathcal{X}(T(a),T(a))}\circ FGj^{\mathcal{X}}_{T(a)}=j^{ \mathcal{X}}_{T(a)}\circ\varepsilon_{FG(I)}\), since \(\varepsilon:FG\to 1_{\mathbb{X}}\) is natural
\(=j^{\mathcal{X}}_{T(a)}\circ\varepsilon_{I}\), because \(FG(I)=F(E)=I\)
\(=j^{\mathcal{X}}_{T(a)}\), by assumption (D) and Proposition 2.1
\(=T_{a,a}\circ j^{\mathbb{F}(\mathcal{A})}_{a}=T_{a,a}\circ Fj^{\mathcal{A}}_{a}\), because \(T\) is an \(\mathbb{X}\)-functor and by definition of \(\mathbb{F}\)
\(=\varepsilon_{\mathcal{X}(T(a),T(a))}\circ(FV_{a,a}\circ Fj^{\mathcal{A}}_{a})\)
\(\Rightarrow Gj^{\mathcal{X}}_{T(a)}(=j^{\mathbb{G}(\mathcal{X})}_{T(a)})=V_{a,a }\circ j^{\mathcal{A}}_{a}\), since \(\varepsilon_{\mathcal{X}(T(a),T(a))}\) is a universal arrow from \(F\) to \(\mathcal{X}(T(a),T(a))\).
(7) It is easy to check that \(\mathbb{G}\Upsilon\cdot\Theta\mathbb{G}=\mathbb{G}\) and \(\Upsilon\mathbb{F}\cdot\mathbb{F}\Theta=\mathbb{F}\), using the assumptions \(G\varepsilon\cdot\eta G=G\) and \(\varepsilon F\cdot F\eta=F\).
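A small usage example for the sketch: `Maybe` is monoidal for \((,)\) via the comparisons below, and `pure = Just` satisfies the analogues of \((C)\) and \((D)\); `Base Maybe (->)` is then the enriched category whose hom-objects are possibly undefined maps.

```haskell
instance MonoidalF Maybe where
  munit                  = Just ()
  mpair (Just x, Just y) = Just (x, y)
  mpair _                = Nothing
```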
### The base monoidal reflection
The results in this paper are to be applied in categorical Galois theory (cf. [3]) to admissible6 reflections of full subcategories. With this in mind, a special case of the base monoidal adjunction will be considered now.
Footnote 6: Also called semi-left-exact as introduced in [2] (cf. [1]).
If \(\mathbb{X}\) is reflective in \(\mathbb{C}\), that is, \(\mathbb{X}\) is a subcategory of \(\mathbb{C}\), \(G\) is the inclusion functor and \(F\) is the reflector, it is obvious that \(E=I\), and that the bifunctor \(\Diamond\) and the natural transformations \(\mathfrak{a}\), \(t\) and \(r\) are just the respective restrictions of \(\otimes\), \(\alpha\), \(\gamma\) and \(\rho\).
It is well known that, provided the inclusion \(G\) is also a full functor, every counit morphism \(\varepsilon_{X}:FG(X)\to X\) is an isomorphism, \(X\in\mathbb{X}\) (cf. [7, SSV.3]). Furthermore, if \(\mathbb{X}\) is a replete subcategory of \(\mathbb{C}\), that is, \(\mathbb{X}\) contains any object of \(\mathbb{C}\) which is isomorphic to some other object of \(\mathbb{X}\) (cf. [1, SS3.1]), then this allows one to choose the unit \(\eta:1_{\mathbb{C}}\to GF\) so that the counit is the identity \(\varepsilon:FG=1_{\mathbb{X}}\)\((G\varepsilon_{X}\circ\eta_{G(X)}=1_{G(X)}\Leftrightarrow\eta_{G(X)}=\varepsilon_{X}^{-1}\), provided \(G\) is an inclusion; cf. Theorem 2(ii) in [7, SSV.1]).
**Definition 4.1**.: An adjunction as in section 2, satisfying \((A)\) and \((B)\), will be called _base monoidal reflection_ if \(\mathbb{X}\) is a full replete subcategory of \(\mathbb{C}\) and \(G:\mathbb{X}\subseteq\mathbb{C}\) is the inclusion functor.
In the following Proposition 4.1, it is shown that from a base monoidal reflection another base monoidal reflection is derived, using the process introduced in subsection 4.1.
**Proposition 4.1**.: _Consider a base monoidal reflection as in Definition 4.1, determined by the (symmetric) monoidal category \((\mathbb{C},\otimes,E,\alpha,\gamma,\rho)\) and the reflection \((F,G,\eta,\varepsilon):\mathbb{C}\to\mathbb{X}\). Then, there is a derived adjunction_
\[(\mathbb{F},\mathbb{G},\Theta,\Upsilon):\mathbb{C}-Cat\to\mathbb{X}-Cat,\]
_such that \(\mathbb{G}\) is the inclusion functor and \(\mathbb{X}-Cat\) is a full replete subcategory of \(\mathbb{C}-Cat\)._
Proof.: In order to check that there is a derived adjunction, one has to show that both conditions \((C)\) and \((D)\) (cf. section 2) hold for the base adjunction:
\((C)\) holds because, for every \(X,Y\in\mathbb{X}\),
\(\varepsilon_{X}\Diamond\varepsilon_{Y}=\varepsilon_{X\Diamond Y}\Leftrightarrow \varepsilon_{X}\otimes\varepsilon_{Y}=\varepsilon_{X\Diamond Y}\), since \(\Diamond\) is the restriction of \(\otimes\)
\(\Leftrightarrow 1_{X}\otimes 1_{Y}=1_{X\Diamond Y}\), since \(\varepsilon:FG=1_{\mathbb{X}}\),
which is true simply because \(\otimes\) is a bifunctor, implying condition \((C)\) by the duality given in Proposition 2.1\((i)\);
\((D)\) holds since \(\eta_{E}=1_{E}\Leftrightarrow\varepsilon_{I}=1_{I}\), by Proposition 2.1\((ii)\), and \(\varepsilon:FG=1_{\mathbb{X}}\).
As \(ob\Upsilon_{\mathcal{X}}=1_{ob(\mathcal{X})}\) and \((\Upsilon_{\mathcal{X}})_{a,b}=\varepsilon_{\mathcal{X}(a,b)}=1_{\mathcal{X}( a,b)}\), for every \(\mathcal{X}\in\mathbb{X}-Cat\) and for every pair \(a,b\in ob(\mathcal{X})\), it follows that \(\Upsilon:\mathbb{F}\mathbb{G}=1_{\mathbb{X}-Cat}\)
is the identity.
It is also obvious that \(\mathbb{G}\) is the inclusion functor:
\(ob\mathbb{G}(\mathcal{X})=ob(\mathcal{X})\), \(\mathbb{G}(\mathcal{X})(a,b)=G(\mathcal{X}(a,b))=\mathcal{X}(a,b)\), \(j_{a}^{\mathbb{G}(\mathcal{X})}=Gj_{a}^{\mathcal{X}}=j_{a}^{\mathcal{X}}\), and \(M_{a,b,c}^{\mathbb{G}(\mathcal{X})}=GM_{a,b,c}^{\mathcal{X}}=M_{a,b,c}^{ \mathcal{X}}\), for every \(\mathcal{X}\in\mathbb{X}-Cat\) and \(a,b,c\in ob(\mathcal{X})\);
for every \(\mathbb{X}\)-functor \(T:\mathcal{X}\to\mathcal{Y}\), \(ob\mathbb{G}T=obT\) and \((\mathbb{G}T)_{x,y}=GT_{x,y}=T_{x,y}:\mathcal{X}(x,y)\to\mathcal{Y}(T(x),T(y))\), for every pair \(x,y\in\mathcal{X}\);
so that, if \(\mathbb{G}T=\mathbb{G}S\) then necessarily \(T=S\).
It remains to show that \(\mathbb{X}-Cat\) is a replete subcategory of \(\mathbb{C}-Cat\), which follows from the characterization of isomorphisms in \(\mathbb{C}-Cat\) (cf. Proposition 3.1 in section 3):
if \(\mathcal{A}\in\mathbb{C}-Cat\) is isomorphic to \(\mathcal{X}\in\mathbb{X}-Cat\), then there is a \(\mathbb{C}\)-functor \(T:\mathcal{A}\to\mathcal{X}\) such that \(T_{a,b}:A(a,b)\to X(T(a),T(b))\) is an isomorphism in \(\mathbb{C}\), for every pair \(a,b\in ob(\mathcal{A})\);
as \(\mathbb{X}\) is replete in \(\mathbb{C}\), this means that \(A(a,b)\in\mathbb{X}\), for every pair \(a,b\in ob(\mathcal{A})\), which implies that \(\mathcal{A}\in\mathbb{X}-Cat\) by definition of \(\mathbb{X}-Cat\) (remark that \(M_{a,b,c}^{\mathcal{A}}:\mathcal{A}(b,c)\otimes\mathcal{A}(a,b)=\mathcal{A}(b, c)\Diamond\mathcal{A}(a,b)\to\mathcal{A}(a,c)\) and \(j_{a}^{\mathcal{A}}:E=I\to\mathcal{A}(a,a)\) are morphisms in \(\mathbb{X}\), by the fullness of \(G:\mathbb{X}\subseteq\mathbb{C}\)).
## 5. A monoidal structure for the derived adjunction
In this section, the symmetric monoidal category
\[\mathbb{C}-Cat=(\mathbb{C}-Cat,\bigcirc,\mathfrak{E},\wedge,\Gamma,\Omega),\]
will be presented, that is, a monoidal structure for the category of all \(\mathbb{C}\)-categories, giving the definition of all its items, and proving along the way that it obeys the axioms (cf. the first four sections of chapter 1 in [6]).
### A bifunctor for \(\mathbb{C}-Cat\)
Consider the (bi)functor
\[\bigcirc:\mathbb{C}-Cat\times\mathbb{C}-Cat\to\mathbb{C}-Cat,\]
such that \(T\bigcirc S:\mathcal{A}\bigcirc\mathcal{B}\to\mathcal{A}^{\prime}\bigcirc \mathcal{B}^{\prime}\) is the \(\mathbb{C}\)-functor image of \((T,S):(\mathcal{A},\mathcal{B})\to(\mathcal{A}^{\prime},\mathcal{B}^{\prime})\).
(I) Definition of the \(\mathbb{C}\)-category \(\mathcal{A}\bigcirc\mathcal{B}\), for any pair \(\mathcal{A},\mathcal{B}\in\mathbb{C}-Cat\):
\(ob(\mathcal{A}\bigcirc\mathcal{B})=ob(\mathcal{A})\times ob(\mathcal{B})\), the set of objects of a tensor product \(\mathcal{A}\bigcirc\mathcal{B}\) is the cartesian product of the sets of objects of the two \(\mathbb{C}\)-categories;
\(\mathcal{A}\bigcirc\mathcal{B}((a,b),(\bar{a},\bar{b}))=\mathcal{A}(a,\bar{a} )\otimes\mathcal{B}(b,\bar{b})\), a hom-object of the new tensor product is the old tensor product of the corresponding hom-objects
for the two \(\mathbb{C}\)-categories;
the composition law
\[M^{\mathcal{A}\bigcirc\mathcal{B}}_{(a,b),(\bar{a},\bar{b}),(\bar{\bar{a}},\bar{ \bar{b}})}:\mathcal{A}\bigcirc\mathcal{B}((\bar{a},\bar{b}),(\bar{\bar{a}},\bar{ \bar{b}}))\otimes\mathcal{A}\bigcirc\mathcal{B}((a,b),(\bar{a},\bar{b})) \rightarrow\mathcal{A}\bigcirc\mathcal{B}((a,b),(\bar{\bar{a}},\bar{\bar{b}}))\]
is defined as
\[M^{\mathcal{A}\bigcirc\mathcal{B}}_{(a,b),(\bar{a},\bar{b}),(\bar{\bar{a}},\bar{\bar{b}})}=(M^{\mathcal{A}}_{a,\bar{a},\bar{\bar{a}}}\otimes M^{\mathcal{B}}_{b,\bar{b},\bar{\bar{b}}})\circ m_{\mathcal{A}(\bar{a},\bar{\bar{a}}),\mathcal{B}(\bar{b},\bar{\bar{b}}),\mathcal{A}(a,\bar{a}),\mathcal{B}(b,\bar{b})}:\]
\[(\mathcal{A}(\bar{a},\bar{\bar{a}})\otimes\mathcal{B}(\bar{b},\bar{\bar{b}}))\otimes(\mathcal{A}(a,\bar{a})\otimes\mathcal{B}(b,\bar{b}))\to(\mathcal{A}(\bar{a},\bar{\bar{a}})\otimes\mathcal{A}(a,\bar{a}))\otimes(\mathcal{B}(\bar{b},\bar{\bar{b}})\otimes\mathcal{B}(b,\bar{b}))\to\mathcal{A}(a,\bar{\bar{a}})\otimes\mathcal{B}(b,\bar{\bar{b}}),\]
where \(m\) is the natural isomorphism whose components are7
Footnote 7: Remark that the results outside this subsection 5.1 do not need the symmetry in the monoidal structure; cf. [6, I.1.4].
\(m_{A,B,C,D}=\alpha_{A,C,B\otimes D}^{-1}\circ(1_{A}\otimes\alpha_{C,B,D}) \circ(1_{A}\otimes(\gamma_{B,C}\otimes 1_{D}))\circ\)
\(\circ(1_{A}\otimes\alpha_{B,C,D}^{-1})\circ\alpha_{A,B,C\otimes D},\)
for any quadruple \(A,B,C,D\in\mathbb{C};\)
the unit law
\[j^{\mathcal{A}\bigcirc\mathcal{B}}_{(a,b)}:E\rightarrow\mathcal{A}\bigcirc \mathcal{B}((a,b),(a,b))\]
is defined as \(j^{\mathcal{A}\bigcirc\mathcal{B}}_{(a,b)}=(j^{\mathcal{A}}_{a}\otimes j^{\mathcal{B}}_{b})\circ\rho_{E}^{-1}:E\to E\otimes E\rightarrow\mathcal{A}(a,a)\otimes\mathcal{B}(b,b).\)
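In the Cartesian sketch, \(m\) is a mere re-pairing, and the composition and unit laws of \(\mathcal{A}\bigcirc\mathcal{B}\) act componentwise; `m4`, `compTensor` and `identTensor` are our illustrative names.

```haskell
-- m_{A,B,C,D} : (A x B) x (C x D) -> (A x C) x (B x D), a re-pairing.
m4 :: ((a, b), (c, d)) -> ((a, c), (b, d))
m4 ((x, y), (z, w)) = ((x, z), (y, w))

-- The composition law (M^A (x) M^B) o m of the tensor product:
compTensor :: (CCat homA, CCat homB)
           => ((homA a' a'', homB b' b''), (homA a a', homB b b'))
           -> (homA a a'', homB b b'')
compTensor x = let ((g, f), (s, r)) = m4 x in (comp (g, f), comp (s, r))

-- The unit law (j^A (x) j^B) o rho_E^{-1}, where rho_E^{-1} () = ((), ()):
identTensor :: (CCat homA, CCat homB) => () -> (homA a a, homB b b)
identTensor u = (ident u, ident u)
```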
(II) Definition of the \(\mathbb{C}\)-functor \(T\bigcirc S:\mathcal{A}\bigcirc\mathcal{B}\rightarrow\mathcal{A}^{\prime} \bigcirc\mathcal{B}^{\prime}\), for any pair \(T:\mathcal{A}\rightarrow\mathcal{A}^{\prime}\), \(S:\mathcal{B}\rightarrow\mathcal{B}^{\prime}\) in \(\mathbb{C}-Cat\):
\(T\bigcirc S\) consists of a function \(obT\bigcirc S=obT\times obS\), which is the cartesian product of the two object functions for the two \(\mathbb{C}\)-functors, together with the map
\[T\bigcirc S_{(a,b),(\bar{a},\bar{b})}=T_{a,\bar{a}}\otimes S_{b,\bar{b}}: \mathcal{A}(a,\bar{a})\otimes\mathcal{B}(b,\bar{b})\rightarrow\mathcal{A}^{ \prime}(T(a),T(\bar{a}))\otimes\mathcal{B}^{\prime}(S(b),S(\bar{b})),\]
for each pair \((a,b),(\bar{a},\bar{b})\in ob(\mathcal{A}\bigcirc\mathcal{B})\).
We have to check that:
(1) \(\mathcal{A}\bigcirc\mathcal{B}\) as defined is in fact a \(\mathbb{C}\)-category, that is,
(1.1) the associativity axiom, and
(1.2) the unit axioms hold; and then that,
(2) \(T\bigcirc S:\mathcal{A}\bigcirc\mathcal{B}\rightarrow\mathcal{A}^{\prime} \bigcirc\mathcal{B}^{\prime}\) as defined is in fact a morphism in \(\mathbb{C}-Cat\) (a \(\mathbb{C}\)-functor), that is,
(2.1) there is compatibility with composition, and
(2.2) with the identities; finally, that
(3) \(\bigcirc\) is a (bi)functor, that is,
(3.1) \(\bigcirc\) preserves the identities, and
(3.2) \(\bigcirc\) preserves the composition.
(1) \(\mathcal{A}\bigcirc\mathcal{B}\in\mathbb{C}-Cat\):
(1.1) Is the following equation true?
\[\begin{array}{l}M^{\mathcal{A}\bigcirc\mathcal{B}}_{(a,b),(\bar{\bar{a}},\bar{\bar{b}}),(\bar{\bar{\bar{a}}},\bar{\bar{\bar{b}}})}\circ(1_{\mathcal{A}\bigcirc\mathcal{B}((\bar{\bar{a}},\bar{\bar{b}}),(\bar{\bar{\bar{a}}},\bar{\bar{\bar{b}}}))}\otimes M^{\mathcal{A}\bigcirc\mathcal{B}}_{(a,b),(\bar{a},\bar{b}),(\bar{\bar{a}},\bar{\bar{b}})})\circ\alpha_{\mathcal{A}\bigcirc\mathcal{B}((\bar{\bar{a}},\bar{\bar{b}}),(\bar{\bar{\bar{a}}},\bar{\bar{\bar{b}}})),\mathcal{A}\bigcirc\mathcal{B}((\bar{a},\bar{b}),(\bar{\bar{a}},\bar{\bar{b}})),\mathcal{A}\bigcirc\mathcal{B}((a,b),(\bar{a},\bar{b}))}=\\ =M^{\mathcal{A}\bigcirc\mathcal{B}}_{(a,b),(\bar{a},\bar{b}),(\bar{\bar{\bar{a}}},\bar{\bar{\bar{b}}})}\circ(M^{\mathcal{A}\bigcirc\mathcal{B}}_{(\bar{a},\bar{b}),(\bar{\bar{a}},\bar{\bar{b}}),(\bar{\bar{\bar{a}}},\bar{\bar{\bar{b}}})}\otimes 1_{\mathcal{A}\bigcirc\mathcal{B}((a,b),(\bar{a},\bar{b}))})\end{array}\]
which is equivalent to the equation
**(i)**\((M^{\mathcal{A}}_{a,\bar{\bar{a}},\bar{\bar{\bar{a}}}}\otimes M^{\mathcal{B}}_{b,\bar{\bar{b}},\bar{\bar{\bar{b}}}})\circ m_{\mathcal{A}(\bar{\bar{a}},\bar{\bar{\bar{a}}}),\mathcal{B}(\bar{\bar{b}},\bar{\bar{\bar{b}}}),\mathcal{A}(a,\bar{\bar{a}}),\mathcal{B}(b,\bar{\bar{b}})}\circ(1_{\mathcal{A}(\bar{\bar{a}},\bar{\bar{\bar{a}}})\otimes\mathcal{B}(\bar{\bar{b}},\bar{\bar{\bar{b}}})}\otimes((M^{\mathcal{A}}_{a,\bar{a},\bar{\bar{a}}}\otimes M^{\mathcal{B}}_{b,\bar{b},\bar{\bar{b}}})\circ m_{\mathcal{A}(\bar{a},\bar{\bar{a}}),\mathcal{B}(\bar{b},\bar{\bar{b}}),\mathcal{A}(a,\bar{a}),\mathcal{B}(b,\bar{b})}))\circ\alpha_{\mathcal{A}(\bar{\bar{a}},\bar{\bar{\bar{a}}})\otimes\mathcal{B}(\bar{\bar{b}},\bar{\bar{\bar{b}}}),\mathcal{A}(\bar{a},\bar{\bar{a}})\otimes\mathcal{B}(\bar{b},\bar{\bar{b}}),\mathcal{A}(a,\bar{a})\otimes\mathcal{B}(b,\bar{b})}=\)

\[=(M^{\mathcal{A}}_{a,\bar{a},\bar{\bar{\bar{a}}}}\otimes M^{\mathcal{B}}_{b,\bar{b},\bar{\bar{\bar{b}}}})\circ m_{\mathcal{A}(\bar{a},\bar{\bar{\bar{a}}}),\mathcal{B}(\bar{b},\bar{\bar{\bar{b}}}),\mathcal{A}(a,\bar{a}),\mathcal{B}(b,\bar{b})}\circ(((M^{\mathcal{A}}_{\bar{a},\bar{\bar{a}},\bar{\bar{\bar{a}}}}\otimes M^{\mathcal{B}}_{\bar{b},\bar{\bar{b}},\bar{\bar{\bar{b}}}})\circ m_{\mathcal{A}(\bar{\bar{a}},\bar{\bar{\bar{a}}}),\mathcal{B}(\bar{\bar{b}},\bar{\bar{\bar{b}}}),\mathcal{A}(\bar{a},\bar{\bar{a}}),\mathcal{B}(\bar{b},\bar{\bar{b}})})\otimes 1_{\mathcal{A}(a,\bar{a})\otimes\mathcal{B}(b,\bar{b})});\]
remark that, as \(\mathcal{A}\) and \(\mathcal{B}\) are \(\mathbb{C}\)-categories, the associativity axioms for each are the following two equations, for every \(a,\bar{a},\bar{\bar{a}},\bar{\bar{\bar{a}}}\in\mathcal{A}\) and \(b,\bar{b},\bar{\bar{b}},\bar{\bar{\bar{b}}}\in\mathcal{B}\),
\[M^{\mathcal{A}}_{a,\bar{\bar{a}},\bar{\bar{\bar{a}}}}\circ(1_{\mathcal{A}(\bar{\bar{a}},\bar{\bar{\bar{a}}})}\otimes M^{\mathcal{A}}_{a,\bar{a},\bar{\bar{a}}})\circ\alpha_{\mathcal{A}(\bar{\bar{a}},\bar{\bar{\bar{a}}}),\mathcal{A}(\bar{a},\bar{\bar{a}}),\mathcal{A}(a,\bar{a})}=M^{\mathcal{A}}_{a,\bar{a},\bar{\bar{\bar{a}}}}\circ(M^{\mathcal{A}}_{\bar{a},\bar{\bar{a}},\bar{\bar{\bar{a}}}}\otimes 1_{\mathcal{A}(a,\bar{a})})\]
and
\[M^{\mathcal{B}}_{b,\bar{\bar{b}},\bar{\bar{\bar{b}}}}\circ(1_{\mathcal{B}(\bar{\bar{b}},\bar{\bar{\bar{b}}})}\otimes M^{\mathcal{B}}_{b,\bar{b},\bar{\bar{b}}})\circ\alpha_{\mathcal{B}(\bar{\bar{b}},\bar{\bar{\bar{b}}}),\mathcal{B}(\bar{b},\bar{\bar{b}}),\mathcal{B}(b,\bar{b})}=M^{\mathcal{B}}_{b,\bar{b},\bar{\bar{\bar{b}}}}\circ(M^{\mathcal{B}}_{\bar{b},\bar{\bar{b}},\bar{\bar{\bar{b}}}}\otimes 1_{\mathcal{B}(b,\bar{b})}),\]
which imply, by tensoring,
**(ii)**\((M^{\mathcal{A}}_{a,\bar{\bar{a}},\bar{\bar{\bar{a}}}}\otimes M^{\mathcal{B}}_{b,\bar{\bar{b}},\bar{\bar{\bar{b}}}})\circ((1_{\mathcal{A}(\bar{\bar{a}},\bar{\bar{\bar{a}}})}\otimes M^{\mathcal{A}}_{a,\bar{a},\bar{\bar{a}}})\otimes(1_{\mathcal{B}(\bar{\bar{b}},\bar{\bar{\bar{b}}})}\otimes M^{\mathcal{B}}_{b,\bar{b},\bar{\bar{b}}}))\circ(\alpha_{\mathcal{A}(\bar{\bar{a}},\bar{\bar{\bar{a}}}),\mathcal{A}(\bar{a},\bar{\bar{a}}),\mathcal{A}(a,\bar{a})}\otimes\)

\[\alpha_{\mathcal{B}(\bar{\bar{b}},\bar{\bar{\bar{b}}}),\mathcal{B}(\bar{b},\bar{\bar{b}}),\mathcal{B}(b,\bar{b})})=(M^{\mathcal{A}}_{a,\bar{a},\bar{\bar{\bar{a}}}}\otimes M^{\mathcal{B}}_{b,\bar{b},\bar{\bar{\bar{b}}}})\circ((M^{\mathcal{A}}_{\bar{a},\bar{\bar{a}},\bar{\bar{\bar{a}}}}\otimes 1_{\mathcal{A}(a,\bar{a})})\otimes(M^{\mathcal{B}}_{\bar{b},\bar{\bar{b}},\bar{\bar{\bar{b}}}}\otimes 1_{\mathcal{B}(b,\bar{b})}));\]
the diagram corresponding to **(ii)**, which is a pentagon in which the top edge is \(\alpha_{\mathcal{A}(\bar{\bar{a}},\bar{\bar{a}}),\mathcal{A}(\bar{\bar{a}}, \bar{\bar{a}}),\mathcal{A}(a,\bar{a})}\otimes\alpha_{\mathcal{B}(\bar{\bar{b}}, \bar{\bar{b}}),\mathcal{B}(\bar{b},\bar{b}),\mathcal{B}(b,\bar{b})}\), can be extended upwards by the commutative rectangle (whose down edge coincides with the top edge \(\alpha_{\mathcal{A}(\bar{\bar{a}},\bar{\bar{a}}),\mathcal{A}(\bar{\bar{a}}, \bar{\bar{a}}),\mathcal{A}(a,\bar{a})}\otimes\alpha_{\mathcal{B}(\bar{\bar{b}}, \bar{\bar{b}}),\mathcal{B}(\bar{b},\bar{b}),\mathcal{B}(b,\bar{b})}\) of **(ii)**) corresponding to the equation
**(iii)**\((\alpha_{A,C,U}\otimes\alpha_{B,D,V})\circ m_{A\otimes C,B\otimes D,U,V}\circ(m_{A,B,C,D}\otimes 1_{U\otimes V})=m_{A,B,C\otimes U,D\otimes V}\circ(1_{A\otimes B}\otimes m_{C,D,U,V})\circ\alpha_{A\otimes B,C\otimes D,U\otimes V}\), instantiated at \(A=\mathcal{A}(\bar{\bar{a}},\bar{\bar{\bar{a}}})\), \(B=\mathcal{B}(\bar{\bar{b}},\bar{\bar{\bar{b}}})\), \(C=\mathcal{A}(\bar{a},\bar{\bar{a}})\), \(D=\mathcal{B}(\bar{b},\bar{\bar{b}})\), \(U=\mathcal{A}(a,\bar{a})\), \(V=\mathcal{B}(b,\bar{b})\), which holds since every diagram of natural transformations commutes, provided each arrow of which is obtained by repeatedly
applying the functor \(\otimes\) to instances of \(\alpha\), \(\gamma\), \(\rho\), their inverses and \(1\) (Cf. [7, SSVII] for a precise formulation);
the two equations
\[\begin{array}{l}m_{\mathcal{A}(\bar{\bar{a}},\bar{\bar{\bar{a}}})\otimes\mathcal{A}(\bar{a},\bar{\bar{a}}),\mathcal{A}(a,\bar{a}),\mathcal{B}(\bar{\bar{b}},\bar{\bar{\bar{b}}})\otimes\mathcal{B}(\bar{b},\bar{\bar{b}}),\mathcal{B}(b,\bar{b})}\circ m_{\mathcal{A}(\bar{\bar{a}},\bar{\bar{\bar{a}}})\otimes\mathcal{A}(\bar{a},\bar{\bar{a}}),\mathcal{B}(\bar{\bar{b}},\bar{\bar{\bar{b}}})\otimes\mathcal{B}(\bar{b},\bar{\bar{b}}),\mathcal{A}(a,\bar{a}),\mathcal{B}(b,\bar{b})}\circ\\ \circ(m_{\mathcal{A}(\bar{\bar{a}},\bar{\bar{\bar{a}}}),\mathcal{B}(\bar{\bar{b}},\bar{\bar{\bar{b}}}),\mathcal{A}(\bar{a},\bar{\bar{a}}),\mathcal{B}(\bar{b},\bar{\bar{b}})}\otimes 1_{\mathcal{A}(a,\bar{a})\otimes\mathcal{B}(b,\bar{b})})=m_{\mathcal{A}(\bar{\bar{a}},\bar{\bar{\bar{a}}}),\mathcal{B}(\bar{\bar{b}},\bar{\bar{\bar{b}}}),\mathcal{A}(\bar{a},\bar{\bar{a}}),\mathcal{B}(\bar{b},\bar{\bar{b}})}\otimes 1_{\mathcal{A}(a,\bar{a})\otimes\mathcal{B}(b,\bar{b})}\end{array}\]
and
\[\begin{array}{l}m_{\mathcal{A}(\bar{a},\bar{\bar{\bar{a}}}),\mathcal{B}(\bar{b},\bar{\bar{\bar{b}}}),\mathcal{A}(a,\bar{a}),\mathcal{B}(b,\bar{b})}\circ((M^{\mathcal{A}}_{\bar{a},\bar{\bar{a}},\bar{\bar{\bar{a}}}}\otimes M^{\mathcal{B}}_{\bar{b},\bar{\bar{b}},\bar{\bar{\bar{b}}}})\otimes(1_{\mathcal{A}(a,\bar{a})}\otimes 1_{\mathcal{B}(b,\bar{b})}))\circ\\ \circ m_{\mathcal{A}(\bar{\bar{a}},\bar{\bar{\bar{a}}})\otimes\mathcal{A}(\bar{a},\bar{\bar{a}}),\mathcal{A}(a,\bar{a}),\mathcal{B}(\bar{\bar{b}},\bar{\bar{\bar{b}}})\otimes\mathcal{B}(\bar{b},\bar{\bar{b}}),\mathcal{B}(b,\bar{b})}=(M^{\mathcal{A}}_{\bar{a},\bar{\bar{a}},\bar{\bar{\bar{a}}}}\otimes 1_{\mathcal{A}(a,\bar{a})})\otimes(M^{\mathcal{B}}_{\bar{b},\bar{\bar{b}},\bar{\bar{\bar{b}}}}\otimes 1_{\mathcal{B}(b,\bar{b})})\end{array}\]
both hold (the first one for the same reason that equation **(iii)** holds; the second one holds since \(m\) is natural); hence, the upwards extended pentagon (corresponding to equations **(ii)** and **(iii)**) can now be extended to the left; a similar process can be applied to obtain a right extension, proving finally that the associativity equation **(i)** holds for \(\mathcal{A}\bigcirc\mathcal{B}\).
(1.2) Let's check now the unit axioms: are the following two equations true? (Cf. section 3)
\[\begin{array}{l}(1.2.1)\ M_{(a,b),(a,b),(\bar{a},\bar{b})}^{\mathcal{A}\bigcirc\mathcal{B}}\circ(1_{\mathcal{A}\bigcirc\mathcal{B}((a,b),(\bar{a},\bar{b}))}\otimes j_{(a,b)}^{\mathcal{A}\bigcirc\mathcal{B}})=\rho_{\mathcal{A}\bigcirc\mathcal{B}((a,b),(\bar{a},\bar{b}))},\\ (1.2.2)\ M_{(a,b),(\bar{a},\bar{b}),(\bar{a},\bar{b})}^{\mathcal{A}\bigcirc\mathcal{B}}\circ(j_{(\bar{a},\bar{b})}^{\mathcal{A}\bigcirc\mathcal{B}}\otimes 1_{\mathcal{A}\bigcirc\mathcal{B}((a,b),(\bar{a},\bar{b}))})\ =\ \rho_{\mathcal{A}\bigcirc\mathcal{B}((a,b),(\bar{a},\bar{b}))}\circ\gamma_{E,\mathcal{A}\bigcirc\mathcal{B}((a,b),(\bar{a},\bar{b}))};\end{array}\]
(1.2.1) as \(\mathcal{A}\) and \(\mathcal{B}\) are \(\mathbb{C}\)-categories, then the two equalities \(M_{a,a,\bar{a}}^{\mathcal{A}}\circ(1_{\mathcal{A}(a,\bar{a})}\otimes j_{a}^{ \mathcal{A}})=\rho_{\mathcal{A}(a,\bar{a})}\) and \(M_{b,b,\bar{b}}^{\mathcal{B}}\circ(1_{\mathcal{B}(b,\bar{b})}\otimes j_{b}^{ \mathcal{B}})=\rho_{\mathcal{B}(b,\bar{b})}\) hold, which imply by tensoring,
\[\textbf{(iv)}\ (M_{a,a,\bar{a}}^{\mathcal{A}}\otimes M_{b,b,\bar{b}}^{ \mathcal{B}})\circ((1_{\mathcal{A}(a,\bar{a})}\otimes j_{a}^{\mathcal{A}}) \otimes(1_{\mathcal{B}(b,\bar{b})}\otimes j_{b}^{\mathcal{B}}))=\rho_{ \mathcal{A}(a,\bar{a})}\otimes\rho_{\mathcal{B}(b,\bar{b})};\]
the triangular diagram corresponding to equation **(iv)** can be extended to the left with the square corresponding to the equation
\[\textbf{(v)}\ ((1_{\mathcal{A}(a,\bar{a})}\otimes j_{a}^{\mathcal{A}}) \otimes(1_{\mathcal{B}(b,\bar{b})}\otimes j_{b}^{\mathcal{B}}))\circ m_{ \mathcal{A}(a,\bar{a}),\mathcal{B}(b,\bar{b}),E,E}=m_{\mathcal{A}(a,\bar{a}), \mathcal{B}(b,\bar{b}),\mathcal{A}(a,a),\mathcal{B}(b,b)}^{\circ}\] \[((1_{\mathcal{A}(a,\bar{a})}\otimes 1_{\mathcal{B}(b,\bar{b})}) \otimes(j_{a}^{\mathcal{A}}\otimes j_{b}^{\mathcal{B}})),\]
which holds since \(m\) is natural;
the equation
**(vi)**\((\rho_{\mathcal{A}(a,\bar{a})}\otimes\rho_{\mathcal{B}(b,\bar{b})})\circ m_{\mathcal{A}(a,\bar{a}),\mathcal{B}(b,\bar{b}),E,E}\circ(1_{\mathcal{A}(a,\bar{a})\otimes\mathcal{B}(b,\bar{b})}\otimes\rho_{E}^{-1})=\rho_{\mathcal{A}(a,\bar{a})\otimes\mathcal{B}(b,\bar{b})}\)
holds, since every diagram of natural transformations commutes, provided each arrow of which is obtained by repeatedly applying the functor \(\otimes\) to instances of \(\alpha\), \(\gamma\), \(\rho\), their inverses and \(1\) (Cf. [7, SSVII] for a precise formulation);
the diagram corresponding to **(vi)** extends downwards the diagram corresponding to the extension of **(iv)** by **(v)**, giving the equation
\[(M^{\mathcal{A}}_{a,a,\bar{a}}\otimes M^{\mathcal{B}}_{b,b,\bar{b}})\circ m_{\mathcal{A}(a,\bar{a}),\mathcal{B}(b,\bar{b}),\mathcal{A}(a,a),\mathcal{B}(b,b)}\circ(1_{\mathcal{A}(a,\bar{a})\otimes\mathcal{B}(b,\bar{b})}\otimes(j^{\mathcal{A}}_{a}\otimes j^{\mathcal{B}}_{b}))\circ\]

\[\circ(1_{\mathcal{A}(a,\bar{a})\otimes\mathcal{B}(b,\bar{b})}\otimes\rho_{E}^{-1})=\rho_{\mathcal{A}(a,\bar{a})\otimes\mathcal{B}(b,\bar{b})}\]
which is exactly equation (1.2.1);
equation (1.2.2) also holds, whose proof can be obtained by mimicking the proof of (1.2.1) just above.
(2) Is \(T\bigcirc S:{\mathcal{A}}\bigcirc{\mathcal{B}}\to{\mathcal{A}}^{\prime} \bigcirc{\mathcal{B}}^{\prime}\) a \(\mathbb{C}\)-functor?
(2.1) \(T\bigcirc S_{(a,b),(\bar{\bar{a}},\bar{\bar{b}})}\circ M^{\mathcal{A}\bigcirc\mathcal{B}}_{(a,b),(\bar{a},\bar{b}),(\bar{\bar{a}},\bar{\bar{b}})}=M^{\mathcal{A}^{\prime}\bigcirc\mathcal{B}^{\prime}}_{(T(a),S(b)),(T(\bar{a}),S(\bar{b})),(T(\bar{\bar{a}}),S(\bar{\bar{b}}))}\circ\)
\((T\bigcirc S_{(\bar{a},\bar{b}),(\bar{\bar{a}},\bar{\bar{b}})}\otimes T\bigcirc S_{(a,b),(\bar{a},\bar{b})})\), for every \(a,\bar{a},\bar{\bar{a}}\in\mathcal{A}\) and \(b,\bar{b},\bar{\bar{b}}\in\mathcal{B}\):
as \(T\) and \(S\) are \(\mathbb{C}\)-functors, the following two equations hold,
\[T_{a,\bar{\bar{a}}}\circ M^{\mathcal{A}}_{a,\bar{a},\bar{\bar{a}}}=M^{\mathcal{A}^{\prime}}_{T(a),T(\bar{a}),T(\bar{\bar{a}})}\circ(T_{\bar{a},\bar{\bar{a}}}\otimes T_{a,\bar{a}})\]
and
\[S_{b,\bar{\bar{b}}}\circ M^{\mathcal{B}}_{b,\bar{b},\bar{\bar{b}}}=M^{\mathcal{B}^{\prime}}_{S(b),S(\bar{b}),S(\bar{\bar{b}})}\circ(S_{\bar{b},\bar{\bar{b}}}\otimes S_{b,\bar{b}});\]
then, by tensoring,
\[(T_{a,\bar{\bar{a}}}\otimes S_{b,\bar{\bar{b}}})\circ(M^{\mathcal{A}}_{a,\bar{a},\bar{\bar{a}}}\otimes M^{\mathcal{B}}_{b,\bar{b},\bar{\bar{b}}})=(M^{\mathcal{A}^{\prime}}_{T(a),T(\bar{a}),T(\bar{\bar{a}})}\otimes M^{\mathcal{B}^{\prime}}_{S(b),S(\bar{b}),S(\bar{\bar{b}})})\circ((T_{\bar{a},\bar{\bar{a}}}\otimes T_{a,\bar{a}})\otimes(S_{\bar{b},\bar{\bar{b}}}\otimes S_{b,\bar{b}})),\]
which, together with the following equation,
\[((T_{\bar{a},\bar{\bar{a}}}\otimes T_{a,\bar{a}})\otimes(S_{\bar{b},\bar{\bar{b}}}\otimes S_{b,\bar{b}}))\circ m_{\mathcal{A}(\bar{a},\bar{\bar{a}}),\mathcal{B}(\bar{b},\bar{\bar{b}}),\mathcal{A}(a,\bar{a}),\mathcal{B}(b,\bar{b})}=\]
\[=m_{\mathcal{A}^{\prime}(T(\bar{a}),T(\bar{\bar{a}})),\mathcal{B}^{\prime}(S(\bar{b}),S(\bar{\bar{b}})),\mathcal{A}^{\prime}(T(a),T(\bar{a})),\mathcal{B}^{\prime}(S(b),S(\bar{b}))}\circ((T_{\bar{a},\bar{\bar{a}}}\otimes S_{\bar{b},\bar{\bar{b}}})\otimes(T_{a,\bar{a}}\otimes S_{b,\bar{b}}))\]
arising from the naturality of \(m\), gives the equation
\[(T_{a,\bar{\bar{a}}}\otimes S_{b,\bar{\bar{b}}})\circ(M^{\mathcal{A}}_{a,\bar{a},\bar{\bar{a}}}\otimes M^{\mathcal{B}}_{b,\bar{b},\bar{\bar{b}}})\circ m_{\mathcal{A}(\bar{a},\bar{\bar{a}}),\mathcal{B}(\bar{b},\bar{\bar{b}}),\mathcal{A}(a,\bar{a}),\mathcal{B}(b,\bar{b})}=\]
\[=(M^{\mathcal{A}^{\prime}}_{T(a),T(\bar{a}),T(\bar{\bar{a}})}\otimes M^{\mathcal{B}^{\prime}}_{S(b),S(\bar{b}),S(\bar{\bar{b}})})\circ m_{\mathcal{A}^{\prime}(T(\bar{a}),T(\bar{\bar{a}})),\mathcal{B}^{\prime}(S(\bar{b}),S(\bar{\bar{b}})),\mathcal{A}^{\prime}(T(a),T(\bar{a})),\mathcal{B}^{\prime}(S(b),S(\bar{b}))}\circ((T_{\bar{a},\bar{\bar{a}}}\otimes S_{\bar{b},\bar{\bar{b}}})\otimes(T_{a,\bar{a}}\otimes S_{b,\bar{b}}))\]
which is exactly the compatibility with composition (2.1), according to the definitions given above.
(2.2) \(j^{\mathcal{A}^{\prime}\bigcirc\mathcal{B}^{\prime}}_{(T(a),S(b))}=T\bigcirc S_{(a,b),(a,b)}\circ j^{\mathcal{A}\bigcirc\mathcal{B}}_{(a,b)}\), for every \(a\in\mathcal{A}\) and \(b\in\mathcal{B}\):
as \(T\) and \(S\) are \(\mathbb{C}\)-functors, the following two equations hold,
\[j^{{\mathcal{A}}^{\prime}}_{T(a)}=T_{a,a}\circ j^{{\mathcal{A}}}_{a},\]
\[j^{{\mathcal{B}}^{\prime}}_{S(b)}=S_{b,b}\circ j^{{\mathcal{B}}}_{b},\]
and so
\[j^{{\mathcal{A}}^{\prime}}_{T(a)}\otimes j^{{\mathcal{B}}^{\prime}}_{S(b)}=(T _{a,a}\otimes S_{b,b})\circ(j^{{\mathcal{A}}}_{a}\otimes j^{{\mathcal{B}}}_{b});\]
composing with the isomorphism \(\rho_{E}^{-1}:E\to E\otimes E\), one gets
\[(j^{\mathcal{A}^{\prime}}_{T(a)}\otimes j^{\mathcal{B}^{\prime}}_{S(b)})\circ\rho_{E}^{-1}=(T_{a,a}\otimes S_{b,b})\circ(j^{\mathcal{A}}_{a}\otimes j^{\mathcal{B}}_{b})\circ\rho_{E}^{-1}\]
which is the compatibility with identities (2.2), according to the definitions given above.
(3) Is \(\bigcirc\) a (bi)functor?
(3.1) \((\bigcirc 1_{(\mathcal{A},\mathcal{B})}=\bigcirc(1_{\mathcal{A}},1_{\mathcal{B}})=)1_{\mathcal{A}}\bigcirc 1_{\mathcal{B}}=1_{\mathcal{A}\bigcirc\mathcal{B}}\), the image of any identity is an identity (cf. the characterization of identity morphisms in \(\mathbb{C}-Cat\), given in section 3 just before Proposition 3.1):
\(ob(1_{\mathcal{A}}\bigcirc 1_{\mathcal{B}})=ob1_{\mathcal{A}}\times ob1_{\mathcal{B}}=1_{ob(\mathcal{A})}\times 1_{ob(\mathcal{B})}\), by definition of \(1_{\mathcal{A}}\) and \(1_{\mathcal{B}}\), \(=1_{ob(\mathcal{A})\times ob(\mathcal{B})}\), since \(\times\) is a bifunctor for \(Set\)
\((1_{{\mathcal{A}}}\bigcirc 1_{{\mathcal{B}}})_{(a,b),(\bar{a},\bar{b})}=(1_{{ \mathcal{A}}})_{(a,\bar{a})}\otimes(1_{{\mathcal{B}}})_{(b,\bar{b})}=1_{{ \mathcal{A}}(a,\bar{a})}\otimes 1_{{\mathcal{B}}(b,\bar{b})}\), by definition of \(1_{{\mathcal{A}}}\) and \(1_{{\mathcal{B}}}\)
\(=1_{{\mathcal{A}}(a,\bar{a})\otimes{\mathcal{B}}(b,\bar{b})}:{\mathcal{A}}(a, \bar{a})\otimes{\mathcal{B}}(b,\bar{b})\rightarrow{\mathcal{A}}(a,\bar{a}) \otimes{\mathcal{B}}(b,\bar{b})\), because \(\otimes\) is a bifunctor for \({\mathbb{C}}\).
(3.2) \((T^{\prime}\bigcirc S^{\prime})\circ(T\bigcirc S)=(T^{\prime}\circ T)\bigcirc (S^{\prime}\circ S)\):8
Footnote 8: Cf. the footnotes in subsection 4.1.
\(ob((T^{\prime}\bigcirc S^{\prime})\circ(T\bigcirc S))=ob(T^{\prime}\bigcirc S ^{\prime})\circ ob(T\bigcirc S)\)
\(=(obT^{\prime}\times obS^{\prime})\circ(obT\times obS)=(obT^{\prime}\circ obT )\times(obS^{\prime}\circ obS)\), because \(\times\) is a bifunctor for \(Set\)
\(=ob(T^{\prime}\circ T)\times ob(S^{\prime}\circ S)=ob((T^{\prime}\circ T) \bigcirc(S^{\prime}\circ S))\);
\(((T^{\prime}\bigcirc S^{\prime})\circ(T\bigcirc S))_{(a,b),(\bar{a},\bar{b})}=(T^{\prime}\bigcirc S^{\prime})_{(T(a),S(b)),(T(\bar{a}),S(\bar{b}))}\circ(T\bigcirc S)_{(a,b),(\bar{a},\bar{b})}\)
\(=(T^{\prime}_{T(a),T(\bar{a})}\otimes S^{\prime}_{S(b),S(\bar{b})})\circ(T_{a,\bar{ a}}\otimes S_{b,\bar{b}})\)
\(=(T^{\prime}_{T(a),T(\bar{a})}\circ T_{a,\bar{a}})\otimes(S^{\prime}_{S(b),S( \bar{b})}\circ S_{b,\bar{b}})\), because \(\otimes\) is a bifunctor for \(\mathbb{C}\)
\(=(T^{\prime}\circ T)_{a,\bar{a}}\otimes(S^{\prime}\circ S)_{b,\bar{b}}=((T^{ \prime}\circ T)\bigcirc(S^{\prime}\circ S))_{(a,b),(\bar{a},\bar{b})}\).
Conclusion: now, at the end of subsection 5.1, we can state that \(\bigcirc\) is indeed a (bi)functor.
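In the sketch, the hom-map of \(T\bigcirc S\) is literally the pair of hom-maps, matching \((T\bigcirc S)_{(a,b),(\bar{a},\bar{b})}=T_{a,\bar{a}}\otimes S_{b,\bar{b}}\); functoriality (3.1) and (3.2) hold definitionally here.

```haskell
-- (T () S) on a hom-pair: apply the two hom-maps componentwise.
tensorCF :: CFunctor t homA homA' -> CFunctor s homB homB'
         -> (homA a a', homB b b')
         -> (homA' (t a) (t a'), homB' (s b) (s b'))
tensorCF tmap smap (f, g) = (tmap f, smap g)
```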
### A natural isomorphism \(\wedge\) for \(\mathbb{C}-Cat\)
We define
\[\wedge_{\mathcal{A},\mathcal{B},\mathcal{C}}:(\mathcal{A}\bigcirc\mathcal{B}) \bigcirc\mathcal{C}\to\mathcal{A}\bigcirc(\mathcal{B}\bigcirc\mathcal{C})\]
as
\[ob\wedge_{\mathcal{A},\mathcal{B},\mathcal{C}}:(ob(\mathcal{A})\times ob( \mathcal{B}))\times ob(\mathcal{C})\to ob(\mathcal{A})\times(ob(\mathcal{B}) \times ob(\mathcal{C}))\]
the canonical isomorphism in \(Set\), and
\((\wedge_{\mathcal{A},\mathcal{B},\mathcal{C}})_{((a,b),c),((\bar{a},\bar{b}), \bar{c})}=\alpha_{\mathcal{A}(a,\bar{a}),\mathcal{B}(b,\bar{b}),\mathcal{C}(c,\bar{c})}:(\mathcal{A}(a,\bar{a})\otimes\mathcal{B}(b,\bar{b}))\otimes \mathcal{C}(c,\bar{c})\to\mathcal{A}(a,\bar{a})\otimes(\mathcal{B}(b,\bar{b}) \otimes\mathcal{C}(c,\bar{c}))\),
for every \(\mathcal{A},\mathcal{B},\mathcal{C}\in\mathbb{C}-Cat\) and every three pairs \((a,\bar{a})\in ob(\mathcal{A})\times ob(\mathcal{A})\), \((b,\bar{b})\in ob(\mathcal{B})\times ob(\mathcal{B})\) and \((c,\bar{c})\in ob(\mathcal{C})\times ob(\mathcal{C})\).
Is \(\wedge\) natural? That is, for every triple of \(\mathbb{C}\)-functors \(T:\mathcal{A}\to\mathcal{A}^{\prime}\), \(S:\mathcal{B}\to\mathcal{B}^{\prime}\), \(R:\mathcal{C}\to\mathcal{C}^{\prime}\), does the diagram corresponding to the following equation commute?
\[(T\bigcirc(S\bigcirc R))\circ\wedge_{\mathcal{A},\mathcal{B},\mathcal{C}}= \wedge_{\mathcal{A}^{\prime},\mathcal{B}^{\prime},\mathcal{C}^{\prime}}\circ( (T\bigcirc S)\bigcirc R)).\]
The image of this equation by the functor \(ob:\mathbb{C}-Cat\to Set\) is obviously true, since both sides send \(((a,b),c)\) to \((T(a),(S(b),R(c)))\).
As \(\alpha\) is natural by assumption, the following equation holds,
\((T_{a,\bar{a}}\otimes(S_{b,\bar{b}}\otimes R_{c,\bar{c}}))\circ\alpha_{ \mathcal{A}(a,\bar{a}),\mathcal{B}(b,\bar{b}),\mathcal{C}(c,\bar{c})}=\alpha_ {\mathcal{A}^{\prime}(T(a),T(\bar{a})),\mathcal{B}^{\prime}(S(b),S(\bar{b})), \mathcal{C}^{\prime}(R(c),R(\bar{c}))}\circ\)
\(((T_{a,\bar{a}}\otimes S_{b,\bar{b}})\otimes R_{c,\bar{c}})\),
for every three pairs \((a,\bar{a})\in ob(\mathcal{A})\times ob(\mathcal{A})\), \((b,\bar{b})\in ob(\mathcal{B})\times ob(\mathcal{B})\) and \((c,\bar{c})\in ob(\mathcal{C})\times ob(\mathcal{C})\); hence, \(\wedge\) is natural.
The pentagon coherence axiom corresponds to the equation
\[\wedge_{\mathcal{A},\mathcal{B},\mathcal{C}\bigcirc\mathcal{D}}\circ\wedge_{ \mathcal{A}\bigcirc\mathcal{B},\mathcal{C},\mathcal{D}}=(1_{\mathcal{A}} \bigcirc\wedge_{\mathcal{B},\mathcal{C},\mathcal{D}})\circ\wedge_{\mathcal{A},\mathcal{B}\bigcirc\mathcal{C},\mathcal{D}}\circ(\wedge_{\mathcal{A}, \mathcal{B},\mathcal{C}}\bigcirc 1_{\mathcal{D}}),\]
whose image by the functor \(ob:\mathbb{C}-Cat\to Set\) is obviously true, since both sides send \((((a,b),c),d)\) to \((a,(b,(c,d)))\);
and, as \(\alpha\) satisfies the coherence axioms in \(\mathbb{C}\), it follows that the following equation holds,
\(\alpha_{\mathcal{A}(a,\bar{a}),\mathcal{B}(b,\bar{b}),\mathcal{C}(c,\bar{c}) \otimes\mathcal{D}(d,\bar{d})}\circ\alpha_{\mathcal{A}(a,\bar{a})\otimes \mathcal{B}(b,\bar{b}),\mathcal{C}(c,\bar{c}),\mathcal{D}(d,\bar{d})}=(1_{ \mathcal{A}(a,\bar{a})}\otimes\alpha_{\mathcal{B}(b,\bar{b}),\mathcal{C}(c, \bar{c}),\mathcal{D}(d,\bar{d})})\circ\alpha_{\mathcal{A}(a,\bar{a}),\mathcal{B }(b,\bar{b})\otimes\mathcal{C}(c,\bar{c}),\mathcal{D}(d,\bar{d})}\circ(\alpha_ {\mathcal{A}(a,\bar{a}),\mathcal{B}(b,\bar{b}),\mathcal{C}(c,\bar{c})}\otimes 1_{ \mathcal{D}(d,\bar{d})})\),
for every four pairs \((a,\bar{a})\in ob(\mathcal{A})\times ob(\mathcal{A})\), \((b,\bar{b})\in ob(\mathcal{B})\times ob(\mathcal{B})\), \((c,\bar{c})\in ob(\mathcal{C})\times ob(\mathcal{C})\) and \((d,\bar{d})\in ob(\mathcal{D})\times ob(\mathcal{D})\); therefore, the pentagon coherence axiom holds for \(\wedge\).
### A symmetry \(\Gamma\) for \(\mathbb{C}-Cat\)
We define
\[\Gamma_{\mathcal{A},\mathcal{B}}:\mathcal{A}\bigcirc\mathcal{B}\to\mathcal{B} \bigcirc\mathcal{A}\]
as
\[ob\Gamma_{\mathcal{A},\mathcal{B}}:ob(\mathcal{A})\times ob(\mathcal{B}) \to ob(\mathcal{B})\times ob(\mathcal{A}),\]
the canonical isomorphism in \(Set\), and
\[(\Gamma_{\mathcal{A},\mathcal{B}})_{(a,b),(\bar{a},\bar{b})}=\gamma_{ \mathcal{A}(a,\bar{a}),\mathcal{B}(b,\bar{b})}:\mathcal{A}(a,\bar{a})\otimes \mathcal{B}(b,\bar{b})\to\mathcal{B}(b,\bar{b})\otimes\mathcal{A}(a,\bar{a}),\]
for every pair \(\mathcal{A},\mathcal{B}\) of \(\mathbb{C}\)-categories and every \(a,\bar{a}\in ob(\mathcal{A})\) and every \(b,\bar{b}\in ob(\mathcal{B})\).
Is \(\Gamma\) natural? That is, for every pair of \(\mathbb{C}\)-functors \(T:\mathcal{A}\to\mathcal{A}^{\prime}\), \(S:\mathcal{B}\to\mathcal{B}^{\prime}\), does the diagram corresponding to the following equation commute?
\[(S\bigcirc T)\circ\Gamma_{\mathcal{A},\mathcal{B}}=\Gamma_{\mathcal{A}^{ \prime},\mathcal{B}^{\prime}}\circ(T\bigcirc S).\]
The image of this equation by the functor \(ob:\mathbb{C}-Cat\to Set\) is obviously true, since both sides send \((a,b)\) to \((S(b),T(a))\).
As \(\gamma\) is natural by assumption, the following equation holds,
\[(S_{b,\bar{b}}\otimes T_{a,\bar{a}})\circ\gamma_{\mathcal{A}(a,\bar{a}), \mathcal{B}(b,\bar{b})}=\gamma_{\mathcal{A}^{\prime}(T(a),T(\bar{a})), \mathcal{B}^{\prime}(S(b),S(\bar{b}))}\circ(T_{a,\bar{a}}\otimes S_{b,\bar{b }}),\]
for every two pairs \((a,\bar{a})\in ob(\mathcal{A})\times ob(\mathcal{A})\), \((b,\bar{b})\in ob(\mathcal{B})\times ob(\mathcal{B})\); hence, \(\Gamma\) is natural.
We will check now the coherence axioms corresponding to the two equations
\[(1)\ \Gamma_{\mathcal{B},\mathcal{A}}\circ\Gamma_{\mathcal{A},\mathcal{B}}=1_{ \mathcal{A}\bigcirc\mathcal{B}},\]
and
\[(2)\wedge_{\mathcal{B},\mathcal{C},\mathcal{A}}\circ\Gamma_{\mathcal{A}, \mathcal{B}\bigcirc\mathcal{C}}\circ\wedge_{\mathcal{A},\mathcal{B},\mathcal{C }}=(1_{\mathcal{B}}\bigcirc\Gamma_{\mathcal{A},\mathcal{C}})\circ\wedge_{ \mathcal{B},\mathcal{A},\mathcal{C}}\circ(\Gamma_{\mathcal{A},\mathcal{B}} \bigcirc 1_{\mathcal{C}}),\]
whose images by the functor \(ob:\mathbb{C}-Cat\to Set\) are obviously true, since
\[(1)\ (a,b)\mapsto(b,a)\mapsto(a,b),\]
and
\[(2)\ ((a,b),c)\mapsto(a,(b,c))\mapsto((b,c),a)\mapsto(b,(c,a))\quad\text{and}\quad((a,b),c)\mapsto((b,a),c)\mapsto(b,(a,c))\mapsto(b,(c,a));\]
as \(\gamma\) satisfies the corresponding coherence axioms in \(\mathbb{C}\), the hom-object components of \((1)\) and \((2)\) hold as well; therefore, the coherence axioms for \(\Gamma\) hold.

### A unit \(\mathfrak{E}\) and a natural isomorphism \(\Omega\) for \(\mathbb{C}-Cat\)

The unit \(\mathbb{C}\)-category \(\mathfrak{E}\) has a single object \(*\), hom-object \(\mathfrak{E}(*,*)=E\), composition law \(M^{\mathfrak{E}}_{*,*,*}=\rho_{E}:E\otimes E\to E\) and identity element \(j^{\mathfrak{E}}_{*}=1_{E}\). We define
\[\Omega_{\mathcal{A}}:\mathcal{A}\bigcirc\mathfrak{E}\to\mathcal{A}\]
as
\[ob\Omega_{\mathcal{A}}:ob(\mathcal{A})\times\{*\}\to ob(\mathcal{A}),\]
the canonical isomorphism in \(Set\), and
\[(\Omega_{\mathcal{A}})_{(a,*),(\bar{a},*)}=\rho_{\mathcal{A}(a,\bar{a})}:\mathcal{ A}(a,\bar{a})\otimes E\to\mathcal{A}(a,\bar{a}),\]
for every \(\mathbb{C}\)-category \(\mathcal{A}\) and \(a,\bar{a}\in ob(\mathcal{A})\).
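In the Cartesian sketch the hom-components of \(\wedge\), \(\Gamma\) and \(\Omega\) are the evident re-bracketing, swap and projection (again, illustrative names only):

```haskell
assocHom :: ((p, q), r) -> (p, (q, r))   -- component of /\ (i.e. alpha)
assocHom ((f, g), h) = (f, (g, h))

swapHom :: (p, q) -> (q, p)              -- component of Gamma (i.e. gamma)
swapHom (f, g) = (g, f)

omegaHom :: (p, ()) -> p                 -- component of Omega (i.e. rho)
omegaHom (f, ()) = f
```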
Is \(\Omega\) a natural isomorphism? That is, does the diagram corresponding to the following equation commute, for every \(\mathbb{C}\)-functor \(T:\mathcal{A}\to\mathcal{B}\)?
\[T\circ\Omega_{\mathcal{A}}=\Omega_{\mathcal{B}}\circ(T\bigcirc 1_{\mathfrak{E}}).\]
The image of this equation by the functor \(ob:\mathbb{C}-Cat\to Set\) is true, since both sides send \((a,*)\) to \(T(a)\).
As \(\rho\) is natural by assumption, the following equation holds,
\[T_{a,\bar{a}}\circ\rho_{\mathcal{A}(a,\bar{a})}=\rho_{\mathcal{B}(T(a),T(\bar{a}))}\circ(T_{a,\bar{a}}\otimes 1_{E}),\]
for every pair \((a,\bar{a})\in ob(\mathcal{A})\times ob(\mathcal{A})\); hence, \(\Omega\) is natural.
We will check now the coherence axiom corresponding to the following equation, for every \(\mathcal{A},\mathcal{B}\in\mathbb{C}-Cat\),
\[\Omega_{\mathcal{A}}\bigcirc 1_{\mathcal{B}}=(1_{\mathcal{A}}\bigcirc(\Omega_{\mathcal{B}}\circ\Gamma_{\mathfrak{E},\mathcal{B}}))\circ\wedge_{\mathcal{A},\mathfrak{E},\mathcal{B}}:(\mathcal{A}\bigcirc\mathfrak{E})\bigcirc\mathcal{B}\to\mathcal{A}\bigcirc(\mathfrak{E}\bigcirc\mathcal{B})\to\mathcal{A}\bigcirc(\mathcal{B}\bigcirc\mathfrak{E})\to\mathcal{A}\bigcirc\mathcal{B}\]
whose image by the functor \(ob:\mathbb{C}-Cat\to Set\) is obviously true, since \(((a,*),b)\mapsto(a,(*,b))\mapsto(a,(b,*))\mapsto(a,b)\);
as \(\rho\) satisfies the coherence axioms in \(\mathbb{C}\), then the following equation holds
\[\rho_{\mathcal{A}(a,\bar{a})}\otimes 1_{\mathcal{B}(b,\bar{b})}=(1_{ \mathcal{A}(a,\bar{a})}\otimes(\rho_{\mathcal{B}(b,\bar{b})}\circ\gamma_{E, \mathcal{B}(b,\bar{b})}))\circ\alpha_{\mathcal{A}(a,\bar{a}),E,B(b,\bar{b})}:\] \[(\mathcal{A}(a,\bar{a})\otimes E)\otimes\mathcal{B}(b,\bar{b}) \to\mathcal{A}(a,\bar{a})\otimes(E\otimes\mathcal{B}(b,\bar{b}))\to\mathcal{A }(a,\bar{a})\otimes\mathcal{B}(b,\bar{b})\]
for every two pairs \((a,\bar{a})\in ob(\mathcal{A})\times ob(\mathcal{A})\), \((b,\bar{b})\in ob(\mathcal{B})\times ob(\mathcal{B})\); therefore, the coherence axiom for \(\Omega\) holds.
## 6. The derived adjunction is (symmetric) monoidal
In section 5 it was shown that
\[(1)\ \mathbb{C}-Cat=(\mathbb{C}-Cat,\bigcirc,\mathfrak{E},\wedge,\Gamma, \Omega)\]
is a symmetric monoidal category, obtained from the symmetric monoidal category \(\mathbb{C}=(\mathbb{C},\otimes,E,\alpha,\gamma,\rho)\).
Analogously,
\[(2)\ \mathbb{X}-Cat=(\mathbb{X}-Cat,\nabla,\mathcal{I},\vee,\top,\Re)\]
can be obtained from the symmetric monoidal category \(\mathbb{X}=(\mathbb{X},\Diamond,I,\mathfrak{a},t,r)\), being also a symmetric monoidal category.
In this section 6, it will be shown that the derived adjunction (cf. 4.1)
\[(\mathbb{F},\mathbb{G},\Theta,\Upsilon):\mathbb{C}-Cat\to\mathbb{X}-Cat\]
is a (symmetric) monoidal adjunction, just like the adjunction \((F,G,\eta,\varepsilon):\mathbb{C}\to\mathbb{X}\), but with respect to the derived monoidal categories \((1)\) and \((2)\) of the paragraph above.
### \(\mathbb{F}\) preserves the (symmetric) monoidal structure
* \(\mathbb{F}(\mathcal{A}\bigcirc\mathcal{B})=\mathbb{F}(\mathcal{A})\nabla\mathbb{F}(\mathcal{B})\), for every \(\mathcal{A},\mathcal{B}\in\mathbb{C}-Cat\), because: \(ob\mathbb{F}(\mathcal{A}\bigcirc\mathcal{B})=ob(\mathcal{A}\bigcirc\mathcal{B})=ob(\mathcal{A})\times ob(\mathcal{B})=ob\mathbb{F}(\mathcal{A})\times ob\mathbb{F}(\mathcal{B})=ob(\mathbb{F}(\mathcal{A})\nabla\mathbb{F}(\mathcal{B}))\); \(\mathbb{F}(\mathcal{A}\bigcirc\mathcal{B})((a,b),(\bar{a},\bar{b}))=F(\mathcal{A}\bigcirc\mathcal{B}((a,b),(\bar{a},\bar{b})))=F(\mathcal{A}(a,\bar{a})\otimes\mathcal{B}(b,\bar{b}))=F(\mathcal{A}(a,\bar{a}))\Diamond F(\mathcal{B}(b,\bar{b}))=\mathbb{F}(\mathcal{A})(a,\bar{a})\Diamond\mathbb{F}(\mathcal{B})(b,\bar{b})=(\mathbb{F}(\mathcal{A})\nabla\mathbb{F}(\mathcal{B}))((a,b),(\bar{a},\bar{b}))\), \(M^{\mathbb{F}(\mathcal{A}\bigcirc\mathcal{B})}_{(a,b),(\bar{a},\bar{b}),(\bar{\bar{a}},\bar{\bar{b}})}=F(M^{\mathcal{A}\bigcirc\mathcal{B}}_{(a,b),(\bar{a},\bar{b}),(\bar{\bar{a}},\bar{\bar{b}})})\) \(=F((M^{\mathcal{A}}_{a,\bar{a},\bar{\bar{a}}}\otimes M^{\mathcal{B}}_{b,\bar{b},\bar{\bar{b}}})\circ m_{\mathcal{A}(\bar{a},\bar{\bar{a}}),\mathcal{B}(\bar{b},\bar{\bar{b}}),\mathcal{A}(a,\bar{a}),\mathcal{B}(b,\bar{b})})\) \(=F(M^{\mathcal{A}}_{a,\bar{a},\bar{\bar{a}}}\otimes M^{\mathcal{B}}_{b,\bar{b},\bar{\bar{b}}})\circ Fm_{\mathcal{A}(\bar{a},\bar{\bar{a}}),\mathcal{B}(\bar{b},\bar{\bar{b}}),\mathcal{A}(a,\bar{a}),\mathcal{B}(b,\bar{b})}\) \(=(F(M^{\mathcal{A}}_{a,\bar{a},\bar{\bar{a}}})\Diamond F(M^{\mathcal{B}}_{b,\bar{b},\bar{\bar{b}}}))\circ m_{F(\mathcal{A}(\bar{a},\bar{\bar{a}})),F(\mathcal{B}(\bar{b},\bar{\bar{b}})),F(\mathcal{A}(a,\bar{a})),F(\mathcal{B}(b,\bar{b}))}\) \(=(M^{\mathbb{F}(\mathcal{A})}_{a,\bar{a},\bar{\bar{a}}}\Diamond M^{\mathbb{F}(\mathcal{B})}_{b,\bar{b},\bar{\bar{b}}})\circ m_{\mathbb{F}(\mathcal{A})(\bar{a},\bar{\bar{a}}),\mathbb{F}(\mathcal{B})(\bar{b},\bar{\bar{b}}),\mathbb{F}(\mathcal{A})(a,\bar{a}),\mathbb{F}(\mathcal{B})(b,\bar{b})}\) \(=M^{\mathbb{F}(\mathcal{A})\nabla\mathbb{F}(\mathcal{B})}_{(a,b),(\bar{a},\bar{b}),(\bar{\bar{a}},\bar{\bar{b}})}\), \(j^{\mathbb{F}(\mathcal{A}\bigcirc\mathcal{B})}_{(a,b)}=Fj^{\mathcal{A}\bigcirc\mathcal{B}}_{(a,b)}=F((j^{\mathcal{A}}_{a}\otimes j^{\mathcal{B}}_{b})\circ\rho_{E}^{-1})=F(j^{\mathcal{A}}_{a}\otimes j^{\mathcal{B}}_{b})\circ r_{I}^{-1}=\) \((j^{\mathbb{F}(\mathcal{A})}_{a}\Diamond j^{\mathbb{F}(\mathcal{B})}_{b})\circ r_{I}^{-1}=j^{\mathbb{F}(\mathcal{A})\nabla\mathbb{F}(\mathcal{B})}_{(a,b)},\) for every three pairs \((a,b),(\bar{a},\bar{b}),(\bar{\bar{a}},\bar{\bar{b}})\in ob(\mathcal{A})\times ob(\mathcal{B})\).
* \(\mathbb{F}(T\bigcirc S)=\mathbb{F}(T)\nabla\mathbb{F}(S)\), for every two \(\mathbb{C}\)-functors \(T:\mathcal{A}\to\mathcal{A}^{\prime}\) and \(S:\mathcal{B}\to\mathcal{B}^{\prime}\), because: \(ob\mathbb{F}(T\bigcirc S)=ob(T\bigcirc S)=obT\times obS=ob\mathbb{F}(T)\times ob\mathbb{F}(S)=ob(\mathbb{F}(T)\nabla\mathbb{F}(S))\); \((\mathbb{F}(T\bigcirc S))_{(a,b),(\bar{a},\bar{b})}=F(T\bigcirc S)_{(a,b),(\bar{a},\bar{b})}=F(T_{a,\bar{a}}\otimes S_{b,\bar{b}})=FT_{a,\bar{a}}\Diamond FS_{b,\bar{b}}=(\mathbb{F}T)_{a,\bar{a}}\Diamond(\mathbb{F}S)_{b,\bar{b}}=(\mathbb{F}T\nabla\mathbb{F}S)_{(a,b),(\bar{a},\bar{b})}\), for every two pairs \((a,b),(\bar{a},\bar{b})\in ob(\mathcal{A})\times ob(\mathcal{B})\).
* \(\mathbb{F}(\mathfrak{E})=\mathcal{I}\) because: \(ob\mathbb{F}(\mathfrak{E})=ob(\mathfrak{E})=\{*\}=ob(\mathcal{I})\);
\(\mathbb{F}(\mathfrak{E})(*,*)=F(\mathfrak{E}(*,*))=F(E)=I=\mathcal{I}(*,*)\); \(M_{*,*,*}^{\mathbb{F}(\mathfrak{E})}=FM_{*,*,*}^{\mathfrak{E}}=F\rho_{E}=r_{I}=M_{*,*,*}^{\mathcal{I}}:I\Diamond I\to I\), since \(F\) preserves \(\rho\); \(j_{*}^{\mathbb{F}(\mathfrak{E})}=Fj_{*}^{\mathfrak{E}}=F1_{E}=1_{F(E)}=1_{I}=j_{*}^{\mathcal{I}}:I\to I\), since \(F(E)=I\).
* \(\mathbb{F}\wedge_{\mathcal{A},\mathcal{B},\mathcal{C}}=\vee_{\mathbb{F}( \mathcal{A}),\mathbb{F}(\mathcal{B}),\mathbb{F}(\mathcal{C})}\), for every \(\mathcal{A},\mathcal{B},\mathcal{C}\in\mathbb{C}-Cat\), because: \(ob\mathbb{F}\wedge_{\mathcal{A},\mathcal{B},\mathcal{C}}=ob\wedge_{\mathcal{ A},\mathcal{B},\mathcal{C}}:(ob(\mathcal{A})\times ob(\mathcal{B}))\times ob( \mathcal{C})\to ob(\mathcal{A})\times(ob(\mathcal{B})\times ob(\mathcal{C}))\) is the canonical isomorphism in \(Set\), \(ob\vee_{\mathbb{F}(\mathcal{A}),\mathbb{F}(\mathcal{B}),\mathbb{F}(\mathcal{C })}:(ob\mathbb{F}(\mathcal{A})\times ob\mathbb{F}(\mathcal{B}))\times ob \mathbb{F}(\mathcal{C})\to ob\mathbb{F}(\mathcal{A})\times(ob\mathbb{F}( \mathcal{B})\times ob\mathbb{F}(\mathcal{C}))\) is also the canonical isomorphism in \(Set\), and \(ob\vee_{\mathbb{F}(\mathcal{A}),\mathbb{F}(\mathcal{B}),\mathbb{F}(\mathcal{C })}=ob\mathbb{F}\wedge_{\mathcal{A},\mathcal{B},\mathcal{C}}\) since \(ob\mathbb{F}(\mathcal{A})=ob(\mathcal{A})\), \(ob\mathbb{F}(\mathcal{B})=ob(\mathcal{B})\) and \(ob\mathbb{F}(\mathcal{C})=ob(\mathcal{C})\); \((\mathbb{F}\wedge_{\mathcal{A},\mathcal{B},\mathcal{C}})_{((a,b),c),((\bar{a },\bar{b}),\bar{c})}=F(\wedge_{\mathcal{A},\mathcal{B},\mathcal{C}})_{((a,b), c),((\bar{a},\bar{b}),\bar{c})}=F\alpha_{\mathcal{A}(a,\bar{a}),\mathcal{B}(b,\bar{b}), \mathcal{C}(c,\bar{c})}=\mathfrak{a}_{F(\mathcal{A}(a,\bar{a})),F(\mathcal{B }(b,\bar{b})),F(\mathcal{C}(c,\bar{c}))}=\mathfrak{a}_{\mathbb{F}(\mathcal{A} )(a,\bar{a}),\mathbb{F}(\mathcal{B})(b,\bar{b}),\mathbb{F}(\mathcal{C})(c, \bar{c})}=(\vee_{\mathbb{F}(\mathcal{A}),\mathbb{F}(\mathcal{B}),\mathbb{F}( \mathcal{C})})_{((a,b),c),((\bar{a},\bar{b}),\bar{c})}\), for every three pairs \((a,\bar{a})\in ob(\mathcal{A})\times ob(\mathcal{A}),(b,\bar{b})\in ob( \mathcal{B})\times ob(\mathcal{B})\) and \((c,\bar{c})\in ob(\mathcal{C})\times ob(\mathcal{C})\).
* \(\mathbb{F}\Gamma_{\mathcal{A},\mathcal{B}}=\top_{\mathbb{F}(\mathcal{A}),\mathbb{F}(\mathcal{B})}\), for every \(\mathcal{A},\mathcal{B}\in\mathbb{C}-Cat\), because: \(ob\mathbb{F}\Gamma_{\mathcal{A},\mathcal{B}}=ob\Gamma_{\mathcal{A},\mathcal{B}}:ob(\mathcal{A})\times ob(\mathcal{B})\to ob(\mathcal{B})\times ob(\mathcal{A})\) is the canonical isomorphism in \(Set\), \(ob\top_{\mathbb{F}(\mathcal{A}),\mathbb{F}(\mathcal{B})}:ob\mathbb{F}(\mathcal{A})\times ob\mathbb{F}(\mathcal{B})\to ob\mathbb{F}(\mathcal{B})\times ob\mathbb{F}(\mathcal{A})\) is also the canonical isomorphism in \(Set\), and \(ob\top_{\mathbb{F}(\mathcal{A}),\mathbb{F}(\mathcal{B})}=ob\mathbb{F}\Gamma_{\mathcal{A},\mathcal{B}}\) since \(ob\mathbb{F}(\mathcal{A})=ob(\mathcal{A})\) and \(ob\mathbb{F}(\mathcal{B})=ob(\mathcal{B})\); \((\mathbb{F}\Gamma_{\mathcal{A},\mathcal{B}})_{(a,b),(\bar{a},\bar{b})}=F(\Gamma_{\mathcal{A},\mathcal{B}})_{(a,b),(\bar{a},\bar{b})}=F\gamma_{\mathcal{A}(a,\bar{a}),\mathcal{B}(b,\bar{b})}=t_{F(\mathcal{A}(a,\bar{a})),F(\mathcal{B}(b,\bar{b}))}=t_{\mathbb{F}(\mathcal{A})(a,\bar{a}),\mathbb{F}(\mathcal{B})(b,\bar{b})}=(\top_{\mathbb{F}(\mathcal{A}),\mathbb{F}(\mathcal{B})})_{(a,b),(\bar{a},\bar{b})}\), for every two pairs \((a,b),(\bar{a},\bar{b})\in ob(\mathcal{A})\times ob(\mathcal{B})\).
* \(\mathbb{F}\mathds{U}_{\mathcal{A}}=\Re_{\mathbb{F}(\mathcal{A})}\), for every \(\mathcal{A}\in\mathbb{C}-Cat\), because: \(ob\mathbb{F}\mathds{U}_{\mathcal{A}}=ob\mathds{U}_{\mathcal{A}}:ob(\mathcal{A})\times\{*\}\to ob(\mathcal{A})\) is the canonical projection in \(Set\), \(ob\Re_{\mathbb{F}(\mathcal{A})}:ob\mathbb{F}(\mathcal{A})\times\{*\}\to ob\mathbb{F}(\mathcal{A})\) is also the canonical projection in \(Set\), and \(ob\mathbb{F}\mathds{U}_{\mathcal{A}}=ob\Re_{\mathbb{F}(\mathcal{A})}\) since \(ob\mathbb{F}(\mathcal{A})=ob(\mathcal{A})\); \((\mathbb{F}\mathds{U}_{\mathcal{A}})_{(a,*),(\bar{a},*)}=F(\mathds{U}_{\mathcal{A}})_{(a,*),(\bar{a},*)}=F\rho_{\mathcal{A}(a,\bar{a})}=r_{F(\mathcal{A}(a,\bar{a}))}=(\Re_{\mathbb{F}(\mathcal{A})})_{(a,*),(\bar{a},*)}\), for every pair \((a,\bar{a})\in ob(\mathcal{A})\times ob(\mathcal{A})\).
### \(\mathbb{G}\) preserves the (symmetric) monoidal structure
Analogously, the same conclusions, drawn for \(\mathbb{F}:\mathbb{C}-Cat\to\mathbb{X}-Cat\) in the previous subsection 6.1, hold as well for \(\mathbb{G}:\mathbb{X}-Cat\to\mathbb{C}-Cat\), which therefore preserves the symmetric monoidal structure.
## 7. Limits in a category \(\mathcal{V}-Cat\) of all \(\mathcal{V}\)-categories (\(\mathcal{V}=\mathbb{C},\mathbb{X}\))
In this section, it will be proved that the existence of limits in a category \(\mathbb{C}\) implies their existence in \(\mathbb{C}-Cat\) as well (cf. the following Proposition 7.1), showing how they can be obtained in \(\mathbb{C}-Cat\) using the limits in \(\mathbb{C}\) and in the category \(Set\) of all sets.
**Proposition 7.1**.: _Given a functor_
\[A:\mathbb{I}\to\mathbb{C}-Cat,\]
\[\tau:i\to j\mapsto T=A\tau:\mathcal{A}_{i}\to\mathcal{A}_{j},\]
_consider:_
* _the underlying object functor_ \[ob:\mathbb{C}-Cat\to Set,\] \[S:\mathcal{A}\to\mathcal{B}\mapsto obS:ob(\mathcal{A})\to ob(\mathcal{B});\]
* _its composition with_ \(A\)__ \[ob\circ A:\mathbb{I}\to\mathbb{C}-Cat\to Set,\] \[\tau:i\to j\mapsto obT=obA\tau:ob(\mathcal{A}_{i})\to ob(\mathcal{A}_{j}),\]
* _and the limiting cone of_ \(ob\circ A\)_,_9__ Footnote 9: \(\Delta:Set\to Set^{\mathbb{I}}\) is the diagonal functor, right-adjoint of \(\underleftarrow{Lim}\). \[ob\pi:\Delta ob(\mathcal{L})\to ob\circ A;\]
* _each projection of this limiting cone_ \(ob\pi\) _in_ \(Set\) _will be denoted by_ \[ob\pi^{i}:ob(\mathcal{L})\to ob(\mathcal{A}_{i});\]
* _if_ \(a\in ob(\mathcal{L})\) _then_ \(a=(a_{i})_{i\in\mathbb{I}}\)_, since_ \(ob(\mathcal{L})\subseteq\prod_{i\in\mathbb{I}}ob(\mathcal{A}_{i})\)_;_
* _for each pair_ \(a=(a_{i})_{i\in\mathbb{I}}\)_,_ \(b=(b_{i})_{i\in\mathbb{I}}\in ob(\mathcal{L})\)_, the functor_ \[A_{a,b}:\mathbb{I}\to\mathbb{C}\] \[\tau:i\to j\mapsto T_{a_{i},b_{i}}:\mathcal{A}_{i}(a_{i},b_{i})\to\mathcal{A}_{j} (a_{j},b_{j}),\] _where_ \(a_{i}=ob\pi^{i}(a)\)_,_ \(b_{i}=ob\pi^{i}(b)\)_,_ \(a_{j}=obT(a_{i})=ob\pi^{j}(a)\)_,_ \(b_{j}=obT(b_{i})=ob\pi^{j}(b)\)_._
_Suppose that \(A_{a,b}\) has a limiting cone in \(\mathbb{C}\),10_
Footnote 10: \(\Delta:\mathbb{I}\to\mathbb{C}^{\mathbb{I}}\), another diagonal functor.
\[\pi_{a,b}:\Delta\mathcal{L}(a,b)\to A_{a,b},\]
_with projections \(\pi^{i}_{a,b}:\mathcal{L}(a,b)\to\mathcal{A}_{i}(a_{i},b_{i})\), \(i\in\mathbb{I}\), for every pair \(a,b\in ob(\mathcal{L})\)._
_Then, the functor \(A\) has a limiting cone \(\pi:\Delta\mathcal{L}\to A\).11_
Footnote 11: \(\Delta:\mathbb{I}\to(\mathbb{C}-Cat)^{\mathbb{I}}\), another diagonal functor.
Proof.: First, the description of \(\mathcal{L}\in\mathbb{C}-Cat\) and of each \(\mathbb{C}\)-functor projection \(\pi^{i}:\mathcal{L}\to\mathcal{A}_{i}\) will be given, \(i\in\mathbb{I}\).
Secondly, it will be shown that \(\pi=(\pi^{i})_{i\in\mathbb{I}}\) is in fact a limiting cone in \(\mathbb{C}-Cat\).
In more detail: we will describe (1) \(ob(\mathcal{L})\);
(2) \(\mathcal{L}(a,b)\), (3) \(M^{\mathcal{L}}_{a,b,c}:\mathcal{L}(b,c)\otimes\mathcal{L}(a,b)\to\mathcal{L} (a,c)\), (4) \(j^{\mathcal{L}}_{a}:E\to\mathcal{L}(a,a)\), for all \(a,b,c\in ob(\mathcal{L})\);
and we will check if (5) \(\mathcal{L}\) is a \(\mathbb{C}\)-category, that is, if (5.1) the associativity axioms and (5.2) unit axioms hold;
(6) if \(\pi^{i}:\mathcal{L}\to\mathcal{A}_{i}\) is a \(\mathbb{C}\)-functor, for each \(i\in\mathbb{I}\);
and finally if (7) \(\pi\) is a limiting cone in \(\mathbb{C}-Cat\).
(1) The set \(ob(\mathcal{L})\) of objects was already given above in the statement, it is the limit of \(ob\circ A\) in \(Set\).
(2) For each pair \(a,b\) of objects of \(\mathcal{L}\), the hom-object \(\mathcal{L}(a,b)\) is the limit of \(A_{a,b}\) in \(\mathbb{C}\), which exists by assumption: \(\Delta\mathcal{L}(a,b)\overset{\pi_{a,b}}{\longrightarrow}A_{a,b}\) (the projections are \(\mathcal{L}(a,b)\overset{\pi_{a,b}^{i}}{\longrightarrow}A_{i}(a_{i},b_{i})\), \(i\in\mathbb{I}\)).
(3) The composition law for each triple \(a,b,c\) of objects is given by the unique morphism
\[M^{\mathcal{L}}_{a,b,c}:\mathcal{L}(b,c)\otimes\mathcal{L}(a,b)\to\mathcal{L} (a,c)\]
in the obvious limit diagram in \(\mathbb{C}\) corresponding to the equation
\[M_{a,b,c}\cdot(\pi_{b,c}\otimes\pi_{a,b})=\pi_{a,c}\cdot\Delta M^{\mathcal{L} }_{a,b,c}:\Delta(\mathcal{L}(b,c)\otimes\mathcal{L}(a,b))\to\Delta\mathcal{L} (a,c)\to A_{a,c},\]
where
\[M_{a,b,c}=(M^{\mathcal{A}_{i}}_{a_{i},b_{i},c_{i}})_{i\in\mathbb{I}}:\otimes \circ<A_{b,c},A_{a,b}>\to A_{a,c}:\mathbb{I}\to\mathbb{C}\times\mathbb{C}\to \mathbb{C}\]
is a natural transformation due to the compatibility with composition of the \(\mathbb{C}\)-functors,
\[\pi_{b,c}\otimes\pi_{a,b}=(\pi_{b,c}^{i}\otimes\pi_{a,b}^{i})_{i\in\mathbb{I }}:\Delta(\mathcal{L}(b,c)\otimes\mathcal{L}(a,b))\to\otimes\circ<A_{b,c},A_{ a,b}>:\mathbb{I}\to\mathbb{C}\times\mathbb{C}\to\mathbb{C}\]
is the cone obtained by tensoring the two cones
\(\pi_{b,c}:\Delta\mathcal{L}_{b,c}\to A_{b,c}:\mathbb{I}\to\mathbb{C}\) and \(\pi_{a,b}:\Delta\mathcal{L}_{a,b}\to A_{a,b}:\mathbb{I}\to\mathbb{C}\).
(4) The identity element is given by the unique morphism
\[j^{\mathcal{L}}_{a}:E\to\mathcal{L}(a,a),\]
for each object \(a\in ob(\mathcal{L})\), in the obvious limit diagram in \(\mathbb{C}\) corresponding to the equation
\[j_{a}=\pi_{a,a}\cdot\Delta j^{\mathcal{L}}_{a}:\Delta E\to\Delta\mathcal{L}(a, a)\to A_{a,a},\]
where
\[j_{a}=(j^{\mathcal{A}_{i}}_{a})_{i\in\mathbb{I}}:\Delta E\to A_{a,a}:\mathbb{I }\to\mathbb{C}\]
is a natural transformation due to the compatibility with identities of the \(\mathbb{C}\)-functors.
It is then a trivial task to check that (5.1) the associative and (5.2) unit axioms hold for \(\mathcal{L}\) so defined,
and therefore that (5) it is a \(\mathbb{C}\)-category:
checking the associativity axiom (5.1) is to ask if the following equation holds,
\[M^{\mathcal{L}}_{a,b,d}\circ(M^{\mathcal{L}}_{b,c,d}\otimes 1_{\mathcal{L}(a,b)})=M^{\mathcal{L}}_{a,c,d}\circ(1_{\mathcal{L}(c,d)}\otimes M^{\mathcal{L}}_{a,b,c})\circ\alpha_{\mathcal{L}(c,d),\mathcal{L}(b,c),\mathcal{L}(a,b)},\]
which is true by the uniqueness due to the limiting cone
\[\pi_{a,d}:\Delta\mathcal{L}(a,d)\to A(a,d),\]
since (cf. (2) and (3) just above for the notations)
\(M_{a,b,d}\cdot(M_{b,c,d}\otimes 1_{A(a,b)})\cdot((\pi_{c,d}\otimes\pi_{b,c})\otimes\pi_{a,b})\)
\(=M_{a,c,d}\cdot(1_{A(c,d)}\otimes M_{a,b,c})\cdot(\pi_{c,d}\otimes(\pi_{b,c} \otimes\pi_{a,b}))\cdot\Delta\alpha_{\mathcal{L}(c,d),\mathcal{L}(b,c),\mathcal{ L}(a,b)}:\Delta((\mathcal{L}(c,d)\otimes\mathcal{L}(b,c))\otimes\mathcal{L}(a,b))\to A(a,d)\);
consider the commutative diagram in \(\mathbb{C}\)
\[\begin{CD}\mathcal{L}(a,b)\otimes E@>{1_{\mathcal{L}(a,b)}\otimes j^{\mathcal{L}}_{a}}>>\mathcal{L}(a,b)\otimes\mathcal{L}(a,a)@>{M^{\mathcal{L}}_{a,a,b}}>>\mathcal{L}(a,b)\\@V{\pi_{a,b}^{i}\otimes 1_{E}}VV@V{\pi_{a,b}^{i}\otimes\pi_{a,a}^{i}}VV@VV{\pi_{a,b}^{i}}V\\\mathcal{A}_{i}(a_{i},b_{i})\otimes E@>{1_{\mathcal{A}_{i}(a_{i},b_{i})}\otimes j^{\mathcal{A}_{i}}_{a_{i}}}>>\mathcal{A}_{i}(a_{i},b_{i})\otimes\mathcal{A}_{i}(a_{i},a_{i})@>{M^{\mathcal{A}_{i}}_{a_{i},a_{i},b_{i}}}>>\mathcal{A}_{i}(a_{i},b_{i})\end{CD}\]
as \(\rho_{\mathcal{A}_{i}(a_{i},b_{i})}=M^{\mathcal{A}_{i}}_{a_{i},a_{i},b_{i}} \circ(1_{\mathcal{A}_{i}(a_{i},b_{i})}\otimes j^{\mathcal{A}_{i}}_{a_{i}})\), for every \(i\in\mathbb{I}\),
then
\(\pi_{a,b}^{i}\circ\rho_{\mathcal{L}(a,b)}=\rho_{\mathcal{A}_{i}(a_{i},b_{i})} \circ(\pi_{a,b}^{i}\otimes 1_{E})=M^{\mathcal{A}_{i}}_{a_{i},a_{i},b_{i}}\circ(1_{ \mathcal{A}_{i}(a_{i},b_{i})}\otimes j^{\mathcal{A}_{i}}_{a_{i}})\circ(\pi_{a,b}^{i}\otimes 1_{E})=\pi_{a,b}^{i}\circ M^{\mathcal{L}}_{a,a,b}\circ(1_{ \mathcal{L}(a,b)}\otimes j^{\mathcal{L}}_{a})\), for every \(i\in\mathbb{I}\),
and so, by the uniqueness due to the limiting cone \(\pi_{a,b}:\Delta\mathcal{L}(a,b)\to A(a,b)\), the right unit axiom (5.2.1) \(\rho_{\mathcal{L}(a,b)}=M^{\mathcal{L}}_{a,a,b}\circ(1_{\mathcal{L}(a,b)} \otimes j^{\mathcal{L}}_{a})\) holds;
consider the commutative diagram in \(\mathbb{C}\)
\[\begin{CD}E\otimes\mathcal{L}(a,b)@>{j^{\mathcal{L}}_{b}\otimes 1_{\mathcal{L}(a,b)}}>>\mathcal{L}(b,b)\otimes\mathcal{L}(a,b)@>{M^{\mathcal{L}}_{a,b,b}}>>\mathcal{L}(a,b)\\@V{1_{E}\otimes\pi_{a,b}^{i}}VV@V{\pi_{b,b}^{i}\otimes\pi_{a,b}^{i}}VV@VV{\pi_{a,b}^{i}}V\\E\otimes\mathcal{A}_{i}(a_{i},b_{i})@>{j^{\mathcal{A}_{i}}_{b_{i}}\otimes 1_{\mathcal{A}_{i}(a_{i},b_{i})}}>>\mathcal{A}_{i}(b_{i},b_{i})\otimes\mathcal{A}_{i}(a_{i},b_{i})@>{M^{\mathcal{A}_{i}}_{a_{i},b_{i},b_{i}}}>>\mathcal{A}_{i}(a_{i},b_{i})\end{CD}\]
as \(\rho_{\mathcal{A}_{i}(a_{i},b_{i})}\circ\gamma_{E,\mathcal{A}_{i}(a_{i},b_{i}) }=M^{\mathcal{A}_{i}}_{a_{i},b_{i},b_{i}}\circ(j^{\mathcal{A}_{i}}_{b_{i}} \otimes 1_{\mathcal{A}_{i}(a_{i},b_{i})})\), for every \(i\in\mathbb{I}\),
then
\(\pi_{a,b}^{i}\circ\rho_{\mathcal{L}(a,b)}\circ\gamma_{E,\mathcal{L}(a,b)}=\rho_{\mathcal{A}_{i}(a_{i},b_{i})}\circ\gamma_{E,\mathcal{A}_{i}(a_{i},b_{i})}\circ(1_{E}\otimes\pi_{a,b}^{i})=M^{\mathcal{A}_{i}}_{a_{i},b_{i},b_{i}}\circ(j^{\mathcal{A}_{i}}_{b_{i}}\otimes 1_{\mathcal{A}_{i}(a_{i},b_{i})})\circ(1_{E}\otimes\pi_{a,b}^{i})=\pi_{a,b}^{i}\circ M^{\mathcal{L}}_{a,b,b}\circ(j^{\mathcal{L}}_{b}\otimes 1_{\mathcal{L}(a,b)})\), for every \(i\in\mathbb{I}\),
and so, by the uniqueness due to the limiting cone \(\pi_{a,b}:\Delta\mathcal{L}(a,b)\to A(a,b)\), the left unit axiom (5.2.2) \(\rho_{\mathcal{L}(a,b)}\circ\gamma_{E,\mathcal{L}(a,b)}=M^{\mathcal{L}}_{a,b,b} \circ(j^{\mathcal{L}}_{b}\otimes 1_{\mathcal{L}(a,b)})\) holds.
It is also obvious that (6) \(\pi^{i}\) is a \(\mathbb{C}\)-functor, by its definition,
* \(ob\pi^{i}:ob(\mathcal{L})\to ob(\mathcal{A}_{i})\), \(a=(a_{i})_{i\in\mathbb{I}}\mapsto a_{i}\),
* \(\pi^{i}_{a,b}:\mathcal{L}(a,b)\rightarrow\mathcal{A}_{i}(a_{i},b_{i})\),
and by the definitions of \(M^{\mathcal{L}}_{a,b,c}\) and \(j^{\mathcal{L}}_{a}\), for every \(a,b,c\in ob(\mathcal{L})\). In fact, \(\pi^{i}\) is compatible with composition by definition of \(M^{\mathcal{L}}_{a,b,c}\), and \(\pi^{i}\) is compatible with the identities by definition of \(j^{\mathcal{L}}_{a}\), for every \(a,b,c\in ob(\mathcal{L})\).
It remains to show that \(\pi:\Delta\mathcal{L}\to A\) is a universal cone. Let \(\lambda:\Delta\mathcal{A}\to A\) be another cone, from any \(\mathcal{A}(\in\mathbb{C}-Cat)\) into \(A:\mathbb{I}\rightarrow\mathbb{C}-Cat\).
The functor \(ob:\mathbb{C}-Cat\to Set\) takes the cone \(\pi\) into a limiting cone
\[ob\pi:\Delta ob(\mathcal{L})\to ob\circ A\]
in the category \(Set\) of all sets, and it is assumed that there is a limiting cone
\[\pi_{a,b}:\Delta\mathcal{L}(a,b)\to A_{a,b}\]
in \(\mathbb{C}\), for each ordered pair \((a,b)\in ob(\mathcal{L})\times ob(\mathcal{L})\).
The universality of these cones, in \(Set\) and \(\mathbb{C}\) respectively, provides a unique function \(obL:ob(\mathcal{A})\to ob(\mathcal{L})\) and unique morphisms12\(L_{x,y}:\mathcal{A}(x,y)\rightarrow\mathcal{L}(L(x),L(y))\) for each pair of objects \(x,y\in ob(\mathcal{A})\). Hence, if \(L\) is a \(\mathbb{C}\)-functor it must be unique, and \(L=\underleftarrow{Lim}A\) in \(\mathbb{C}-Cat\).
Footnote 12: The notation is going to be simplified as mentioned before; for instance, \(L(x)\) means in fact \(ob(L(x))\).
\(L\) is compatible with the composition, that is,
\[L_{x,z}\circ M^{\mathcal{A}}_{x,y,z}=M^{\mathcal{L}}_{L(x),L(y),L(z)}\circ(L_ {y,z}\otimes L_{x,y})\]
for every \(x,y,z\in ob(\mathcal{A})\), because
\(\pi^{i}_{L(x),L(z)}\circ L_{x,z}\circ M^{\mathcal{A}}_{x,y,z}=\lambda^{i}_{x, z}\circ M^{\mathcal{A}}_{x,y,z}\)
\(=M^{\mathcal{A}_{i}}_{L(x)_{i},L(y)_{i},L(z)_{i}}\circ(\lambda^{i}_{y,z} \otimes\lambda^{i}_{x,y})\), since \(\lambda^{i}\) is a \(\mathbb{C}\)-functor
\(=M^{\mathcal{A}_{i}}_{L(x)_{i},L(y)_{i},L(z)_{i}}\circ(\pi^{i}_{L(y),L(z)} \otimes\pi^{i}_{L(x),L(y)})\circ(L_{y,z}\otimes L_{x,y})\), since \(\otimes\) is a bifunctor
\(=\pi^{i}_{L(x),L(z)}\circ M^{\mathcal{L}}_{L(x),L(y),L(z)}\circ(L_{y,z} \otimes L_{x,y})\), since (6) \(\pi^{i}\) is a \(\mathbb{C}\)-functor, for every \(i\in\mathbb{I}\).
\(L\) is also compatible with the identities, that is,
\[L_{x,x}\circ j^{\mathcal{A}}_{x}=j^{\mathcal{L}}_{L(x)},\]
for every \(x\in ob(\mathcal{A})\), because
\(\pi^{i}_{L(x),L(x)}\circ L_{x,x}\circ j^{\mathcal{A}}_{x}=\lambda^{i}_{x,x} \circ j^{\mathcal{A}}_{x}=j^{\mathcal{A}_{i}}_{L(x)_{i}}\), since \(\lambda^{i}\) is a \(\mathbb{C}\)-functor
\(=\pi^{i}_{L(x),L(x)}\circ j^{\mathcal{L}}_{L(x)}\), since (6) \(\pi^{i}\) is a \(\mathbb{C}\)-functor, for every \(i\in\mathbb{I}\).
This completes the proof that limits in \(\mathbb{C}-Cat\) may be calculated "hom-componentwise" in \(\mathbb{C}\).
**Remark 7.1**.: It is straightforward to confirm that, if \((\mathbb{C},\otimes,E,\alpha,\gamma,\rho)\) is a cartesian monoidal category then its derived monoidal category \(\mathbb{C}-Cat=(\mathbb{C}-Cat,\bigcirc,\mathfrak{E},\wedge,\Gamma,\mathfrak{I})\) is cartesian as well. Just check, in this section, how products are obtained in \(\mathcal{V}-Cat\), and compare with the definition of the derived monoidal structure in section 5, provided \(\otimes=\times\).
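As a worked special case of Proposition 7.1 combined with this remark (our illustration: take \(\mathbb{I}\) to be the discrete category with two objects, and assume the relevant limits exist in \(\mathbb{C}\)), the binary product of two \(\mathbb{C}\)-categories \(\mathcal{A}\) and \(\mathcal{B}\) is computed "hom-componentwise":

\[ob(\mathcal{A}\times\mathcal{B})=ob(\mathcal{A})\times ob(\mathcal{B}),\qquad(\mathcal{A}\times\mathcal{B})((a,b),(\bar{a},\bar{b}))=\mathcal{A}(a,\bar{a})\times\mathcal{B}(b,\bar{b}),\]

with composition and identities induced componentwise; for \(\otimes=\times\) this agrees with the derived tensor \(\mathcal{A}\bigcirc\mathcal{B}\) of section 5, which is the content of the remark.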
## 8. Simplicity, semi-left-exactness and the stable units property
In the present section we refer to a base monoidal reflection
\[(F,G,\eta,\varepsilon):\mathbb{C}\to\mathbb{X}\]
as in Definition 4.1, and such that \(\mathbb{C}\) has pullbacks.
Remember that, by definition, in a base monoidal reflection the right adjoint is the inclusion functor of a full and replete subcategory, and the counit is the identity.
The following Proposition 8.1 states that some properties of the left adjoint \(F\), concerning the preservation of certain kinds of pullback diagrams (which are relevant in Galois categorical theory), are inherited by the left adjoint \(\mathbb{F}\) in the derived base monoidal reflection.
Namely, going from the weaker to the stronger property, if \(F\dashv G\) is a _simple reflection_, or _semi-left-exact_ (also called _admissible_), or having _stable units_ in the sense of [2] (cf. [1]), then \(\mathbb{F}\dashv\mathbb{G}\) does have as well the respective property.
**Proposition 8.1**.: _Consider a category \(\mathbb{C}\) with pullbacks, a base monoidal reflection_
\[(F,G,\eta,\varepsilon):\mathbb{C}\to\mathbb{X}\]
_as in Definition 4.1, and its derived adjunction_
\[(\mathbb{F},\mathbb{G},\Theta,\Upsilon):\mathbb{C}-Cat\to\mathbb{X}-Cat.\]
_Then, if \(F\dashv G\) has stable units, if it is semi-left-exact or simple, so is \(\mathbb{F}\dashv\mathbb{G}\) respectively._
Proof.: Notice first that \(\mathbb{F}\dashv\mathbb{G}\) is also a base monoidal reflection (cf. Proposition 4.1), such that \(\mathbb{C}-Cat\) has pullbacks (cf. Proposition 7.1).
1. (stable units) Consider the diagram (8.1) in \(\mathbb{X}-Cat\), which is the image of a pullback diagram in \(\mathbb{C}-Cat\),
and such that \(\mathcal{A}_{3}\in\mathbb{X}-Cat\). We have to prove that this is also a pullback diagram in \(\mathbb{X}-Cat\), provided that the reflection \(F\dashv G\) has stable units (that is, if \(F\) preserves any pullback diagram in \(\mathbb{C}\) in which the right down object is in the subcategory \(\mathbb{X}\)).
As \(\mathbb{F}\) does not change the sets of objects, the image by \(ob:\mathbb{X}-Cat\to Set\) of diagram (8.1) is the same as the image by \(ob:\mathbb{C}-Cat\to Set\) of the diagram prior to the application of \(\mathbb{F}\), and therefore it is a pullback in \(Set\).
Now, in order to confirm that diagram (8.1) is a pullback in \(\mathbb{X}-Cat\), one needs only to prove that the following square (8.2) is a pullback, according to Proposition 7.1 and the definition of \(\mathbb{F}\), for every pair \(a,b\) of objects in \(\mathcal{L}\),
which is true since \(\mathcal{A}_{3}\in\mathbb{X}-Cat\) implies that \(\mathcal{A}_{3}(a_{3},b_{3})\in\mathbb{X}\), and diagram (8.2) is an image by \(F\) of a pullback diagram in \(\mathbb{C}\), being \(F\) the left-adjoint of a reflection into \(\mathbb{C}\) with stable units.
2. (semi-left-exact) Consider the pullback diagram in \(\mathbb{C}-Cat\)
where \(\Theta_{\mathcal{A}}\) is a unit morphism in the derived reflection.
One has to show that \(\mathbb{F}V:\mathbb{F}(\mathcal{L})\to\mathbb{F}\mathbb{G}(\mathcal{X})= \mathcal{X}\) is an isomorphism in \(\mathbb{X}-Cat\), provided the reflection \(F\dashv G\) is semi-left-exact (that is, \(F\) preserves the pullback diagrams in \(\mathbb{C}\) in which the down arrow is a unit morphism in the base reflection, and the right up object is in \(\mathbb{X}\)).
As the diagram is a pullback in \(\mathbb{C}-Cat\), \(obV\) is a bijection since \(ob\Theta_{\mathcal{A}}\) is the identity. Hence, \(ob\mathbb{F}V:ob(\mathbb{F}(\mathcal{L}))\to ob(\mathcal{X})\) is a bijection as well, because \(\mathbb{F}\) does not change the sets of objects. It remains to show, that the following diagram is a pullback in \(\mathbb{X}\), for every \(a,b\in ob(\mathcal{L})\),
[diagram omitted: the image under \(F\) of the hom-component pullback square, involving \(F(\mathcal{L}(a,b))\) and \(F(\mathcal{A}(a_{1},b_{1}))\)]
which is true since: it is the image by \(F\) of a pullback diagram in \(\mathbb{C}\) (according to Proposition 7.1), in which the down arrow is a unit morphism in the reflection \(F\dashv G\) and the right up object is in the subcategory \(\mathbb{X}\); hence, as by hypothesis \(F\dashv G\) is semi-left-exact, \(FV_{a,b}\) is an isomorphism in \(\mathbb{X}\) as required.
3. (simple) Consider the following diagram in \(\mathbb{C}-Cat\)
[diagram (8.3) omitted: a diagram in \(\mathbb{C}-Cat\) involving \(\mathcal{A}\), \(\mathcal{B}\) and \(\mathcal{C}\)]
It remains to show, in the diagram (8.3) immediately above, that the image by \(F\) of \(W_{a_{1},b_{1}}\) is an isomorphism in \(\mathbb{X}\), for every \(a,b\in ob(\mathcal{L})\); which is so since \(F\dashv G\) is a simple reflection to start with.
## 9. The category of all n-categories
Consider the category \(nCat\), with objects all \(n\)-categories and whose morphisms are the (strict) \(n\)-functors (see [7, SSXII.5]). Its definition is going to be stated in a way that suits our purposes.
Consider the category \(n\mathbb{P}\) generated by the following 2-precategory diagrams in the sense of [10, SS2], \(\frac{(n-1)n(n+1)}{6}\) in number, \(0\leq i<j<k\leq n\):13
Footnote 13: This number can easily be obtained by using the counting principle.
\[\begin{array}{c}\includegraphics[]{fig/ncat1.eps}\end{array} \tag{9.1}\]
* Each one of the three horizontal precategory diagrams will be called, respectively upwards, \(P^{ji}\), \(P^{ki}\) and \(P^{kji}\), \(0\leq i<j<k\leq n\).
* Each one of the three vertical precategory diagrams will be called, respectively from the left to the right, \(P^{kiji}\), \(P^{kj}\) and \(P_{i}\), \(0\leq i<j<k\leq n\).
The category \(nCat\) of all \(n\)-categories is the full subcategory of \(\hat{n\mathbb{P}}=Set^{n\mathbb{P}}\), determined by its objects \(C:n\mathbb{P}\to Set\) such that the image by \(C\) of each horizontal and vertical precategory diagram in (9.1) (\(0\leq i<j<k\leq n\)) is a category (cf. [10, SS2]).
* Being \(C:n\mathbb{P}\to Set\) a n-category, the morphisms of the categories \(C(P^{kji})\) and \(C(P^{kiji})\) will be called, respectively, vertically and horizontally composable pairs of k-cells with j and i-objects.
## 10. Effective descent morphisms in nCat
In section 9, if the category \(Set\) of sets is replaced by any category \(\mathcal{C}\) with pullbacks, then one obtains the definition of \(nCat(\mathcal{C})\), the category of internal \(n\)-categories in \(\mathcal{C}\). Hence, \(nCat=nCat(Set)\) for \(\mathcal{C}=Set\). If \(n=1\), \(Cat=1Cat=1Cat(Set)\) and \(\mathbb{P}=1\mathbb{P}\) is to be considered the category generated by the precategory diagram \(P^{10}\), corresponding to the composition of \(1\)-cells, which we will call either vertical or horizontal.
The second part of the ensuing Corollary 10.1 will be needed extensively in this text. Its proof follows trivially from the results in [10, SS3], after an obvious simple generalization.
**Corollary 10.1**.: _If \(\mathcal{C}\) has all limits then \(nCat(\mathcal{C})\) is closed under limits in the functor category \(\mathcal{C}^{n\mathbb{P}}\), \(n\geq 1\), where all limits exist and are calculated pointwise._
_In particular, for \(\mathcal{C}=Set\), \(nCat\) is closed under limits in \(\hat{n\mathbb{P}}=Set^{n\mathbb{P}}\)._
Consider again the category of all categories \(Cat\) and its full inclusion in the category of precategories \(\hat{\mathbb{P}}=Set^{\mathbb{P}}\).
A functor \(p:\mathbb{E}\to\mathbb{B}\) is an effective descent morphism (e.d.m.)14 in \(Cat=1Cat\) if and only if it is surjective on composable triples of morphisms. The proof of this statement can be found in [4, Proposition 6.2]. In a completely analogous way, a class of effective descent morphisms in \(nCat\) is going to be given in the following Proposition 10.1 (cf. [10, SS4], where e.d.m. in \(2Cat\) are obtained similarly for the special case \(n=2\)).
Footnote 14: Also called a _monadic extension_ in categorical Galois theory.
**Proposition 10.1**.: _A n-functor \(np:n\mathbb{E}\to n\mathbb{B}\) is an e.d.m. in the category of all n-categories \(nCat\) (\(n\geq 2\)) if it is surjective both on_
* _vertically composable triples of horizontally composable pairs of n-cells, and on_
* _horizontally composable triples of vertically composable pairs of n-cells,_ _for all_ \(i\) _and_ \(j\) _such that_ \(0\leq i<j<n\)_._15__ Footnote 15: See the following Example 10.1.
_Meaning that every composable triple of morphisms in the categories \(n\mathbb{B}(P^{ni})\) and \(n\mathbb{B}(P^{nj})\) is the image of some composable triple of morphisms in the categories \(n\mathbb{E}(P^{ni})\) and \(n\mathbb{E}(P^{nj})\), by the functors \(np(P^{ni}):n\mathbb{E}(P^{ni})\to n\mathbb{B}(P^{ni})\) and \(np(P^{nj}):n\mathbb{E}(P^{nj})\to n\mathbb{B}(P^{nj})\), respectively, which are restrictions of the n-functor \(np\) (\(0\leq i<j<n\))._
Proof.: Let \(np:n\mathbb{E}\to n\mathbb{B}\) be surjective on vertically/horizontally composable triples of horizontally/vertically composable pairs of n-cells,
\(0\leq i<j<n\). Then, \(np\) is an e.d.m. in \(\hat{n\mathbb{P}}=Set^{n\mathbb{P}}\), since the effective descent morphisms in a category of presheaves are simply those that are pointwise surjective (which, of course, is implied by either surjectivity on triples of composable n-cells, since k-cells are special "degenerate" n-cells, \(k\leq n\)). Hence, the following instance of [4, Corollary 3.9] can be applied:
if \(np:n\mathbb{E}\to n\mathbb{B}\) in \(nCat\) is an e.d.m. in \(\hat{n\mathbb{P}}=Set^{n\mathbb{P}}\) then \(np\) is an e.d.m. in \(nCat\) if and only if, for every pullback square
[pullback square omitted: a pullback involving \(n\mathbb{D}\), \(n\mathbb{E}\) and \(n\mathbb{B}\)]
**Example 10.1**.: _Then, for each of the above diagrams, build the n-category \(h\mathbf{4}\) as follows:_
_the structure for the k-cells with \(0\leq k<n\) is the same as in \(n\mathbb{B}\);_
_the n-cells constitute the smallest preorder on (n-1)-cells with the same initial and terminal (n-2)-cell such that_
_(1) the one diagram corresponding to the vertically/horizontally composable triples of horizontally/vertically composable pairs in question is in \(h\mathbf{4}\) and,_
_(2) with such preorder, \(h\mathbf{4}\) is in fact a n-category;_
_notice that there is a trivial preorder satisfying (1) and (2), the one which relates every ordered pair of (n-1)-cells with the same initial and terminal (n-2)-cell, and so the smallest preorder can be obtained intersecting all such preorders satisfying (1) and (2)._
_The n-category_
\[n\mathbb{E}=\coprod_{0\leq i<j<n}\coprod_{h\in H_{ij}}h\mathbf{4},\]
_such that \(H_{ij}\) is the set of all vertically/horizontally composable triples of horizontally/vertically composable pairs of n-cells in \(n\mathbb{B}\), with j-cells as arrows and i-cells as objects in \(n\mathbb{B}\)._
_Then, there is an e.d.m. \(np:n\mathbb{E}\to n\mathbb{B}\) which projects the corresponding copy of \(h\mathbf{4}\), for every \(h\in H_{ij}\), \(0\leq i<j<n\).17_
Footnote 17: Remark that \(h\mathbf{4}\) is really a n-preorder, as defined just below at the beginning of the following section 11.
## 11. The reflection of n-categories into n-preorders has stable units and a monotone-light factorization
Let \(nPreord\) be the full subcategory of \(nCat\) determined by the objects \(C:n\mathbb{P}\to\mathit{Set}\) such that \(Cd^{n(n-1)}\) and \(Cc^{n(n-1)}\) are jointly monic (cf. diagram (9.1)), that is,
\[C(P^{n(n-1)})=C(P_{n(n-1)})\ \begin{array}{c}\xrightarrow{\;Cq^{n(n-1)}\;}\\[-6pt]\xrightarrow{\;Cm^{n(n-1)}\;}\\[-6pt]\xrightarrow{\;Cr^{n(n-1)}\;}\end{array}\ C(P_{n})\ \begin{array}{c}\xrightarrow{\;Cd^{n(n-1)}\;}\\[-6pt]\xrightarrow{\;Cc^{n(n-1)}\;}\end{array}\ C(P_{n-1}) \tag{11.1}\]
is a preordered set.
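As a sanity check (our remark): for \(n=1\), the joint monicity of \(Cd^{10}\) and \(Cc^{10}\) says precisely that an arrow of \(C(P^{10})\) is determined by its domain and codomain, i.e.

\[\#Hom_{C(P^{10})}(x,y)\leq 1,\quad\text{for all }x,y\in C(P_{0}),\]

so \(1Preord\) is the category of preordered sets, recovering the setting of [9].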
There is a reflection

\[H\vdash I:nCat\to nPreord, \tag{11.2}\]

which is the n-dimensional counterpart of the well-known reflection \(Cat\to Preord\) from categories into preordered sets (see [9]).
#### Stable units
The reflection \(Cat\to Preord\) from categories into preorders is known to be a _base monoidal reflection_ in the sense of Definition 4.1, considering \(Cat\) equipped with its cartesian symmetric monoidal structure. It also has stable units in the sense of [2] (cf. section 8).
Therefore one can iterate it n times, obtaining exactly the reflection (11.2), also with stable units according to Proposition 8.1:
**Theorem 11.1**.: _The reflection \(H\vdash I:nCat\to nPreord\) has stable units, for every \(n\geq 1\)._
#### Monotone-light factorization for n-categories via n-preorders
**Theorem 11.2**.: _The reflection \(H\vdash I:nCat\to nPreord\) does have a monotone-light factorization, for every \(n\geq 1\)._
Proof.: The statement is a consequence of the central result of [1] (cf. Corollary 6.2 in [8]), because \(H\vdash I\) has stable units (cf. Theorem 11.1) and for every \(n\mathbb{B}\in nCat\) there is an e.d.m. \(np:n\mathbb{E}\to n\mathbb{B}\) with \(n\mathbb{E}\in nPreord\) (cf. Example 10.1).
In the following section 12, it will be proved that the monotone-light factorization system is not trivial, for every \(n\geq 1\). That is, it does not coincide with the reflective factorization system associated to the reflection of \(nCat\) into \(nPreord\).
## 12. Vertical and stably-vertical n-functors
In this section, a characterization will be given of the class of vertical morphisms \(\mathcal{E}_{I}\) in the reflective factorization system \((\mathcal{E}_{I},\mathcal{M}_{I})\), and of the class of the stably-vertical morphisms \(\mathcal{E}^{\prime}_{I}\) (\(\subseteq\mathcal{E}_{I}\))19 in the monotone-light factorization system \((\mathcal{E}^{\prime}_{I},\mathcal{M}^{*}_{I})\), both associated to the reflection \(I:nCat\to nPreord\). Then, since \(\mathcal{E}^{\prime}_{I}\) is a proper subclass of \(\mathcal{E}_{I}\), one concludes that \((\mathcal{E}^{\prime}_{I},\mathcal{M}^{*}_{I})\) is a non-trivial monotone-light factorization system.
Footnote 19: \(\mathcal{E}^{\prime}_{I}\) is the largest subclass of \(\mathcal{E}_{I}\) stable under pullbacks. The terminologies “vertical morphisms” and “stably-vertical morphisms” were introduced in [5].
Consider a n-functor \(f:A\to B\), which is obviously determined by the \(n+1\) functions \(f_{0}:A(P_{0})\to B(P_{0})\), \(f_{1}:A(P_{1})\to B(P_{1})\),..., \(f_{n}:A(P_{n})\to B(P_{n})\) (cf. diagram (9.1)), so that we may make the identification \(f=(f_{n},...,f_{1},f_{0})\).
**Proposition 12.1**.: _A n-functor \(f=(f_{n},...,f_{1},f_{0}):A\to B\) belongs to the class \(\mathcal{E}_{I}\) of vertical n-functors if and only if the following two conditions hold:_
1. \(f_{0}\)_,_ \(f_{1}\)_,...,_ \(f_{n-1}\) _are bijections;_
2. _for every two elements_ \(h\) _and_ \(h^{\prime}\) _in_ \(A(P_{n-1})\)_,_ _if_ \(Hom_{B(P^{n(n-1)})}(f_{n-1}h,f_{n-1}h^{\prime})\) _is nonempty then so is_ \(Hom_{A(P^{n(n-1)})}(h,h^{\prime})\)_._
Proof.: The n-functor \(f=(f_{n},...,f_{1},f_{0})\) belongs to \(\mathcal{E}_{I}\) if and only if \(If\) is an isomorphism (cf. [1, SS3.1]), that is, \(If_{0}\), \(If_{1}\),..., \(If_{n}\) are bijections. Since \(If_{0}=f_{0}\),..., \(If_{n-1}=f_{n-1}\), the fact that \(f\in\mathcal{E}_{I}\) implies and is implied by (1) and (2) is trivial.
**Proposition 12.2**.: _A n-functor \(f=(f_{n},...,f_{1},f_{0}):A\to B\) belongs to the class \(\mathcal{E}_{I}^{\prime}\) of stably-vertical n-functors if and only if the following two conditions hold:_
1. \(f_{0}\)_,_ \(f_{1}\)_,...,_ \(f_{n-1}\) _are bijections;_
2. _for every two elements_ \(h\) _and_ \(h^{\prime}\) _in_ \(A(P_{n-1})\)_,_ \(f\) _induces a surjection_ \(Hom_{A(P^{n(n-1)})}(h,h^{\prime})\to Hom_{B(P^{n(n-1)})}(f_{n-1}h,f_{n-1}h^{ \prime})\) _(_\(f\) _is a "full functor on n-cells")._
Proof.: As every pullback \(g^{*}(f)=\pi_{1}:C\times_{B}A\to C\) in \(nCat\) of \(f\) along any n-functor \(g:C\to B\) is calculated pointwise, and being \((f_{n},f_{n-1}):A(P^{n(n-1)})\to B(P^{n(n-1)})\) a stably-vertical functor for the reflection \(Cat\to Preord\), that is, \(f_{n-1}\) a bijection and \((f_{n},f_{n-1})\) a full functor (cf. Propositions 4.4 and 3.2 in [9]), then (1) and (2) imply that \(g^{*}(f)\) belongs to \(\mathcal{E}_{I}\) (cf. last Proposition 12.1).
Hence, \(f\in\mathcal{E}_{I}^{\prime}\) if (1) and (2) hold.
If \(f\in\mathcal{E}_{I}^{\prime}\), then \(f\in\mathcal{E}_{I}\) (\(\mathcal{E}_{I}^{\prime}\subseteq\mathcal{E}_{I}\)), and therefore (1) holds.
Suppose now that (2) does not hold, so that there is \(\theta:f_{n-1}h\to f_{n-1}h^{\prime}\) not in the image of \(f\); consider the n-category \(C\) generated20 by the diagram consisting of the single n-cell \(\theta\) between the parallel (n-1)-cells \(f_{n-1}h\) and \(f_{n-1}h^{\prime}\), and let \(g\) be the inclusion of \(C\) in \(B\). Then, \(C\times_{B}A\) contains the corresponding diagram on \(h\) and \(h^{\prime}\) with no non-identity n-cells, and so \(g^{*}(f)\) is not in \(\mathcal{E}_{I}\).
Footnote 20: Generated in the sense of Example 10.1, that is, \(C\) has the same structure as \(B\), with the exception of the n-cells, whose structure is the least one that includes the n-cell in question.
Hence, if \(f\in\mathcal{E}_{I}^{\prime}\) then (1) and (2) must hold.
It is evident that \(\mathcal{E}_{I}^{\prime}\) is a proper subclass of \(\mathcal{E}_{I}\), therefore the monotone-light factorization system \((\mathcal{E}_{I}^{\prime},\mathcal{M}_{I}^{*})\) is non-trivial (\(\neq(\mathcal{E}_{I},\mathcal{M}_{I})\)).
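A minimal example of a vertical n-functor that is not stably-vertical (our illustration, written for \(n=1\), where the two propositions reduce to the characterizations of [9]): consider the inclusion

\[A=\{x\xrightarrow{\;\theta_{1}\;}y\}\ \hookrightarrow\ B=\{x\ \overset{\theta_{1}}{\underset{\theta_{2}}{\rightrightarrows}}\ y\}\]

of the category freely generated by one arrow into the category freely generated by two parallel arrows. It is bijective on objects and every nonempty hom-set of \(B\) comes from a nonempty hom-set of \(A\), so it is vertical; but \(\theta_{2}\) is not in its image, so it is not full and hence not stably-vertical.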
## 13. Trivial coverings for n-categories via n-preorders
A n-functor \(f:A\to B\) belongs to the class \(\mathcal{M}_{I}\) of trivial coverings (with respect to the reflection \(H\vdash I:nCat\to nPreord\)) if and only if the following square

\[\begin{CD}A@>{\eta_{A}}>>I(A)\\@V{f}VV@VV{I(f)}V\\B@>{\eta_{B}}>>I(B)\end{CD}\]

is a pullback diagram, where \(\eta_{A}\) and \(\eta_{B}\) are unit morphisms for the reflection \(H\vdash I:nCat\to nPreord\) (cf. [2, Theorem 4.1]).
Since the pullback (as any limit) is calculated pointwise in \(nCat\) (cf. Corollary 10.1), then \(f\in\mathcal{M}_{I}\) if and only if the following squares are pullbacks, corresponding to the pointwise components of \(\eta_{A}\) and of \(\eta_{B}\) (cf. diagram (9.1)), \(0\leq l\leq n\) and \(0\leq i<j<k\leq n\):
The squares \((D_{1}),(D_{2}),...,(D_{n-1})\) are pullbacks since \(\eta_{A(P_{l})}\) and \(\eta_{B(P_{l})}\) are identity maps for \(l<n\) (cf. diagram (9.1) and the definition of the reflection \(H\vdash I:nCat\to nPreord\) in (11.2)).
Notice that if diagram (9.1) is restricted to the precategory diagram \(P^{n(n-1)}\), one obtains from (13.1) the following square in \(Cat\), with unit morphisms of the reflection of all categories into preorders \(Cat\to Preord\) (cf. [9]),
\[\begin{CD}A(P^{n(n-1)})@>{\eta_{A(P^{n(n-1)})}}>{}>I(A)(P^{n(n-1)})\\ f_{P^{n(n-1)}}@V{}V{}V@V{}V{}V\\ B(P^{n(n-1)})@>{\eta_{B(P^{n(n-1)})}}>{}>I(B)(P^{n(n-1)}).\end{CD} \tag{13.2}\]
It is known (cf. [9, Proposition 3.1]) that this square is a pullback in \(Cat\) if and only if, for every two objects \(h\) and \(h^{\prime}\) in \(A(P_{n-1})\) with \(Hom_{A(P^{n(n-1)})}(h,h^{\prime})\) nonempty, the map
\[Hom_{A(P^{n(n-1)})}(h,h^{\prime})\to Hom_{B(P^{n(n-1)})}(f_{n-1}h,f_{n-1}h^{ \prime})\]
induced by \(f\) is a bijection.
A necessary condition for the n-functor \(f\) to be a trivial covering was just stated; the following Lemma 13.1 will help to show that this necessary condition is also sufficient, in the next Proposition 13.1.
**Lemma 13.1**.: _Consider the following commutative parallelepiped_
(13.3)
_where the five squares \(c^{A}q^{A}=d^{A}r^{A}\), \(c^{B}q^{B}=d^{B}r^{B}\), \(Ic^{A}Iq^{A}=Id^{A}Ir^{A}\), \(If_{0}\eta_{A,0}=\eta_{B,0}f_{0}\) and \(If_{1}\eta_{A,1}=\eta_{B,1}f_{1}\) are pullbacks._
_Then, the square \(If_{2}\eta_{A,2}=\eta_{B,2}f_{2}\) is also a pullback.21_
Footnote 21: The notation used in diagram (13.3) is arbitrary and was chosen to make the application of Lemma 13.1 to this section easily understandable.
Proof.: The proof is obtained by an obvious diagram chase.
**Proposition 13.1**.: _A n-functor \(f:A\to B\) is a trivial covering for the reflection \(H\vdash I:nCat\to nPreord\) (in notation, \(f\in\mathcal{M}_{I}\)) if and only if, for every two objects \(h\) and \(h^{\prime}\) in \(A(P_{n-1})\) with \(Hom_{A(P^{n(n-1)})}(h,h^{\prime})\) nonempty, the map_
\[Hom_{A(P^{n(n-1)})}(h,h^{\prime})\to Hom_{B(P^{n(n-1)})}(f_{n-1}h,f_{n-1}h^{\prime})\]
_induced by \(f\) is a bijection._
Proof.: In the considerations just above, it was shown that the condition in the statement guarantees that the square \((D_{n})\) is a pullback, in addition to the fact that \((D_{0}),(D_{1}),...,(D_{n-1})\) are all pullbacks.
Then, all the other squares \((D_{ji}),(D_{kj}),(D_{ki})\) and \((D_{kji})\) must also be pullbacks according to Lemma 13.1, \(0\leq i<j<k\leq n\).
## 14. Coverings for n-categories via n-preorders
A n-functor \(f:A\to B\) belongs to the class \(\mathcal{M}_{I}^{*}\) of coverings (with respect to the reflection \(H\vdash I:nCat\to nPreord\)) if there is some effective descent morphism (also called monadic extension in categorical Galois theory) \(p:C\to B\) in \(nCat\) with codomain \(B\) such that the pullback \(p^{*}(f):C\times_{B}A\to C\) of \(f\) along \(p\) is a trivial covering (\(p^{*}(f)\in\mathcal{M}_{I}\)).
The following Lemma 14.1 can be found in [9, Lemma 4.2], in the context of the reflection of categories into preorders, but for n-categories via n-preorders the proof is exactly the same, since the same conditions hold (cf. Theorem 11.1 and Example 10.1). The next Proposition 14.1 characterizes the coverings for n-categories via n-preorders.
**Lemma 14.1**.: _A n-functor \(f:A\to B\) in \(nCat\) is a covering (for the reflection \(H\vdash I:nCat\to nPreord\)) if and only if, for every n-functor \(\varphi:X\to B\) over \(B\) from any n-preorder \(X\), the pullback \(X\times_{B}A\) of \(f\) along \(\varphi\) is also a n-preorder._
**Proposition 14.1**.: _A n-functor \(f:A\to B\) in \(nCat\) is a covering (for the reflection \(H\vdash I:nCat\to nPreord\)) if and only if it is faithful with respect to n-cells, that is, for every pair \(g,g^{\prime}\in A(P_{n-1})\), the map_
\[Hom_{A(P^{n(n-1)})}(g,g^{\prime})\to Hom_{B(P^{n(n-1)})}(f_{n-1}g,f_{n-1}g^{ \prime})\]
_induced by \(f\) is an injection._
Proof.: Consider again the n-preorder \(T\) generated22 by the diagram
Footnote 22: Cf. the footnote in the proof of Proposition 12.2.
\[a\ \begin{array}{c}\overset{h}{\Longrightarrow}\\[-2pt]\Downarrow\leq\\[-2pt]\underset{h^{\prime}}{\Longrightarrow}\end{array}\ a^{\prime}\]
If \(f\) is not faithful with respect to n-cells (in the sense of the statement), then, by including \(T\) in \(B\), one could obtain a pullback \(T\times_{B}A\) that is not a n-preorder.
Therefore, \(f\) is not a covering, by the previous Lemma 14.1.
Reciprocally, consider any n-functor \(\varphi:X\to B\) such that \(X\) is a n-preorder.
If \(f\) is faithful with respect to n-cells (in the sense of the statement), then the pullback \(X\times_{B}A\) is a n-preorder, given the nature of \(X\). Hence, \(f\) is a covering, by the previous Lemma 14.1.
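Summarizing Propositions 12.1, 12.2, 13.1 and 14.1: with \(f_{0},...,f_{n-1}\) required to be bijections in the first two cases, the four classes associated to the reflection \(H\vdash I:nCat\to nPreord\) are determined by the behaviour of the induced maps \(Hom_{A(P^{n(n-1)})}(h,h^{\prime})\to Hom_{B(P^{n(n-1)})}(f_{n-1}h,f_{n-1}h^{\prime})\) on n-cells:

\[\begin{array}{ll}\mathcal{E}_{I}\ (\text{vertical}):&\text{nonempty codomain implies nonempty domain;}\\ \mathcal{E}_{I}^{\prime}\ (\text{stably-vertical}):&\text{surjective;}\\ \mathcal{M}_{I}\ (\text{trivial coverings}):&\text{bijective whenever the domain is nonempty;}\\ \mathcal{M}_{I}^{*}\ (\text{coverings}):&\text{injective.}\end{array}\]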
## Acknowledgement
This work was supported by The Center for Research and Development in Mathematics and Applications (CIDMA) through the Portuguese Foundation for Science and Technology (FCT - Fundação para a Ciência e a Tecnologia), references UIDB/04106/2020 and UIDP/04106/2020.
|
2308.12028 | LKPNR: LLM and KG for Personalized News Recommendation Framework | Accurately recommending candidate news articles to users is a basic challenge
faced by personalized news recommendation systems. Traditional methods are
usually difficult to grasp the complex semantic information in news texts,
resulting in unsatisfactory recommendation results. Besides, these traditional
methods are more friendly to active users with rich historical behaviors.
However, they can not effectively solve the "long tail problem" of inactive
users. To address these issues, this research presents a novel general
framework that combines Large Language Models (LLM) and Knowledge Graphs (KG)
into semantic representations of traditional methods. In order to improve
semantic understanding in complex news texts, we use LLMs' powerful text
understanding ability to generate news representations containing rich semantic
information. In addition, our method combines the information about news
entities and mines high-order structural information through multiple hops in
KG, thus alleviating the challenge of long tail distribution. Experimental
results demonstrate that compared with various traditional models, the
framework significantly improves the recommendation effect. The successful
integration of LLM and KG in our framework has established a feasible path for
achieving more accurate personalized recommendations in the news field. Our
code is available at https://github.com/Xuan-ZW/LKPNR. | Chen hao, Xie Runfeng, Cui Xiangyang, Yan Zhou, Wang Xin, Xuan Zhanwei, Zhang Kai | 2023-08-23T09:39:18Z | http://arxiv.org/abs/2308.12028v1 | # Lkpnr: LLM and KG for Personalized News Recommendation Framework
###### Abstract
Accurately recommending candidate news articles to users is a basic challenge faced by personalized news recommendation systems. Traditional methods are usually difficult to grasp the complex semantic information in news texts, resulting in unsatisfactory recommendation results. Besides, these traditional methods are more friendly to active users with rich historical behaviors. However, they can not effectively solve the "long tail problem" of inactive users. To address these issues, this research presents a novel general framework that combines Large Language Models (LLM) and Knowledge Graphs (KG) into semantic representations of traditional methods. In order to improve semantic understanding in complex news texts, we use LLMs' powerful text understanding ability to generate news representations containing rich semantic information. In addition, our method combines the information about news entities and mines high-order structural information through multiple hops in KG, thus alleviating the challenge of long tail distribution. Experimental results demonstrate that compared with various traditional models, the framework significantly improves the recommendation effect. The successful integration of LLM and KG in our framework has established a feasible path for achieving more accurate personalized recommendations in the news field. Our code is available at [https://github.com/Xuan-ZW/LkPNR](https://github.com/Xuan-ZW/LkPNR).
## 1 Introduction
With the exponential growth of the Internet, an increasing number of individuals are opting to access the most current global news via online platforms, including MSN News. However, users often find themselves overwhelmed by the sheer volume of available news content. Consequently, the imperative for an effective news recommendation system becomes evident, as it serves as a crucial tool in assisting users to navigate through this vast information landscape. Such a system not only aids users in filtering copious amounts of news but also employs personalized algorithms to proactively present news items aligning with users' genuine interests, thereby significantly enhancing the fulfillment of their informational requirements (Wu et al., 2021).
The current research in this field primarily emphasizes the perspective of representation learning (Bengio et al., 2012), aiming to enhance the learning of user and news representations separately. To illustrate this, we take the MIND dataset (Wu et al., 2020) as a case study. The user's behavioral data within this dataset comprises a sequence of historical clicks on news items, each of which is composed of a title, abstract, category and other related information. Earlier studies, exemplified by NAML (Wu et al., 2019) and NRMS (Wu et al., 2019), have sought to learn improved news representations; these methods often leverage CNNs and LSTMs for feature extraction, coupled with feature fusion using attention mechanisms. However, such existing studies exhibit certain limitations. One is insufficient news text feature extraction, which hampers the effectiveness of the resulting news representations. Another is the neglect of news popularity and interconnections, which can lead the news recommender system into the challenging scenario of the long-tail problem.
With the advent of ChatGPT, the realm of natural language processing has entered the era of Large Language Models (LLMs). Notably, significant models like ChatGLM2 (Zeng et al., 2022), LLAMA2 (Touvron et al., 2023), and RWKV (Peng et al., 2023) have surfaced within the open-source community. LLMs, having undergone pre-training on vast corpora of textual data, exhibit the ability to swiftly adapt to the data distribution of downstream tasks. Leveraging their exceptional language modeling proficiency, LLMs adeptly uncover intricate linguistic relationships and semantic nuances inherent within the
Figure 1: Differences between traditional news recommendation model (a) and LkPNR framework (b)
text. This capacity allows for a more robust contextual integration, thereby augmenting text comprehension and facilitating the extraction of information-rich semantic features.
The long-tail problem (Yin et al., 2012) in recommendation systems is that a significant majority, approximately 80%, of user clicks are concentrated on a mere 20% of popular items. Consequently, this tendency results in recommendation systems favoring these popular items, often overlooking less popular ones, which, over time, detrimentally impacts the overall effectiveness of recommendations. To address this long-tail challenge, recent research (Guo et al., 2020; Wang et al., 2019) has explored the incorporation of knowledge graphs (KGs) as supplementary information for recommender systems. This innovative approach leverages graph-based learning (Ouyang et al., 2021) to establish meaningful relationships among diverse items within the system. Subsequently, this method harnesses additional item-specific information to enhance the representation of long-tailed items, effectively mitigating the issue of inadequate representation learning for such items.
To address the aforementioned challenges, we introduce LKPNR (LLM and KG for Personalized News Recommendation framework), a personalized news recommendation framework that links general news recommendation models with the integration of KGs and LLMs. Capitalizing on the robust text comprehension abilities of the LLM, we generate news representations imbued with comprehensive semantic information for each news item, thereby mitigating the shortcomings associated with the limited feature extraction capabilities inherent in general news recommendation models. Concurrently, the incorporation of subgraph structural representations, mined through multi-hop inference within KG, serves to alleviate the issue of long-tail distribution prevalent in news recommendation. By harnessing the strengths of both LLM and KG, we observe a substantial enhancement in the model's performance. The differences between our proposed framework and the general news recommendation model are shown in Figure 1. To summarize, our contributions are listed as follows:
* We propose LKPNR, a personalized news recommendation framework that fuses general news recommendation models with KG and LLM. To the best of our knowledge, this is the first work that combines both KG and LLM in the news recommendation domain.
* LKPNR can be flexibly combined with various general news recommendation models. Leveraging LLM's powerful text understanding and the news graph structural relationships contained in the KG to inject additional information into general news recommendation models.
* Experiments on the MIND dataset show that LKPNR can significantly improves the performance of general news recommendation models.
## 2 Related Work
### General News Recommendation Model
General news recommendation models typically involve encoding various aspects of news, such as the title and abstract, independently. These encoded representations are then interacted with separately to create a comprehensive news representation. In a similar manner, historical news browsing sequences are encoded and integrated to form a user representation. Subsequently, the similarity between this user representation and the representations of candidate news items is computed, enabling the prediction of whether the user would find these candidates interesting or not. Okura et al. (2017) extract news features through a denoising autoencoder, while user sequences are modeled with an RNN to obtain user features. Lian et al. (2018) proposed DFM, using multi-channel inception blocks and an attention mechanism to tackle the issue of data diversity. An et al. (2019) employed GRUs to model both long-term and short-term interests, resulting in improved user representations. Wu et al. extended the attention-mechanism paradigm (Vaswani et al., 2017) in several ways (Wu et al., 2019, 2020). Wang et al. (2020) employ fine-grained interest matching using dilated convolution and 3D convolution techniques.
### LLM-Powered News Recommendation Model
With the remarkable performance exhibited by LLMs across diverse domains, researchers have embarked on an exploration of their potential within the recommendation domain. Kang et al. (2023) conducted a comparative study encompassing traditional collaborative filtering methods and LLMs for user rating prediction, examining zero-shot, few-shot, and fine-tuned scenarios; their research revealed intriguing insights. Likewise, Hou et al. (2023) devise various prompts to address the ranking predicament in recommender systems. Gao et al. (2023) transform user profiles and historical interactions into prompts, leveraging ChatGPT's in-context learning capabilities for recommendation.
The above methods utilize LLMs' in-context learning capabilities by constructing prompts to cope with downstream tasks in the general recommendation domain, but their performance is far inferior to that of traditional ID embedding-based fine-tuning methods. In the news recommendation domain, LLMs are also beginning to be combined with traditional models. Liu et al. (2023) use ChatGPT to generate user profiles and augment the news text, combining it with a traditional recommendation model to achieve better results. Li et al. (2023), building on a traditional recommendation model, generated news representations by directly using an LLM as the news encoder to complete the news recommendation.
### KG-powered News Recommendation Model
Recommendation systems predominantly operate within a graph structure due to the information-rich nature of the data, which has led to the widespread adoption of GNNs within the realm of news recommendation. Wang et al. (2018) proposed DKN, which constructs a graph of the entities within the input news and uses it to seek neighboring nodes that expand the news information. Another significant contribution, by Mao et al. (2021), involves the integration of a GCN with the user's browsing history, leveraging both LSTM and attention mechanisms to extract textual information. Yang et al.[23] employ a gated GNN, combining global representations from other users with local representations to enhance the personalized recommender system. Furthermore, several research endeavors aim to amplify representation and inference capabilities by linking LLMs and KGs. Sun et al.[24] utilized an attention mechanism to facilitate interaction between all tokens of the input text and KG entities, thus augmenting the output of the language model with domain-specific knowledge from the KG.
It is evident that the KG within the news domain holds substantial value in terms of structured information and domain-specific knowledge. Its inherent incompleteness and limited language understanding are effectively supplemented by the extensive general knowledge encapsulated within LLM. Notably, there is currently a dearth of work that combines LLM and KG within the news recommendation field. Therefore, we propose LKPNR as a solution to bridge this gap.
## 3 Problem Formulation
The proposed personalized news recommendation system takes as input the user's news clicks and the KG of the entities within the news. The user's news clicks are a set of user-news pairs, denoted as \(\mathcal{C}=\{(u,n)\}\subseteq\mathcal{U}\times\mathcal{I}\), where \(\mathcal{U}\) is the user set and \(\mathcal{I}\) is the news set. The KG of the entities contained in the news set \(\mathcal{I}\) is a set of triples \(\mathcal{G}=\{(h,r,t)\}\subseteq\mathcal{E}\times\mathcal{R}\times\mathcal{E}\), where \(h,t\in\mathcal{E}\) represent the head and tail entities, and \(r\in\mathcal{R}\) represents the relation between them. Given the training click set \(\mathcal{H}\) and test click set \(\mathcal{T}\), where \(\mathcal{H}\cup\mathcal{T}=\mathcal{C}\) and \(\mathcal{H}\cap\mathcal{T}=\phi\), the recommendation task can be formulated as learning a matching function \(\mathcal{F}\) from the training set \(\mathcal{H}\) and predicting the degree of user-news matching in the test set \(\mathcal{T}\) to obtain the score \(\hat{y}\).
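As a minimal sketch of this formulation (the inner-product form of the matching function and all identifier names are our assumptions; the framework's actual Click Predictor component is only named in the next section):

```python
import torch

def match_score(user_repr: torch.Tensor, news_repr: torch.Tensor) -> torch.Tensor:
    """Degree of user-news matching for a batch of (u, n) pairs.

    The matching function is assumed here to be a simple inner product
    between the user representation and the news representation.
    """
    return (user_repr * news_repr).sum(dim=-1)  # one predicted score per pair
```

The scores produced on pairs from \(\mathcal{H}\) drive training, and the same function is applied to pairs from \(\mathcal{T}\) at evaluation time.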
## 4 Framework
The overall recommendation system framework consists of three components: the LLM and KG Augmented News Encoder (LK-Aug News Encoder), the LLM and KG Augmented User Encoder (LK-Aug User Encoder), and the Click Predictor. The overall recommendation framework is illustrated in Figure 2
### LK-Aug News Encoder
The LK-Aug News Encoder is composed of three sub-modules: General News Encoder, LLM-Augmented Encoder and KG-Augmented Encoder.
General News Encoder.The general news encoder is designed to learn a semantic representation of news at the word level, achieved through a structure that typically consists of two layers: a word embedding layer and a word fusion layer. The word embedding layer utilizes an embedding matrix \(E\in R^{(|v|\times d)}\) to convert each word \(w\) occurring in the news title and abstract into an embedding vector \(e\). Here, \(|v|\) denotes the vocabulary size and \(d\) denotes the dimension of the word embeddings. The fusion layer interacts the embedding vectors of individual words via operations such as linear mapping, weighted summation, and vector concatenation; these operations fuse the vectors to produce a generic representation vector \(r_{GNE}\) for the given news.
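A minimal sketch of such a general news encoder (hypothetical names and dimensions; the additive-attention fusion shown is only one of the fusion operations listed above):

```python
import torch
import torch.nn as nn

class GeneralNewsEncoder(nn.Module):
    """Word embedding layer followed by an attention-based word fusion layer."""

    def __init__(self, vocab_size: int, emb_dim: int):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim)  # E in R^(|v| x d)
        self.att = nn.Sequential(
            nn.Linear(emb_dim, emb_dim), nn.Tanh(), nn.Linear(emb_dim, 1))

    def forward(self, word_ids: torch.Tensor) -> torch.Tensor:
        # word_ids: [batch, num_words] token ids from title and abstract
        e = self.embedding(word_ids)                             # [batch, num_words, d]
        alpha = torch.softmax(self.att(e).squeeze(-1), dim=-1)   # per-word weights
        return (alpha.unsqueeze(-1) * e).sum(dim=1)              # r_GNE: [batch, d]
```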
LLM-Aug Encoder.We leverage the powerful contextual understanding capability and rich open-world knowledge of LLMs to build news representations with comprehensive semantic information for the input news, addressing the limited text feature extraction capability of general news recommendation models. The input news text is created by concatenating the news title, news abstract, category and sub-category with [SEP], as shown in Figure 2.
Figure 2: The framework of LKPNR. The lower left corner of figure is the input news data into LKPNR, where the words marked in yellow indicates the entities. Candidate news is encoded by LK-Aug news encoder that combines LLM and KG with traditional general news encoder to obtain news representation. To obtain user representation, LK-Aug User Encoder contains several LK-Aug News Encoders, which encodes user’s historical click behaviors.
\[S_{HS}=LLM(t) \tag{1}\]
\[S_{HS\_p}=mean\_pooling(S_{HS}) \tag{2}\]
where \(LLM(\cdot)\) is the LLM decoder which returns the hidden states of the last four decoder layers, denoted \(S_{HS}\), and \(S_{HS\_p}\) denotes the output after applying mean pooling over the sequence length dimension of each layer. After that, \(S_{HS\_p}\) is weighted and summed to obtain \(S_{W}\):
\[S_{W}=\sum_{i=1}^{4}(a_{i}S_{HS\_p}^{i}) \tag{3}\]
where \(a_{i}\) denotes the learnable weights and \(S_{HS\_p}^{i}\) denotes the pooled hidden states of the \(i\)-th layer. Finally, the weighted hidden state \(S_{W}\) is projected into the text representation space by a nonlinear mapping.
\[r_{LLM}=\sigma(f_{l}(S_{W})) \tag{4}\]
where \(\sigma\) denotes the activation function, \(f_{l}\) denotes the linear transformation, and \(r_{LLM}\) is the enhanced news representation.
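For concreteness, a minimal PyTorch sketch of Eqs. (1)–(4) follows. The class and variable names, the HuggingFace-style `output_hidden_states` interface, and the choice of sigmoid for \(\sigma\) are our illustrative assumptions, not the released implementation; the 500-dimensional projection follows the implementation details given later.

```python
import torch
import torch.nn as nn

class LLMAugEncoder(nn.Module):
    """Sketch of the LLM-Augmented Encoder (Eqs. 1-4)."""
    def __init__(self, llm, hidden_dim, out_dim=500, num_layers=4):
        super().__init__()
        self.llm = llm  # frozen LLM decoder
        self.num_layers = num_layers
        # learnable layer weights a_i of Eq. (3)
        self.layer_weights = nn.Parameter(torch.ones(num_layers) / num_layers)
        self.proj = nn.Linear(hidden_dim, out_dim)  # f_l of Eq. (4)

    def forward(self, input_ids, attention_mask=None):
        with torch.no_grad():  # the LLM is kept frozen
            out = self.llm(input_ids, attention_mask=attention_mask,
                           output_hidden_states=True)
        # hidden states of the last four decoder layers -> S_HS
        hs = out.hidden_states[-self.num_layers:]
        # mean pooling over the sequence dimension -> S_HS_p (Eq. 2)
        pooled = torch.stack([h.mean(dim=1) for h in hs])  # (4, B, hidden)
        # learnable weighted sum over layers -> S_W (Eq. 3)
        s_w = (self.layer_weights.view(-1, 1, 1) * pooled).sum(dim=0)
        # nonlinear projection to the text representation space (Eq. 4)
        return torch.sigmoid(self.proj(s_w))  # r_LLM, (B, out_dim)
```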
KG-Aug Encoder.We feed the enhanced news representation \(r_{LLM}\) into a nonlinear layer, which transforms \(r_{LLM}\) from the textual representation space to the entity representation space.
\[q=\sigma(f_{s}(r_{LLM})) \tag{5}\]
where \(f_{s}\) is a linear transformation. The query \(q\) will have extensive interactions with the entities in the KG.
Generally, the title and abstract of a piece of news will contain several source entities. Considering the \(1,2,...,n\)-hop adjacent entities of these source entities, we can extract a subgraph \(g^{n}=(V,R)\) from the external KG, where \(V\) is the set of entities in the subgraph and \(R\) is the set of edges connecting them. Taking the \(k\)-th hop adjacent entity set as an example, \(V^{k}=\{v_{i}^{k}\}_{i=1}^{M}\) represents the \(M\) entities that are \(k\) hops from the source entities. We then obtain the embedding vector of each entity from the wiki KG, denoted as \(X^{k}=\{x_{i}^{k}\}_{i=1}^{M}\). Given a query \(q\) and a neighboring entity set \(X^{k}\) with hop count \(k\), the KG-Augmented Encoder interacts the query with each vector \(x_{i}^{k}\) in \(X^{k}\) to generate a news attention score for the entity, denoted as \(\alpha_{i}^{k}\).
\[\alpha_{i}^{k}=Softmax(W_{k}^{T}[q;x_{i}^{k};q\circ x_{i}^{k}]) \tag{6}\]
where \(W_{k}^{T}\in\mathbb{R}^{3l\times 1}\) is a learnable parameter matrix, \(\circ\) is element-wise multiplication, and \([;]\) denotes vector concatenation.
The weighted representation \(\hat{x}^{k}\) of a \(k\)-hop entity can be computed as:
\[\hat{x}^{k}=\sum_{i=1}^{M}\alpha_{i}^{k}x_{i}^{k} \tag{7}\]
After that, we concatenate the weighted representation vectors of all hops and project the concatenated vector into the news representation space, denoted as the KG representation \(r_{KG}\)
\[r_{KG}=Q^{T}[\hat{x}^{1};...;\hat{x}^{n}] \tag{8}\]
where \(Q^{T}\in\mathbb{R}^{nl\times o}\), \(nl\) denotes the dimension after concatenating the representation vectors of each hop, and \(o\) represents the dimension of the projected KG representation.
The final news representation vector \(r_{n}\) is obtained by concatenating the three news representations above.
\[r_{n}=[r_{GNE};r_{KG};r_{LLM}] \tag{9}\]
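A minimal sketch of Eqs. (5)–(8) follows. The module interface, the assumption that neighboring entities arrive as per-hop embedding matrices, and the use of sigmoid for \(\sigma\) are ours.

```python
import torch
import torch.nn as nn

class KGAugEncoder(nn.Module):
    """Sketch of the KG-Augmented Encoder (Eqs. 5-8)."""
    def __init__(self, text_dim, ent_dim, n_hops, out_dim):
        super().__init__()
        self.to_entity_space = nn.Linear(text_dim, ent_dim)           # f_s
        self.att = nn.Linear(3 * ent_dim, 1, bias=False)              # W_k
        self.proj = nn.Linear(n_hops * ent_dim, out_dim, bias=False)  # Q

    def forward(self, r_llm, hop_entities):
        # hop_entities: list of n tensors X^k, each of shape (M_k, ent_dim)
        q = torch.sigmoid(self.to_entity_space(r_llm))                # Eq. (5)
        hop_reprs = []
        for x_k in hop_entities:
            q_exp = q.expand_as(x_k)
            # per-entity attention score alpha_i^k (Eq. 6)
            feats = torch.cat([q_exp, x_k, q_exp * x_k], dim=-1)
            alpha = torch.softmax(self.att(feats).squeeze(-1), dim=0)
            # weighted hop representation x_hat^k (Eq. 7)
            hop_reprs.append((alpha.unsqueeze(-1) * x_k).sum(dim=0))
        # concatenate hops and project to the news space (Eq. 8)
        return self.proj(torch.cat(hop_reprs, dim=-1))                # r_KG
```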
### LK-Aug User Encoder
The LK-Aug User Encoder learns representations of users based on their click history on news. This module includes a News Embedding Layer (NEL) and a Representation Fusion Layer (RFL). The NEL obtains the representation of the news browsed by the user through the LK-Aug News Encoder, denoted as \([r_{n}^{1},r_{n}^{2},...,r_{n}^{z}]\), where \(z\) represents the length of the historical browsing news sequence. The RFL transforms news representation sequences into user representations \(r_{u}\) through a series of fusion methods, such as concatenation, mapping, attention, etc.
\[[r_{n}^{1},r_{n}^{2},...,r_{n}^{z}]=NEL(h_{1},h_{2},...,h_{z}) \tag{10}\]
\[r_{u}=RFL([r_{n}^{1},r_{n}^{2},...,r_{n}^{z}]) \tag{11}\]
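Since the RFL is inherited from the underlying baseline, the following sketch shows only one plausible instantiation, additive attention pooling; the layer sizes are our assumptions.

```python
import torch
import torch.nn as nn

class AttentionRFL(nn.Module):
    """One plausible Representation Fusion Layer: additive attention
    pooling over the clicked-news representations (Eq. 11)."""
    def __init__(self, news_dim, att_dim=200):
        super().__init__()
        self.score = nn.Sequential(
            nn.Linear(news_dim, att_dim), nn.Tanh(),
            nn.Linear(att_dim, 1, bias=False))

    def forward(self, news_reprs):
        # news_reprs: (z, news_dim), one row per historical click
        weights = torch.softmax(self.score(news_reprs), dim=0)  # (z, 1)
        return (weights * news_reprs).sum(dim=0)                # r_u
```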
### Click Predictor and Model Training
Given candidate news and user representations \(r_{n}\) and \(r_{u}\), this module is used to compute the matching score between user \(u\) and the candidate news \(n\). We compute the dot product \(\hat{y}_{n,u}\) of \(r_{n}\) and \(r_{u}\) as the unnormalized matching score between the user and the news.
\[\hat{y}_{n,u}=\langle r_{n},r_{u}\rangle \tag{12}\]
Following previous general work, we use a negative sampling strategy for model training. For the \(i\)-th news exposure, we compute its unnormalized matching score as \(\hat{y}_{i}^{+}\). In addition, we randomly select \(K\) pieces of news that the user did not click, whose unnormalized matching scores are \([\hat{y}_{i,1}^{-},\hat{y}_{i,2}^{-},...,\hat{y}_{i,K}^{-}]\). We employ the softmax function to calculate the normalized matching score.
\[p_{i}=\frac{exp(\hat{y}_{i}^{+})}{exp(\hat{y}_{i}^{+})+\sum_{j=1}^{K}exp(\hat{ y}_{i,j}^{-})} \tag{13}\]
In this way, the click prediction problem can be formulated as a \((K+1)\)-way classification problem. For the training set \(\mathcal{H}\), the loss function \(\mathcal{L}\) for model training is the negative log-likelihood of all positive samples, which can be described as follows:
\[\mathcal{L}=-\sum_{i\in\mathcal{H}}log(p_{i}) \tag{14}\]
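Eqs. (13)–(14) reduce to a standard cross-entropy with the positive sample as class 0, as the following sketch illustrates (the function and variable names are ours):

```python
import torch
import torch.nn.functional as F

def training_loss(pos_scores, neg_scores):
    """Negative-sampling loss of Eqs. (13)-(14).

    pos_scores: (B,)   unnormalized score of the clicked news per impression
    neg_scores: (B, K) scores of the K sampled non-clicked news
    """
    logits = torch.cat([pos_scores.unsqueeze(1), neg_scores], dim=1)  # (B, K+1)
    # the positive sample is always class 0, so (K+1)-way cross-entropy
    # equals the mean of -log p_i from Eq. (13)
    labels = torch.zeros(logits.size(0), dtype=torch.long, device=logits.device)
    return F.cross_entropy(logits, labels)

# usage with random stand-in scores: B=64 impressions, K=4 negatives
loss = training_loss(torch.randn(64), torch.randn(64, 4))
```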
## 5 Experiments
### Dataset
We conduct experiments on the MIND dataset, which is a dataset constructed based on user click logs on the MSN online news site. We use the same processing method as Mao et al. We randomly sample 200K user click logs from the training and validation sets of the MIND dataset. Given the absence of labels for the test set, we partition the original validation set into two distinct segments: the experimental validation set and the experimental test set. The specific information of the constructed sampled dataset is shown in Table 2.
### Implementation Details and Metrics
We select NAML and NRMS as our baseline models, setting the maximum sequence length for user browsing history to 50 and adopting a negative sampling rate of \(K=4\). We keep the parameter configurations of the General News Encoder identical to the baseline. In the LLM-Augmented Encoder, the hidden states of the LLM are projected to 500 dimensions. For the KG-Augmented Encoder, the dimension of entity embeddings is set to 100. Furthermore, we limit the maximum number of neighboring nodes to 20 per source node, and the traversal depth is constrained to a maximum of 2 hops. Throughout the training process, the learning rate is 1e-4, the batch size is set to 64, and an early-stopping strategy is employed. All experiments are conducted on NVIDIA TESLA V100 GPUs.
To evaluate the model's performance, we use four widely recognized evaluation metrics, specifically, the Area Under the ROC Curve (AUC), Mean Reciprocal Rank (MRR), and Normalized Discounted Cumulative Gain (nDCG@5 and nDCG@10).
### Performance Comparison
We apply LKPNR on top of two baseline models. To further validate the efficacy of our framework design, we conduct ablation experiments; the results are summarized in Table 1.
The experimental results show that the two baseline models improve significantly on all evaluation metrics (+2.47%/1.81% AUC, +1.76%/1.66% MRR, +2.25%/1.95% nDCG@5, and +2.08%/1.83% nDCG@10 relative to the NRMS/NAML performance) with the augmentation of our framework. This performance improvement derives from the fact that the news encoder of the baseline gains better performance through the enhanced semantic information of the LLM and the collaborative information of the KG. As depicted in Table 1, discernible decrements in performance are observed across the spectrum of ablation variants compared to our complete model, which demonstrates the efficacy of the different components. The removal of the LLM has a substantial impact on the overall performance of the model, demonstrating the effectiveness of augmenting semantic information for news representation.
### Performance of Different LLM
The characteristics of LLMs can vary due to inconsistencies in the proportion of data categories within the training data and in model structural design. Consequently, these differences lead to varying capabilities among LLMs, reflected in their open-world knowledge, performance on diverse tasks, and so on, which leads to their distinct understanding of text. For instance, ChatGLM2 (Zeng et al., 2022) is trained on equal amounts of Chinese and English corpora and can handle Chinese and English tasks with high quality. LLAMA2 (Touvron et al., 2023), trained on an extensive, high-quality English dataset, demonstrates adeptness in handling various English tasks. RWKV (Peng et al., 2023) exhibits quicker inference speed and lower computational complexity. In order to explore the impact of various LLMs on news recommendation, we employ three cutting-edge models, ChatGLM2, LLAMA2, and RWKV, to generate enhanced news representations on the basis of the baseline model NRMS. Table 3 shows the performance comparison of the enhanced news representations of different LLMs in the recommendation task.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{Metric} & \multicolumn{4}{c|}{NRMS} & \multicolumn{4}{c|}{NAML} \\ \cline{2-9} & Orig. & LKPNR & w/o KG & w/o LLM & Orig. & LKPNR & w/o KG & w/o LLM \\ \hline AUC & 0.6802 & 0.7049 & 0.6997 & 0.6845 & 0.6842 & 0.7023 & 0.6995 & 0.6841 \\ \hline MRR & 0.3316 & 0.3492 & 0.3441 & 0.3308 & 0.3257 & 0.3423 & 0.3411 & 0.3319 \\ \hline nDCG@5 & 0.3661 & 0.3886 & 0.3846 & 0.3680 & 0.3621 & 0.3816 & 0.3810 & 0.3699 \\ \hline nDCG@10 & 0.4306 & 0.4514 & 0.4453 & 0.4310 & 0.4257 & 0.4440 & 0.4426 & 0.4325 \\ \hline \end{tabular}
\end{table}
Table 1: Performance comparison results (Orig. denotes the general news recommendation baseline, LKPNR denotes the baseline augmented with LKPNR, w/o KG removes the KG-Augmented Encoder, w/o LLM removes the LLM-Augmented Encoder)
\begin{table}
\begin{tabular}{c c|c c} \hline \# users & 20000 & \# users in train set & 189580 \\ \hline \# news & 78602 & \# news in train set & 75963 \\ \hline \# training logs & 595186 & \# training samples & 905297 \\ \hline \end{tabular}
\end{table}
Table 2: Statistics of the MIND-200K dataset
The outcomes of our experiments utilizing these three diverse LLMs are detailed in Table 3, and the results indicate that ChatGLM2 provides the most effective enhancement for news recommendation compared to both LLAMA2 and RWKV. A likely reason for this is that the training data of ChatGLM2 contains a certain proportion of English news.
### Effectiveness of KG Entity's Query
We employ comparative experiments to ascertain optimal strategies for retrieving and integrating adjacent entities more efficiently. The incorporation of neighboring entity vectors into the news encoding process can be perceived as a mechanism that augments the collaborative information. By fusing vectors of neighboring entities, the KG-Augmented Encoder is able to gather information from multiple related entities and synergistically integrate them into a single encoded representation. The crux of enhancing news representation through the KG lies in the efficient extraction of information from all neighboring entities. In the KG-Augmented Encoder, the query is obtained by projecting the textual representation of news into the entity representation space. We implemented an experiment to compare the retrieval performance of queries mapped from the General News Encoder representation with queries mapped from the LLM representation. In addition, direct weighted summation of all entities at each hop may lead to relatively large information loss. We use multi-head queries to consider different representation spaces, i.e., multiple weighted summations of all entities with different weights can capture collaborative information from different perspectives of the news and entities, thus improving the representational capability of the KG-Augmented Encoder. The detailed experimental results are shown in Figure 3.
The experimental results show that the LLM-mapped query outperforms the General News Encoder-mapped query in retrieval performance. Compared with the news representation produced by the General News Encoder, which is limited to the specifics of the news text, the news representation of the LLM contains some open-world knowledge about the news, and is thus able to understand the information of neighboring entities more effectively. In addition, with num_head = 3, the LLM demonstrates its highest proficiency in mapping queries to extract information. This suggests that expanding the representation space of the query vector enhances its capability to gather entity information.
## 6 Case Study
In this section, we demonstrate the characteristics of the LLM and KG Augmented News Encoder using visualization. Figure 4 shows a user's click order of historical news and a current news candidate, as well as the detailed categories, subcategories, titles and abstracts of these news items. Before the integration of the KG, the user representation was derived from the text and category features of these historical news clicks. This candidate news has little correlation with any historical news clicked by the user, and the matching degree between the candidate news and the user vector is also very low. After KG integration, adjacent nodes are fully considered. Figure 4 contains the source and neighboring entities of the candidate news, and the entities appearing in the historical click news are highlighted in yellow. Figure 5 shows the attention of a query to neighboring nodes, and in Figure 4, the five nodes with the highest attention are highlighted in blue. Although this candidate news has a relatively low correlation with the user's historical click news, it has a large number of adjacent nodes with a high correlation with the historical click news. The historical news sequence contains a number of U.S. locations and celebrities, and the candidate news also includes many of these entities among its adjacent nodes. This shows that the candidate news has a deep potential connection with the historical click news, and the matching degree is greatly improved after considering the neighbor nodes.
The LLM and KG Augmented News Encoder considers potential connections between candidate news and the user's historical clicked news, which pushes user–candidate matching beyond a pure understanding of the news text.
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline \multirow{2}{*}{Metrics} & \multicolumn{3}{c|}{LKPNR} \\ \cline{2-4} & ChatGLM2-6B & LLAMA2-13B & RWKV-7B \\ \hline AUC & 0.7049 & 0.6845 & 0.6771 \\ \hline MRR & 0.3492 & 0.3307 & 0.3300 \\ \hline nDCG@5 & 0.3886 & 0.3657 & 0.3631 \\ \hline nDCG@10 & 0.4514 & 0.4307 & 0.4218 \\ \hline \end{tabular}
\end{table}
Table 3: The performance comparison of the enhanced news representations of different LLMs
Figure 3: Comparison of different queries under different num heads (General_query denotes the General News Encoder mapping query, LLM_query denotes the LLM mapping query)
## 7 Conclusion
In this work, we propose an innovative personalized news recommendation framework LKPNR, which integrates a Large Language Model (LLM) and a Knowledge Graph (KG). While combining the General News Encoder, the robust contextual comprehension capability of the LLM allows us to derive news representations imbued with semantic information. Simultaneously, we harness the news relationship graph structure inherent in the KG to extract supplementary collaborative news information, enhancing the efficacy of the news recommendation system and alleviating the long tail problem to a certain extent. The experimental results demonstrate the outstanding performance of our proposed framework, leading to significant enhancements over the traditional baseline.
|
2308.04892 | Transmission and Color-guided Network for Underwater Image Enhancement | In recent years, with the continuous development of the marine industry,
underwater image enhancement has attracted plenty of attention. Unfortunately,
the propagation of light in water will be absorbed by water bodies and
scattered by suspended particles, resulting in color deviation and low
contrast. To solve these two problems, we propose an Adaptive Transmission and
Dynamic Color guided network (named ATDCnet) for underwater image enhancement.
In particular, to exploit the knowledge of physics, we design an Adaptive
Transmission-directed Module (ATM) to better guide the network. To deal with
the color deviation problem, we design a Dynamic Color-guided Module (DCM) to
post-process the enhanced image color. Further, we design an
Encoder-Decoder-based Compensation (EDC) structure with attention and a
multi-stage feature fusion mechanism to perform color restoration and contrast
enhancement simultaneously. Extensive experiments demonstrate the
state-of-the-art performance of the ATDCnet on multiple benchmark datasets. | Pan Mu, Jing Fang, Haotian Qian, Cong Bai | 2023-08-09T11:43:54Z | http://arxiv.org/abs/2308.04892v1 | # Transmission and Color-guided Network for Underwater Image Enhancement
###### Abstract
In recent years, with the continuous development of the marine industry, underwater image enhancement has attracted plenty of attention. Unfortunately, the propagation of light in water will be absorbed by water bodies and scattered by suspended particles, resulting in color deviation and low contrast. To solve these two problems, we propose an Adaptive Transmission and Dynamic Color guided network (named ATDCnet) for underwater image enhancement. In particular, to exploit the knowledge of physics, we design an Adaptive Transmission-directed Module (ATM) to better guide the network. To deal with the color deviation problem, we design a Dynamic Color-guided Module (DCM) to post-process the enhanced image color. Further, we design an Encoder-Decoder-based Compensation (EDC) structure with attention and a multi-stage feature fusion mechanism to perform color restoration and contrast enhancement simultaneously. Extensive experiments demonstrate the state-of-the-art performance of the ATDCnet on multiple benchmark datasets.
Underwater Image Enhancement, deep learning, color restoration, and contrast enhancement
## I Introduction
Underwater images play an important role in the marine industry, such as in underwater archaeology [1] and underwater target detection [2]. However, due to the complexity of the underwater environment and the optical characteristics of the water body (e.g., wavelength- and distance-dependent attenuation and scattering), underwater images inevitably suffer from degradation (e.g., color deviation and low contrast [3]). Therefore, how to restore a clear underwater image is particularly important for the development of the marine industry.
Traditional underwater image enhancement methods can be divided into physical model-free [4, 5, 6, 7, 8] and physical model-based [9, 10, 11, 12, 13, 14, 15] approaches. The physical model-free methods mainly adjust pixel values (e.g., histogram equalization-based methods [8]) to improve the visual quality of the image. However, they ignore the underwater imaging mechanism, resulting in over-enhancement and over-saturation. The physical model-based methods mainly rely on various kinds of prior knowledge (e.g., the underwater dark channel prior [13]) to estimate underwater imaging parameters (i.e., medium transmission and atmospheric light [16]), and then invert the physical model to obtain enhanced images. They also have limitations: 1) The estimated parameters are based on various prior conditions, but these priors are not always accurate in different underwater environments (e.g., the fuzzy prior [17] does not hold for clear underwater images). 2) It is a great challenge to estimate underwater imaging parameters accurately with physical methods.
In recent years, researchers have tried to use deep learning-based methods [18, 19, 20, 21, 22, 23] to enhance underwater images. Perez et al. [18] assemble a paired real-world underwater dataset for the first time, and use a simple CNN to learn the mapping relationship between degraded images and reference images. Han et al. [21] propose a deeply supervised residual dense network. Wang et al. [22] propose UIEC\({}^{2}\)-Net, which combines the HSV and RGB color spaces, providing new ideas for future work. Although these methods are novel and exciting, their effects are not particularly attractive on the whole. There are two main reasons: 1) Most of them treat contrast enhancement and color restoration as the same problem, without handling color separately. 2) Most of them neglect the underwater imaging mechanism and rely excessively on the feature learning ability of neural networks.
In order to remedy the above shortcomings, we propose an Adaptive Transmission and Dynamic Color guided network (named ATDCnet) for underwater image enhancement. By observing a large number of underwater datasets, we find that most underwater images are dominated by a single color. To deal with the color deviation problem, we design a Dynamic Color-guided Module (DCM) to post-process the enhanced
image color. It can post-process the image color according to different water areas to restore the image color. Secondly, to exploit the knowledge of physics, we design an Adaptive Transmission-directed Module (ATM) to better guide the network. Further, we design an Encoder-Decoder-based Compensation (EDC) structure with attention and a multi-stage feature fusion mechanism to perform color restoration and contrast enhancement simultaneously. The main contributions can be summarized as follows:
* We propose an Adaptive Transmission and Dynamic Color guided network (i.e., ATDCnet) applied for underwater image enhancement. This method focuses on color correction and contrast enhancement of images in different waters.
* To deal with the color deviation problem, we design a Dynamic Color-guided Module (DCM) to post-process the enhanced image color.
* To exploit the knowledge of physics, we design an Adaptive Transmission-directed Module (ATM) that guides the network toward better decoding.
* Extensive experiments on many benchmark datasets demonstrate that our ATDCnet has achieved state-of-the-art in terms of quantitative and visual performance.
## II Proposed Method
The underwater image degradation process can be represented by the modified Koschmieder light scattering model:
\[I_{c}(x)=J_{c}(x)T_{c}(x)+A_{c}(1-T_{c}(x)), \tag{1}\]
where \(I_{c}(x)\) represents the observed image, \(J_{c}(x)\) denotes the scene radiation, \(x\) is the image pixel, \(A_{c}\) defines the global background light, \(c=\{r,g,b\}\) means the color channels. \(T_{c}(x)=e^{-\beta_{c}d(x)}\) represents the transmission value, where \(\beta_{c}\) is the channel-wise attenuation coefficient depending on water quality, \(d(x)\) is the scene depth at pixel \(x\).
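For concreteness, Eq. (1) can be simulated as follows; the attenuation coefficients and background light values are illustrative assumptions only (red light attenuates fastest in water).

```python
import numpy as np

def degrade_underwater(J, d, beta=(0.40, 0.10, 0.05), A=(0.6, 0.8, 0.9)):
    """Synthesize a degraded underwater image following Eq. (1).

    J    : clean image, float array (H, W, 3) in [0, 1], channels (r, g, b)
    d    : scene depth map (H, W) in meters
    beta : per-channel attenuation coefficients (illustrative values)
    A    : global background light per channel (illustrative values)
    """
    T = np.exp(-np.asarray(beta) * d[..., None])  # T_c(x) = exp(-beta_c d(x))
    return J * T + np.asarray(A) * (1.0 - T)      # I_c(x) of Eq. (1)
```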
In Fig. 1, we show the overall architecture of the proposed network. The network is mainly composed of an enhancement Encoder-Decoder-based Compensation (EDC) structure, Dynamic Color-guided Module (DCM), and Adaptive Transmission-directed Module (ATM). In the following content, we will briefly introduce the essential parts of our proposed network, mainly including the above three modules and related information fusion mechanisms.
### _Adaptive Transmission-directed Module_
Thanks to the powerful feature extraction and representation capabilities of neural networks, we design an Adaptive Transmission-directed Module (ATM) to better guide the network. We obtain an initial reverse medium transmission (RMT) map of the raw underwater image via a robust general dark channel prior (DCP) [12]. Since the branch is guided by this transmission prior, there is no need to design a separate loss function for the ATM branch. The overall structure is shown in the ATM branch of Fig. 1. It is composed of three cascaded residual blocks, whose channel numbers are 64, 64, and 1 respectively. Through network optimization, more accurate transmission information can be obtained.
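A minimal sketch of the ATM follows. The paper specifies only the cascade of three residual blocks with 64, 64, and 1 channels, so the internals of the residual block and the final sigmoid are our assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResBlock(nn.Module):
    """Conv-BatchNorm-LeakyReLU residual block (our minimal variant)."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1),
            nn.BatchNorm2d(out_ch), nn.LeakyReLU(0.2),
            nn.Conv2d(out_ch, out_ch, 3, padding=1),
            nn.BatchNorm2d(out_ch))
        self.skip = nn.Conv2d(in_ch, out_ch, 1) if in_ch != out_ch else nn.Identity()

    def forward(self, x):
        return F.leaky_relu(self.body(x) + self.skip(x), 0.2)

class ATM(nn.Module):
    """Three cascaded residual blocks (64, 64, 1 channels) that refine
    an initial DCP-based reverse medium transmission map."""
    def __init__(self):
        super().__init__()
        self.blocks = nn.Sequential(ResBlock(1, 64), ResBlock(64, 64), ResBlock(64, 1))

    def forward(self, rmt_init):
        # rmt_init: (B, 1, H, W) initial RMT map from the dark channel prior
        return torch.sigmoid(self.blocks(rmt_init))
```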
### _Dynamic Color-guided Module_
We design a Dynamic Color-guided module (DCM) to post-process the enhanced image color. By observing a large number of underwater datasets, we find that most underwater images have a single color. In other words, a single color can reflect the general color information of the image. \((\mu_{1},\mu_{2},\mu_{3})\) respectively represent the mean value of the red, green, and blue channels of the underwater image that are obtained by the following formulas:
\[\mu_{1}=\texttt{mean}(I_{r}(x)),\ \mu_{2}=\texttt{mean}(I_{g}(x)),\ \mu_{3}= \texttt{mean}(I_{b}(x)), \tag{2}\]
where "mean" denotes the mean operator. Indeed, the global average of underwater image channels represents the overall color information. Therefore \((\mu_{1},\mu_{2},\mu_{3})\) characterize the general color information of the observed underwater image. Then
Fig. 1: The overall framework of ATDCnet, which is composed of three branches. The Residual Block (i.e., RB) is mainly composed of convolution, BatchNorm, and LeakyReLU layers. The reverse medium transmission map (denoted as RMT) represents the transmission information of the underwater image. \((\mu_{1},\mu_{2},\mu_{3})\) characterize the general color information of the underwater image.
\((\mu_{1},\mu_{2},\mu_{3})\) enter the DCM. After network optimization, three RGB channel color attenuation coefficients \((\hat{\mu}_{1},\hat{\mu}_{2},\hat{\mu}_{3})\) are obtained. Finally, the color is corrected by multiplying the color attenuation coefficients with the feature map. The overall structure of the DCM is shown in Fig. 2; it is composed of three fully connected layers. We concatenate the original input, the output of the first layer, and the output of the second layer along the channel dimension. This effectively increases the information available to the network, so that more accurate color attenuation coefficients can be estimated.
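A minimal sketch of the DCM follows. The hidden layer width, the exact wiring of the concatenations, and the sigmoid output are our reading of Fig. 2, since the paper specifies only the three fully connected layers and the dense concatenation.

```python
import torch
import torch.nn as nn

class DCM(nn.Module):
    """Sketch of the Dynamic Color-guided Module: three fully connected
    layers with dense concatenation of earlier outputs."""
    def __init__(self, hidden=16):
        super().__init__()
        self.fc1 = nn.Linear(3, hidden)
        self.fc2 = nn.Linear(3 + hidden, hidden)      # sees input + fc1 output
        self.fc3 = nn.Linear(3 + 2 * hidden, 3)       # sees all previous outputs

    def forward(self, mu):
        # mu: (B, 3) channel means of the raw underwater image (Eq. 2)
        h1 = torch.relu(self.fc1(mu))
        h2 = torch.relu(self.fc2(torch.cat([mu, h1], dim=1)))
        return torch.sigmoid(self.fc3(torch.cat([mu, h1, h2], dim=1)))

# usage: correct a feature map F of shape (B, 3, H, W), assuming 3 channels
# F_corrected = F * DCM()(mu).view(-1, 3, 1, 1)
```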
### _Encoder-Decoder-based Compensation Structure_
In order to preserve data fidelity and address the problem of gradient vanishing, we take the residual block, whose structure is shown in Fig. 1, as the basic component of the structure. In order to avoid unnecessary information loss, the convolution kernel and stride of all convolution layers are \(3\times 3\) and 1 respectively; such convolutions do not change the image resolution. In addition, we introduce the channel-attention (CA) mechanism, assigning different weights to different channels to highlight the more critical features. In the decoder stage, to make full use of the features of different stages (\(F_{1},F_{2},F_{3}\) in Fig. 1), we design a multi-stage feature fusion mechanism. With the help of the attention mechanism, multi-stage features provide more usable information to the network, thus improving its performance.
### _Loss Function_
In order to achieve an effective balance between visual quality and quantitative scores, we adopt a linear combination of \(\ell_{2}\) loss, perceptual loss, and SSIM loss. Specifically, \(\ell_{2}\) loss measures the \(\ell_{2}\) distance between the reconstructed image \(J\) and the reference image \(\hat{J}\) :
\[L_{\ell_{2}}=\frac{1}{N}\sum_{i=1}^{N}\lVert\hat{J}_{i}-J_{i}\rVert_{2}, \tag{3}\]
where \(J_{i}\) represents the pixel value at position \(i\) of the reconstructed image, and \(\hat{J}_{i}\) represents the pixel value at position \(i\) of the reference image. Since the \(\ell_{2}\) loss struggles to capture high-level semantics, we introduce a perceptual loss to evaluate the visual quality of images. It measures the \(\ell_{1}\) distance between the reconstructed image \(J\) and the reference image \(\hat{J}\) in the feature space defined by VGG-19:
\[L_{perc}=\frac{1}{C_{k}H_{k}W_{k}}\sum_{i=1}^{N}\lVert\phi_{k}(J_{i})-\phi_{k }(\hat{J}_{i})\rVert_{1}, \tag{4}\]
where \(\phi_{k}\) represents the \(k\)-th convolutional layer, \(N\) is the batch size in the training phase, and \(C_{k},H_{k},W_{k}\) represent the channel number, height, and width of the feature map at layer \(k\) of the VGG-19 network. In our experiments, we calculate the perceptual loss at layer \(relu5\_3\) of the VGG-19 network. In order to maintain the similarity of structure and texture between the reconstructed image \(J\) and the reference image \(\hat{J}\), we introduce the SSIM loss:
\[L_{SSIM}=1-\frac{1}{N}\sum_{i=1}^{N}SSIM(J_{i},\hat{J}_{i}), \tag{5}\]
All losses act on the output stage of the network, and the total loss used in the training phase is expressed as follows:
\[L_{total}=\alpha L_{\ell_{2}}+\beta L_{perc}+\gamma L_{SSIM}, \tag{6}\]
Based on empirical tuning, we set \(\alpha\), \(\beta\), and \(\gamma\) to 1, 0.01, and 100 respectively.
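A sketch of the total loss in Eq. (6) follows, assuming torchvision for the VGG-19 backbone and the kornia library for the SSIM term (both are assumed dependencies, not stated by the paper); the relu5_3 cutoff index is our count of torchvision's VGG-19 layers.

```python
import torch
import torch.nn as nn
from torchvision import models
from kornia.losses import SSIMLoss  # assumed dependency for the SSIM term

class TotalLoss(nn.Module):
    """Sketch of Eq. (6): L = alpha*L_l2 + beta*L_perc + gamma*L_SSIM."""
    def __init__(self, alpha=1.0, beta=0.01, gamma=100.0):
        super().__init__()
        vgg = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1).features
        self.vgg = vgg[:34].eval()  # up to relu5_3, by our layer count
        for p in self.vgg.parameters():
            p.requires_grad_(False)
        self.ssim = SSIMLoss(window_size=11)  # an SSIM distance, cf. Eq. (5)
        self.alpha, self.beta, self.gamma = alpha, beta, gamma

    def forward(self, J, J_ref):
        # Eq. (3): mean over the batch of per-image l2 distances
        l2 = torch.norm((J - J_ref).flatten(1), p=2, dim=1).mean()
        # Eq. (4): sum of l1 distances in relu5_3 feature space / (C*H*W)
        f, f_ref = self.vgg(J), self.vgg(J_ref)
        perc = torch.abs(f - f_ref).flatten(1).sum() / f[0].numel()
        return self.alpha * l2 + self.beta * perc + self.gamma * self.ssim(J, J_ref)
```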
## III Experiments
### _Setups_
**Datasets.** We evaluate the performance of the proposed method on two types of datasets: the first type has reference images (e.g., EUVP [24], UIEB [19], LSUI [25], UFO-120 [26]); The other is without reference image (e.g., C60 [19], RUIE [27], SQUID [28]).
**Metrics.** We use three commonly used image evaluation metrics (i.e., Mean Square Error (MSE), Peak Signal to Noise Ratio (PSNR), and Structural Similarity Index (SSIM)) to compare different methods quantitatively. A higher PSNR or SSIM means that the enhanced image is closer to the reference in
Fig. 3: Ablation study of CA and \(\hat{F}\) of EDC on UIEB dataset. (a) Input. (b) w/o CA + w/o \(\hat{F}\). (c) w/o CA + with \(\hat{F}\). (d) with CA + w/o \(\hat{F}\). (e) ours. (f) Ground Truth. The first row is the gradient map of the enhanced image, and the second row is the enhanced image. The EDC with CA and \(\hat{F}\) produces clearer and more complete image details.
Fig. 2: Details of DCM. The input is the global average of the original underwater image per RGB channel, and the corrected attenuation coefficients are obtained after network optimization.
terms of visual quality and structure. A lower MSE means that the image has a better reconstruction quality. In addition, we also adopt the no-reference Underwater Image Quality Measure (UIQM) [29] and Underwater Color Image Quality Evaluation (UCIQE) [30]. A higher UIQM or UCIQE represents better agreement with human visual perception.
### _Ablation Study_
In this section, we conduct two groups of ablation experiments to study the role of each module. In the first group, only the EDC structure is ablated (ATM and DCM are fixed); in the second group, ATM and DCM are ablated (EDC is fixed).
**Channel-attention (CA) + feature (\(\hat{F}:=(\hat{F}_{1},\hat{F}_{3})\)).** The EDC is responsible for contrast enhancement and color restoration in our method. In order to improve the performance of this module, we introduce the CA and \(\hat{F}:=(\hat{F}_{1},\hat{F}_{3})\). We conduct ablation experiments on the EDC separately, and the experimental results are shown in Table I. Rows A and B of Table I show that, without CA, \(\hat{F}\) damages the model's performance. Rows A and C show that the network performance improves greatly, which indicates that CA effectively helps the network select the more critical features. Rows C and D illustrate that \(\hat{F}\) retains the key features after CA screening. As can be seen, Fig. 3(e), with CA and \(\hat{F}\), has more complete image details.
**EDC + ATM.** We use the transmission information to better guide the EDC in enhancing the image. Comparing Table II (i) and (ii), PSNR and SSIM improve significantly. However, the improvement in visual quality is not obvious when comparing Fig. 4 (EDC) and Fig. 4 (EDC+ATM). In low-level vision tasks, a high PSNR or SSIM does not necessarily imply a good visual result. After the introduction of the ATM, although the reconstruction quality improves (e.g., PSNR and SSIM increase), the reconstructed pixels do not necessarily conform to human visual perception.
**EDC + DCM.** We conduct post-processing on the enhanced image color. Comparing Table II (ii) and (iii), we can see that UIQM is effectively improved. Comparing Fig. 4 (EDC+ATM) and Fig. 4 (EDC+DCM), it can be seen that the DCM has the ability to restore image color.
**EDC + ATM + DCM.** According to Table II (iv), after combining the three modules, all metrics improve significantly. We believe this is the result of the three modules assisting one another. After the introduction of the ATM, the image reconstruction quality improves, but the reconstructed pixels do not necessarily conform to human visual perception. When the DCM is added, it helps the ATM reconstruct pixels better. The reconstructed pixels then have both better visual quality and better reconstruction fidelity, so the final image looks more natural. Comparing Fig. 4 (Ours) and Fig. 4 (GT), the enhanced image is very close to the reference image.
### _Comparison Results_
We make quantitative and qualitative comparisons with eight state-of-the-art methods, including traditional methods (e.g., UDCP [13], Fusion [7]), CNN-based methods (e.g., Water-Net [19], Ucolor [31], USUIR [32], PUIE-Net [33]), and GAN-based methods (e.g., UGAN [34], FUnIE-GAN [24]) on seven datasets.
**Quantitative comparison.** We test the supervised metrics on four underwater datasets (i.e., EUVP, UIEB, LSUI, UFO-120). The average scores of MSE, PSNR, and SSIM are presented in Table III. In each row, we use black bold to indicate the best result and underline to indicate the second best. First, it is obvious that the average scores of our method are much higher than those of the other methods. Second, the robustness of these state-of-the-art methods is uneven (e.g., FUnIE-GAN performs poorly on LSUI but better on the other three datasets). On the contrary, our method generalizes well on all datasets.
In addition, we also conduct experiments on challenging datasets (i.e., C60, RUIE, SQUID). The results are presented in Table IV. Our method obtains two first places (UIQM: C60 and RUIE) and two second places (UCIQE: C60; UIQM: SQUID). In general, our method has better generalization performance than the other methods. In addition, we find that although the traditional methods (e.g., Fusion) have lower supervised metrics (e.g., PSNR), they have higher unsupervised metrics (e.g., UCIQE).
**Qualitative Comparisons.** In Fig. 5, we provide the enhanced results of the designed method and of the methods with relatively high PSNR and SSIM scores in Table III (i.e., Fusion [7], FUnIE-GAN [24], Ucolor [31], PUIE-Net [33], USUIR [32]). It can be seen from Fig. 5 that the traditional method (e.g., Fusion [7]) over-enhances the image, while the contrast of images enhanced by the GAN-based method (e.g., FUnIE-GAN [24]) is obviously insufficient. The CNN-based methods (e.g., USUIR [32], PUIE-Net [33]) have significantly
Fig. 4: Ablation study of the ATM and DCM on UIEB dataset. It can be seen from the pixel distribution map (The abscissa is the pixel value, and the ordinate is the number of pixels) that the complete model (Ours) with ATM and DCM is closer to Ground Truth.
improved the contrast, but the image color has not been effectively restored. On the contrary, our enhanced image is more attractive in terms of color and contrast. Due to space limitations, we provide more qualitative analysis in the supplementary materials.
## IV Conclusion
We propose a new underwater image enhancement model. On the basis of improving image contrast via the Encoder-Decoder-based Compensation (EDC) structure, deep processing of color is realized by the Dynamic Color-guided Module (DCM). In addition, domain knowledge is incorporated into the network through the Adaptive Transmission-directed Module (ATM). Extensive experiments on different benchmark datasets prove the superiority of our solution, and the ablation study verifies the effectiveness of the key components of our method.
|
2303.17222 | LatentForensics: Towards frugal deepfake detection in the StyleGAN
latent space | The classification of forged videos has been a challenge for the past few
years. Deepfake classifiers can now reliably predict whether or not video
frames have been tampered with. However, their performance is tied to both the
dataset used for training and the analyst's computational power. We propose a
deepfake detection method that operates in the latent space of a
state-of-the-art generative adversarial network (GAN) trained on high-quality
face images. The proposed method leverages the structure of the latent space of
StyleGAN to learn a lightweight binary classification model. Experimental
results on standard datasets reveal that the proposed approach outperforms
other state-of-the-art deepfake classification methods, especially in contexts
where the data available to train the models is rare, such as when a new
manipulation method is introduced. To the best of our knowledge, this is the
first study showing the interest of the latent space of StyleGAN for deepfake
classification. Combined with other recent studies on the interpretation and
manipulation of this latent space, we believe that the proposed approach can
further help in developing frugal deepfake classification methods based on
interpretable high-level properties of face images. | Matthieu Delmas, Amine Kacete, Stephane Paquelet, Simon Leglaive, Renaud Seguier | 2023-03-30T08:36:48Z | http://arxiv.org/abs/2303.17222v3 | # Latentforensics: Towards Lighter Deepfake Detection in the Stylegan Latent Space
###### Abstract
The classification of forged videos has been a challenge for the past few years. Deepfake classifiers can now reliably predict whether or not video frames have been tampered with. However, their performance is tied to both the dataset used for training and the analyst's computational power. We propose a deepfake classification method that operates in the latent space of a state-of-the-art generative adversarial network (GAN) trained on high-quality face images. The proposed method leverages the structure of the latent space of StyleGAN to learn a lightweight classification model. Experimental results on a standard dataset reveal that the proposed approach outperforms other state-of-the-art deepfake classification methods. To the best of our knowledge, this is the first study showing the interest of the latent space of StyleGAN for deepfake classification. Combined with other recent studies on the interpretation and manipulation of this latent space, we believe that the proposed approach can help in developing robust deepfake classification methods based on interpretable high-level properties of face images.
Matthieu Delmas\({}^{\star\dagger}\), Amine Kacete\({}^{\star}\), Stephane Paquelet\({}^{\star}\), Simon Leglaive\({}^{\dagger}\), Renaud Seguier\({}^{\dagger}\)

\({}^{\star}\)IRT b\(<\)\(>\)com, \({}^{\dagger}\)CentraleSupelec, IETR (UMR CNRS 6164)

Keywords: Deepfakes, Computer Vision, Latent Spaces
## 1 Introduction
Forgery of videos, the creation of so-called deepfakes, has been on the rise for the past few years. Although it yields multiple benefits in different domains (_e.g._ special effects and data generation), its democratisation comes with risks. It is now easier than ever for malevolent users to create fake media in order to discredit trusted sources, impersonate powerful political figures, or blackmail individuals. Fortunately, recent work in media forensics has yielded numerous ways to discriminate deepfakes from genuine videos. Most of those methods rely on Convolutional Neural Networks (CNNs) [1] which are tailored to detect specific weaknesses or artefacts left by the forgery process [2][3]. However, this means that as the forgery process improves, the cost of training such discriminators will increase. If this trend continues, we may have to rely on ever more specific details, and as such have more and more trouble working towards a general means of deepfake classification.
A forensic method is proposed here to distinguish forged face images from genuine ones, which can be implemented without training CNNs or other computationally heavy models and yet attains better performance.
Recently, artificial data generation through Generative Adversarial Networks (GANs), notably of faces, has seen great progress with models such as StyleGAN [4] or CycleGAN [5], both in the quality of the results and in the understanding of the generation process. We capitalise on those improvements by projecting suspect image data into a space of lower dimension before performing the classification. In particular, after face detection and alignment in suspect frames, we perform a dimensionality reduction of the cropped face image in the latent space of StyleGAN and classify the resulting latent codes using a Random Forest classifier, a logistic regression, or a Multi-Layer Perceptron. The full pipeline is quite simple, but its strength resides in its combined efficiency and ease of setup, especially compared to other state-of-the-art models. Our contributions are as follows:
* A benchmark of dimensionality reduction methods in the context of deepfake classification.
* An easy-to-train and effective deepfake classifier that requires fewer examples than other state-of-the-art models.
* An insight on deepfake classification explainability through the latent space of StyleGAN.
## 2 Related Work
There are two main trains of thought when it comes to discerning a deepfake from a genuine video: artefact detection and subject identification. In the former, models trained on images spot where manipulations have been made, these include MesoNet [2] or XceptionNet [3]. Such models rely on inconsistencies between image patches, _i.e._ the presence of visual artefacts. On the other hand, identification methods train a model to try to recognise if the portrayed person is the expected subject or an impersonator, instead of trying to spot if a certain frame has been tampered with.
### Detection of Image Artefacts
CNNs have been achieving top performance in analysing video frames for deepfake classification. MesoNet [2], for instance, is a lightweight convolutional network which relies on the density of details in image patches to detect whether or not the image has been tampered with. Such networks were state of the art for a while, as highlighted by Rossler _et al._ in their FaceForensics++ (FF++) study [6]. More recently, the Deep Fake Detection Challenge [7] (DFDC) allowed multiple teams to compete in providing the best detection methods. Even though such methods achieve great performance on given datasets and on raw images, success rates tend to drop when they are faced with unseen, compressed or altered data, as highlighted by the FF++ [6] and DFDC [7] studies. As such, basing the detection process on raw image analysis might not keep holding up in the future.
### Identity-based Classification
To face this growing problem, methods have been put together to circumvent the need of having to rely on ever-subtler image artefacts, and to make the decision at a higher, more semantically robust level. Agarwal _et al._[8] and Cozzolino _et al._[9] combine neural networks and statistical methods to extract a vector representing the identity of the speaker, and then compare it to one from genuine footage. However, while such methods tend to overfit less on particular manipulations, they need access to supplementary information compared to traditional CNN-based methods (_e.g._ footage which is known to be genuine, or temporal data).
In the present paper, the proposed method takes inspiration from these approaches, without needing such additional information.
## 3 Method
The proposed approach is motivated by a theoretical discussion of the classification problem through the curse of dimensionality. Multiple methods for face image dimensionality reduction are then introduced.
### Deepfake classification as a decision problem
Classifying a deepfake amounts to deciding whether an observed frame \(x\) belongs to \(X_{g}\) or \(X_{m}\), the sets of genuine and manipulated images respectively. We define \(\pi_{m}\) as a prior on the proportion of deepfakes in the wild (and \(\pi_{g}=1-\pi_{m}\) as the proportion of genuine images), \(P(x)\) the probability distribution of the nature of images, \(D\) the portion of the space where we decide to label \(x\) as a deepfake, and \(\bar{D}\) its complement. We want to make as few errors as possible, which means minimising \(J\) in equation (1).
\[J=\pi_{g}P(X\in D|X_{g})+\pi_{m}P(X\in\bar{D}|X_{m}) \tag{1}\]
(1) can be seen as a mean error rate, and optimising it leads to the criterion (2), as shown in [10]. With \(p_{g}\) and \(p_{m}\) the probability density functions of genuine and manipulated images respectively, we define the decision function \(F\):
\[F(x)\triangleq\frac{p_{m}(x)}{p_{g}(x)}-\frac{\pi_{g}}{\pi_{m}}\ \underset{gen.}{\overset{fake}{\gtrless}}\ 0 \tag{2}\]
We never have access to the exact form of \(p_{g}(x)\) and \(p_{m}(x)\). However, we can approximate \(F\) with data-driven techniques such as Random Forest classifiers [11], so that \(F\) can be extrapolated on new data at testing or inference time. As explained before, modelling it directly, such as with traditional CNNs architectures provide good results, but bears a few problems. To find both a better and easier-to-train model, we take the approach of working on a relevant lower-dimension version of \(x\) instead of on the data directly. This may alleviate the need for lots of training examples due to the curse of dimensionality [12], on top of hopefully decreasing the error rate (1).
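As a toy illustration of the decision rule (2), the densities can be estimated from data on one-dimensional scores; in practice, \(F\) is approximated by the classifiers described later, and the names below are ours.

```python
import numpy as np
from scipy.stats import gaussian_kde

def make_decision_rule(codes_genuine, codes_fake, pi_m=0.5):
    """Toy version of Eq. (2) on 1-D scores: estimate p_g and p_m with
    kernel density estimation and threshold the likelihood ratio."""
    p_g = gaussian_kde(codes_genuine)
    p_m = gaussian_kde(codes_fake)
    threshold = (1.0 - pi_m) / pi_m  # pi_g / pi_m

    def is_fake(x):
        return p_m(x) / p_g(x) >= threshold  # decide "fake" when F(x) >= 0

    return is_fake

# usage with synthetic 1-D projections (illustrative only)
rng = np.random.default_rng(0)
rule = make_decision_rule(rng.normal(0, 1, 500), rng.normal(2, 1, 500))
print(rule(np.array([0.0, 3.0])))  # [False, True] expected
```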
### Dimensionality reduction
Basing the discrimination on a low-dimensional space in which to represent the suspicious frame may ease deepfake classification. Intuitively, after reducing the dimension of the input, details which could mislead the model (_e.g._ background, pose, lighting, compression noise) can be erased. The model could then focus on the most important information (_e.g._ "Are the subject's expressions coherent?"). Furthermore, while CNNs achieve great performance in computer vision tasks, they require a lot of training data. A low-dimensionality model could be lighter, cheaper and easier-to-train if not based on CNN architectures.
A good projection space in this case is an efficient representation of the faces, which would make the distinction between the distribution of real faces and manipulated ones easier. The resulting decision barrier may even be more robust to slight changes in the data distribution (from compression, different context, etc.), but we need the projector to keep sufficient statistics, _i.e._, maintain as much useful information (_e.g._, the subject's identity and expressions) as possible.
Formally, the projector is a function \(P:\mathbb{R}^{l\times h\times w}\rightarrow\mathbb{R}^{d^{\prime}}\), with \(d=l\times h\times w\) the dimension of the original high-dimensional image and \(d^{\prime}\ll d\) the dimension of the lower-dimensional latent representation.
There are many ways to reduce data dimensionality: Principal Component Analysis (PCA) [13] can compress data while keeping as much variance as possible, Autoencoders [14] and Variational Autoencoders (VAEs) [15] are a (regularised) non-linear extension of this technique. Both PCA and VAE methods will be considered in the context of the proposed approach.
On the other hand, the generative network StyleGAN [4] has the advantage of a unique architecture which generates data via an intermediate space. The structure of the generator network is summarised in Figure 1. A normally sampled variable \(z\sim\mathcal{N}(0,1)^{512}\) is first transformed into a latent code \(w\) through a mapping network, \(G_{\text{mapping}}(z)=w\). In turn, this variable is used at different stages of the image generation proper, carried out by another neural network \(G_{\text{synthesis}}\). In the latent space \(W\), defined as \(\{w\mid\exists z,\,G_{\text{mapping}}(z)=w\}\), significant semantic directions have already been found [16, 17]. The space \(W\) fits the criteria of a good support for the projected data: we already know there is a function from it to the face image space, and it has low cardinality.
To invert suspicious frames in the latent space of StyleGAN, we have to go through an optimisation process, as the generator is highly non-linear and complex. Many solutions exist for this task, such as those proposed by [4][18][19] and [20]. Most of these solutions are based on gradient-descent algorithms that minimise a loss such as \(L(w)=L_{p}(G_{\text{synthesis}}(w),x)+\alpha||G_{\text{synthesis}}(w)-x||_{2}^{2}\), with \(L_{p}\) a neural distance such as LPIPS [21]. Once we are satisfied with the projection, we can discriminate out-of-distribution codes as deepfakes. Each dimensionality reduction method \(P\) takes a \(256\times 256\) pixels RGB image as input and produces a low-dimensionality code. In this study, the PCA models have been fitted according to the Incremental PCA algorithm [22]; the code is the result of keeping the \(512\) most important dimensions of the image projected in the PCA space, \(P_{\text{PCA}}:\mathbb{R}^{512\times 512\times 3}\rightarrow\mathbb{R}^{512}\). For the VAE, the Vector-Quantized Variational AutoEncoder 2 (VQ-VAE-2) [23] has been chosen because of its performance. After passing the image through the encoder, the resulting latent variables (indices in the VQ-VAE-2 codebook), which are normally used by the decoder, are the chosen codes. The VQ-VAE projector \(P_{\text{VQ-VAE-2}}:\mathbb{R}^{512\times 512\times 3}\rightarrow\mathbb{N}^{512\times 10}\) is effectively the encoder part of the network. Finally, a StyleGAN inversion which follows [19] is chosen. A first code is obtained from the input image thanks to an encoder \(E_{\text{domain}}\); it is then optimised by gradient descent for a hundred steps. The resulting projector can be described as \(P_{\text{StyleGAN}}:\mathbb{R}^{512\times 512\times 3}\rightarrow\mathbb{R}^{512\times 14}\). This process is represented in Figure 2. Each of these projectors was taken pretrained (\(P_{\text{PCA}}\) was fitted) on the Flickr-Faces-HQ-256 dataset [4]. Ideally, discriminating real and
Figure 1: The StyleGAN architecture summarised, \(k\) is a constant learned during training.
fake images is easier in the latent space associated with the different projectors \(P_{\text{PCA}}\), \(P_{\text{VQ-VAE-2}}\) and \(P_{\text{StyleGAN}}\), as illustrated in Figure 3.
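A minimal sketch of the inversion loop described above follows, assuming the official `lpips` package; `G_synthesis` and `encoder` are placeholder callables standing in for the frozen synthesis network and \(E_{\text{domain}}\).

```python
import torch
import lpips  # assumed dependency: the official LPIPS package

def invert_to_w(G_synthesis, encoder, image, steps=100, lr=0.01, alpha=1.0):
    """Sketch of the latent inversion loop.

    G_synthesis : frozen StyleGAN synthesis network, w -> image
    encoder     : E_domain, gives the initial latent code from the image
    image       : target frame, (1, 3, H, W) in [-1, 1]
    """
    percep = lpips.LPIPS(net='vgg')
    w = encoder(image).detach().clone().requires_grad_(True)
    opt = torch.optim.Adam([w], lr=lr)
    for _ in range(steps):
        recon = G_synthesis(w)
        # L(w) = LPIPS(G(w), x) + alpha * ||G(w) - x||_2^2
        loss = percep(recon, image).mean() + alpha * ((recon - image) ** 2).sum()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return w.detach()  # low-dimensional code used for classification
```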
## 4 Experiments
Binary classification models are trained to determine whether or not a low-dimensionality code comes from a deepfake frame. Given the dimensionality of the inputs, classical machine learning models were the sensible choice for the discrimination models. A Random Forest classifier [11] (RF), simple Multi-Layer Perceptrons (MLP) and a logistic regression have been tested for this task. Other models (Support Vector Machines, Gaussian Mixture Models) were tested, but their performance (either accuracy or training time) was not satisfactory.
Different combinations of projectors and classifiers are compared to state-of-the-art deepfake classifiers (MesoNet[2], XceptionNet[3] and EfficientNet[24]) on the DFDC database [7]. Lastly, as the different channels of the StyleGAN latent code have different impacts on the generation result, a study on their relative importance is made.
### Data preparation
The proposed method allows for a shorter training duration given the same data (the more complex models, such as StyleGAN and the encoder used by Zhu _et al_. [19], are pretrained and frozen). However, the inference process as a whole is slowed down by the inversion of each image in the latent space of StyleGAN (about 25 seconds on an NVIDIA QUADRO RTX 3000, and about 2-3 seconds on an NVIDIA A100), or, to a smaller extent, by the inference of the VQ-VAE-2 encoder or the projection by the PCA model. As a consequence, one in every 10 frames was uniformly sampled throughout the videos of the DFDC dataset. This frequency was chosen to ensure a relatively diverse set of faces, not too correlated to one another. Each frame was processed according to the FFHQ alignment protocol before having its dimensionality reduced. Training and testing sets were separated at the video level; the testing set was composed of around 15k frames, extracted from 301 videos. Finally, only the preview set of DFDC [25] was used for training.
### Results
#### 4.2.1 Reconstruction visualisation
Figure 4 shows two examples of face reconstruction from the low-dimensionality codes, a genuine face (top) and a deepfake (bottom). Underneath is shown the average of the VGG-based LPIPS distance [21] from the original face image, taken over 250 samples. VQ-VAE-2 offers the lowest perceptual reconstruction error, while the PCA performs the worst. StyleGAN sits in between, albeit with a higher variance, probably because some images need more optimisation steps than others to converge properly. It is to be noted that, if needed, the reconstruction fidelity of the StyleGAN method can be adjusted by cutting short or prolonging the iterative inversion process, trading off quality against computing time.
#### 4.2.2 Deepfake classification
To test whether one dimensionality reduction method is better suited for discriminating deepfakes, three instances of a Random Forest classifier [11] are trained on the codes outputted by each projector. These classifiers were composed of 1500 estimators and were trained to optimise the Gini criterion. They were implemented using the scikit-learn package [26], with the other parameters left to default. The results are shown in Table 1, with the methods labelled RF. Even if the VQ-VAE-2 (VAE) allows for the best reconstruction of its inputs (as previously shown in Figure 4), its latent variables are in the form of discrete values
Figure 4: Reconstruction of two faces after the dimensionality reductions, on 256px images. The top row is a genuine image, and the bottom one a deepfake. The LPIPS shown is calculated as a mean over 250 different faces, and displayed with a 95 % confidence interval.
Figure 3: Data separation is easier after a pass through the projector \(P\).
Figure 2: The StyleGAN pseudo-inversion process. The initialisation is made with the encoder \(E_{\text{domain}}\). The resulting latent code is then optimised with \(L\) calculated between the reconstruction and the original image.
referencing a codebook used by the model. While this format allows for a compact and exhaustive representation of the data, it does not contain easily accessible, discriminating information about deepfakes. For classifying deepfakes, the StyleGAN (SG) projection method obtains the best performance. Still, this analysis shows that a few thousand dimensions, instead of the hundreds of thousands usually used, can contain enough information to discriminate genuine images from deepfakes to a satisfactory degree. This could further motivate the progression towards a more frugal approach to deepfake classification.
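A minimal sketch of the RF stage with the stated configuration (1500 estimators, Gini criterion, scikit-learn) follows; the stand-in data is ours.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

# Stand-in data: replace with the flattened latent codes (512*14 values
# per frame for the StyleGAN projector) and deepfake labels (1 = fake).
rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(1000, 512 * 14)), rng.integers(0, 2, 1000)
X_test, y_test = rng.normal(size=(200, 512 * 14)), rng.integers(0, 2, 200)

# Configuration matching the text: 1500 trees, Gini criterion.
clf = RandomForestClassifier(n_estimators=1500, criterion="gini", n_jobs=-1)
clf.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```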
To further improve the performance of the StyleGAN dimensionality reduction and Random Forest classifier pipeline, three other classifiers were tested: a simple logistic regression model (LR), a simple neural network with a single hidden layer of dimension \(512\) (MLP-2), and a five-layer neural network (MLP-5) with four hidden layers of size \(2048\), \(512\), \(512\) and \(512\). The two bigger models use ReLU as the intermediate activation function, and all three produce a final output of size \(1\) with a Sigmoid activation function. The logistic regression model and the neural networks are trained by optimising the binary cross-entropy. The models were implemented in PyTorch [27] and optimised with Stochastic Gradient Descent and a learning rate of \(5\times 10^{-4}\). The proposed methods are compared with the state-of-the-art deepfake detection methods MesoNet [2], XCeptionNet [3] and EfficientNet [24], which were taken pretrained on the complete DFDC database [7] and tested on the same examples as the proposed method.
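A minimal sketch of the MLP-5 classifier and its training setup follows; everything beyond the stated layer sizes, activations, loss, and optimiser is our assumption.

```python
import torch
import torch.nn as nn

class MLP5(nn.Module):
    """Five-layer classifier as described above: hidden sizes
    2048, 512, 512, 512, ReLU activations, sigmoid output."""
    def __init__(self, in_dim=512 * 14):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 2048), nn.ReLU(),
            nn.Linear(2048, 512), nn.ReLU(),
            nn.Linear(512, 512), nn.ReLU(),
            nn.Linear(512, 512), nn.ReLU(),
            nn.Linear(512, 1), nn.Sigmoid())

    def forward(self, w):
        # w: flattened StyleGAN latent code, (B, 512*14)
        return self.net(w).squeeze(-1)

model = MLP5()
criterion = nn.BCELoss()  # binary cross-entropy on sigmoid outputs
optim = torch.optim.SGD(model.parameters(), lr=5e-4)
```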
As can be seen from the binary classification accuracies in Table 1, the proposed methods achieve comparable or better performance than other state-of-the-art models. In particular, the proposed method with the Random Forest classifier performs better than MesoNet (\(+4\%\) accuracy), another model designed to be light. Accuracy can be further improved by using a simple logistic regression classifier instead of the Random Forest one (another \(4\%\) improvement in accuracy). The best performance is obtained by combining the StyleGAN dimensionality reduction and the MLP classifiers (SG MLP-2 and SG MLP-5), outperforming the other models by a noticeable margin, including the state-of-the-art XCeptionNet and EfficientNet methods.
#### 4.2.3 Training dataset size
The first column of Table 1 shows the number of videos the models were trained on. Around 25 times fewer deepfake videos were used to train the proposed models compared to the other off-the-shelf models. An advantage of using dimensionality reduction is the reduced training time and the smaller data requirement compared to other state-of-the-art methods, making it preferable when efficiency is required or when little labelled data is available (_e.g._ when a new manipulation method appears). The proposed method is, as such, best suited for small-scale scenarios, such as dealing with personal threats or blackmail at an individual level.
#### 4.2.4 Importance across the different channels of the latent space
Originally, StyleGAN pseudo-inversion was accomplished by optimising a single vector \(w\in\mathbb{R}^{512}\), which was then fed at different stages of the generation process to compute the desired image [4]. It was also found that starting the generation process with a first latent code \(w_{1}\) and finishing it with another \(w_{2}\) results in a style mixing of facial features, the first generation steps corresponding to coarser features (_e.g._ head shape, age) and the latter to finer ones (_e.g._ hair colour). A more visual explanation of feature mixing is available in the original paper [4]. Meanwhile, when designing higher-fidelity optimisation processes, researchers found that duplicating the latent code in multiple channels and optimising across them differently yielded better results [18].
In the proposed method, the inversion process of StyleGAN by Zhu _et al_. [19] produces a latent code \(w\) optimised along 14 channels. We tested whether individual channels have a different impact on classification accuracy. Figure 5 shows the performance of different instances of the same model trained separately on each channel. Each model (MLP-5 and RF) was trained under the same conditions (hyper-parameters and data) as in the previous experiments.
The mid-to-late layers (8-11 with the chosen methods) are more useful for deepfake detection than the early ones. Indeed, theoretically perfect deepfake generation can be seen as keeping both the original face's coarsest features (_e.g._ head shape and age) and finest features (_e.g._ hairstyle and skin texture), while changing the ones responsible for expression. Still, Random Forest classifiers and MultiLayer Perceptrons do not reach their best accuracy on the same channels (9-10-11 for the RF, and 6-8-9 for the MLP). This finding may help improve the performance and explainability of deepfake detectors in the future.
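The per-channel probe can be sketched as below; the random arrays are placeholders for real codes, and the reduced tree count (versus the 1500 estimators used in the paper) is only to keep the illustration fast.

```python
# Sketch: one classifier per latent channel, assuming codes of shape
# (N, 14, 512). n_estimators is reduced here purely for speed.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
codes = rng.normal(size=(500, 14, 512))   # placeholder for real codes
labels = rng.integers(0, 2, size=500)

for channel in range(codes.shape[1]):
    X_tr, X_te, y_tr, y_te = train_test_split(
        codes[:, channel, :], labels, test_size=0.2, random_state=0)
    clf = RandomForestClassifier(n_estimators=200, n_jobs=-1).fit(X_tr, y_tr)
    print(f'channel {channel}: accuracy = {clf.score(X_te, y_te):.3f}')
```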
## 5 Conclusion
We proposed a lightweight deepfake classifier which is able to outperform complex state-of-the-art models with fewer training examples. On top of making deepfake detection more accessible, the proposed approach shows how focusing on key information in the data can help progress towards more frugal deepfake detectors. In the future, it could be interesting to look more closely at other low-dimensionality spaces, in particular other inversion variants for the StyleGAN framework, which already look promising.
\begin{table}
\begin{tabular}{l|l|c}
Training Set (\# Videos) & Method & Accuracy \\
\hline
\multirow{3}{*}{DFDC (124k)} & MesoNet & 0.84 \\
 & XCeptionNet & 0.91 \\
 & EfficientNet & 0.91 \\
\hline
\multirow{6}{*}{DFDC Preview (5k)} & Ours (PCA RF) & 0.78 \\
 & Ours (VAE RF) & 0.69 \\
 & Ours (SG RF) & 0.88 \\
 & Ours (SG LR) & 0.92 \\
 & Ours (SG MLP-2) & 0.93 \\
 & Ours (SG MLP-5) & **0.94** \\
\end{tabular}
\end{table}
Table 1: Deepfake classification accuracy; the best performance is in bold.
Figure 5: Deepfake detection accuracy with the Random Forest classifier and MultiLayer Perceptron (5 layers) across different channels of StyleGAN latent code \(w\). |
2309.02112 | Active-travel modelling: a methodological approach to networks for walking and cycling commuting analysis | Walking and cycling, commonly referred to as active travel, have become integral components of modern transport planning. Recently, there has been growing recognition of the substantial role that active travel can play in making cities more liveable, sustainable and healthy, as opposed to traditional vehicle-centred approaches. This shift in perspective has spurred interest in developing new data sets of varying resolution levels to represent, for instance, walking and cycling street networks. This has also led to the development of tailored computational tools and quantitative methods to model and analyse active travel flows.
In response to this surge in active travel-related data and methods, our study develops a methodological framework primarily focused on walking and cycling as modes of commuting. We explore commonly used data sources and tools for constructing and analysing walking and cycling networks, with a particular emphasis on distance as a key factor that influences, describes, and predicts commuting behaviour. Our ultimate aim is to investigate the role of different network distances in predicting active commuting flows.
To achieve this, we analyse the flows in the constructed networks by looking at the detour index of shortest paths. We then use the Greater London Area as a case study, and construct a spatial interaction model to investigate the observed commuting patterns through the different networks. Our results highlight the differences between chosen data sets, the uneven spatial distribution of their performance throughout the city and its consequent effect on the spatial interaction model and prediction of walking and cycling commuting flows. | Ivann Schlosser, Valentina Marín Maureira, Richard Milton, Elsa Arcaute, Michael Batty | 2023-09-05T10:45:37Z | http://arxiv.org/abs/2309.02112v1 | Active-travel modelling: a methodological approach to networks for walking and cycling commuting analysis.
###### Abstract
Walking and cycling, commonly referred to as active travel, have become integral components of modern transport planning. Recently, there has been growing recognition of the substantial role that active travel can play in making cities more liveable, sustainable and healthy, as opposed to traditional vehicle-centred approaches. This shift in perspective has spurred interest in developing new data sets of varying resolution levels to represent, for instance, walking and cycling street networks. This has also led to the development of tailored computational tools and quantitative methods to model and analyse active travel flows.
In response to this surge in active travel-related data and methods, our study develops a methodological framework primarily focused on walking and cycling as modes of commuting. We explore commonly used data sources and tools for constructing and analysing walking and cycling networks, with a particular emphasis on distance as a key factor that influences, describes, and predicts commuting behaviour. Our ultimate aim is to investigate the role of different network distances in predicting active commuting flows.
To achieve this, we analyse the flows in the constructed networks by looking at the detour index of shortest paths. We then use the Greater London Area as a case study, and construct a spatial interaction model to investigate the observed commuting patterns through the different networks. Our results highlight the differences between chosen data sets, the uneven spatial distribution of their performance throughout the city and its consequent effect on the spatial interaction model and prediction of walking and cycling commuting flows.
## 1 Introduction
Urban mobility, a key component of urban dynamics, intersects with various areas of urban life including economy, environment, and social structures. This critical aspect involves the movement of people and goods within urban areas, and its efficiency and sustainability directly influence the overall functioning of cities. Given its significance, it is crucial to acquire knowledge and understanding that help plan and develop improved mobility networks. This represents an ongoing challenge, as both transport methods and human mobility patterns continue to evolve.
An essential aspect of urban mobility is commuting, which is defined as the regular travel between residence and work places. Observing trends in commuting patterns can provide valuable insights into broader shifts in urban mobility. Nowadays, there is an increased focus on small-scale movement, primarily through cycling and walking. These modes of transportation are often grouped under the term "active-travel". Contemporary planning practices accentuate their importance for reasons of health, sustainability, and safety.
While active travel currently represents a moderate share of commuting methods in England, with about 10% (ONS 2017), this figure is projected to increase. The rise in active travel not only signifies potential reductions in air and noise pollution due to less reliance on cars, but also implies urban redesign opportunities. The decrease in space required for parking and driving could allow for expanded green areas, and increased housing, leisure and commercial spaces (Hidalgo 2020). Given the current context of rising living and transport costs and global supply chain disruptions, active travel represents a practical and cost-effective transport alternative. Therefore, gaining a better understanding of the active travel flow patterns could help us adapt to these trends and meet the evolving needs of the population.
In this paper, we conduct a methodological analysis to study active travel. The study compares different walking and cycling networks to investigate how different models predict real-world flows. The findings of this research have tangible implications for policymakers and urban developers. By providing a more accurate understanding of active travel flows, the results can assist in the formulation of strategies that facilitate and promote walking and cycling.
The paper is structured into three main sections: Networks, Routing, and Modelling. The first section focuses on constructing and comparing different walking and cycling networks from open data sources. In the second section, different cost matrices are built based on shortest-path distances for each specific network using different origin-destination centroids. Upon examining the resulting detour index of the various networks, we observe substantial disparities between the standard detour values and those that incorporate the real flows between locations. Even in the absence of ground-truth knowledge of the paths taken, relying on such metrics can help construct efficient links between population and job distributions. The final section integrates the previous insights into a spatial interaction model, assessing the predictive capabilities of the different networks and routing parameters in relation to real walking and cycling flows. Finally, we combine these flows to model active travel commuting as a single phenomenon. We apply this method in London at the Middle Layer Super Output Area (MSOA) scale, taking 2011 census data sets as the reference for observed commuting flows.
The results show that choosing the appropriate network and distance measure can impact the accuracy of flow estimations, making it key to carefully consider network construction and analysis to ensure more accurate outputs. These findings not only bring attention to the differences between networks, but also stress how results diverge throughout the city, contingent upon the distribution and connectivity of the networks as well as the uneven coverage of the distinct data sources.
### Walking and cycling
Walking and cycling are active modes of transport because they require physical activity, and they are recognised for their potential health benefits and low carbon impact. Research has shown that they are associated with improved cardiovascular health, reduced risk of chronic disease, and better mental health outcomes (Saunders et al., 2013; Kelly et al., 2014; Oja et al., 2011). They are considered sustainable modes of transportation due to their low carbon footprint and potential to promote transportation equity. In addition, they are cost-effective alternatives to traditional modes of transport. By contrast, a growing body of literature points out that cars inside cities cause a series of problems that impact our health and well-being, such as noise and air pollution (Fleury et al., 2021), reduced free space availability (Hidalgo, 2020), congestion and heat-island effects.
All of these problems can be tackled with efficient public transport, active travel and supporting policies. Thus, it is necessary to rethink and adapt infrastructure to better integrate these modes into the urban layout. Many of the world's biggest cities are turning towards cycling as a sustainable and efficient mode of transport, incorporating new policies that facilitate the development of cycling infrastructure and discourage car driving. New York 1, Chicago2, San Francisco3, Boston 4, Singapore 5 and others mention cycling and walking as central elements of their current and future transport policies in their latest master plans.
Footnote 1: [https://onenyc.cityofnewyork.us/wp-content/uploads/2019/05/OneNYC-2050-Efficient-Mobility.pdf](https://onenyc.cityofnewyork.us/wp-content/uploads/2019/05/OneNYC-2050-Efficient-Mobility.pdf)
Footnote 2: [https://www.chicago.gov/content/dam/city/depts/cdot/CDOT%20Projects/Strategic_Plan/Strategic_Plan…](https://www.chicago.gov/content/dam/city/depts/cdot/CDOT%20Projects/Strategic_Plan/)
### Commuting and recreational trips
As with other modes of transport, it is common to distinguish trips by purpose: commuting on one hand, and leisure on the other. The two types of travel have different influencing factors and require different approaches for analysis, as well as specific planning and policy interventions (Krizek, Handy, and Forsyth, 2009).
Factors such as a high-quality built environment, urban amenities, infrastructure, land-use mix, and street scenery can influence the decision to walk or cycle for both recreational and commuting trips (Schaor and Cao, 2014; Cervero, Denman, and Jin, 2019), as can factors like safe and diverse route options and the availability of destinations within a reasonable walking or cycling distance. Specifically for cycling, there is consistent evidence that the quality, extent, and connectivity of a road or cycle network are positively correlated with cycling uptake (Cervero, Denman, and Jin, 2019; Dill and Carr, 2003). In the context of walking and cycling for commuting, time and distance are crucial factors. Recreational trips are typically less frequent, more flexible, and do not prioritise travel time or distance efficiency. Conversely, commuting trips are made regularly, with time constraints being a significant factor in route choice. As a result, commuting trips tend to prioritise distance and be less responsive to other variables (Broach, Gliebe, and Dill, 2011).
With distance identified as a major barrier to active travel, it is essential for predicting and explaining travel behaviour. As such, analysing shortest paths along cycling and walking routes provides a useful benchmark for identifying optimal commuting routes. While shortest paths may not always align with observed route choices, empirical evidence suggests that people generally do not deviate significantly from the shortest-distance route, with most trips being less than 10% longer than the optimal path (Winters et al., 2010; Mahfouz, Arcaute, and Lovelace, 2021).
## 2 Networks
This section considers the aspects directly related to downloading and manipulating open data to construct road networks, modelled as graphs with positive edge lengths. Using different types of networks later allows us to build varying distance matrices to examine the potential active travel between locations.
When it comes to calculating the shortest paths between origins and destinations, different distance metrics can be used. Beyond the Euclidean distance between locations, it is important to account for the actual distance travelled on the road network between two points. Doing so allows us to build a more accurate understanding of the spatial distribution of trip lengths originating from or reaching a place, thus uncovering connectivity aspects, local network properties and their impact on the magnitude of active travel flows.
### Road networks
Networks representing physical roads have already received a fair amount of attention in the literature, and their properties have been observed across geographies and even time (Strano, Nicosia, et al., 2012; Masucci, Stanilov, and Batty, 2014; Barthelemy, 2011), to study economic indicators (Porta, Strano, et al., 2009; Porta, Latora, et al., 2012; Law et al., 2013; Piovani, Molinero, and A. Wilson, 2017), to perform comparative analysis between cities (Hillier, T. Yang, and Turner, 2012; Strano, Viana, et al., 2013), to examine social and demographic indicators (Vaughan, 2007; Molinero et al., 2015), and to assess accessibility to land uses (Freira, Tavares, and Pedro Juliao, 2015; Novak and Sullivan, 2014), to name a few. In the context of network science, street networks are commonly represented using a directed or undirected primal, planar graph. In primal graphs, a road network is modelled using links to represent the street segments and nodes to represent street junctions (Porta, Crucitti, and Latora, 2006; Marshall et al., 2018), while planar means that the network is constrained to a two-dimensional space.
The level of detail required to construct street networks for graph analysis can vary depending on the specific analysis objectives, which may have different effects on the outcomes (Marshall et al., 2018; Barthelemy, 2011; Jung, Fengzhong Wang, and Stanley, 2008). For analyses at a very local scale, a more intricate street network is typically used, which is constructed by taking into account pedestrian-specific infrastructure like crosswalks and sidewalks (Palominos and Smith, 2022; Rhoads et al., 2020; Thompson Sargoni and Manley, 2023). In studies focused on spatial configuration and human behaviour, such as those using the Space Syntax theory, street networks are constructed using centre lines that represent angular changes (Penn, 2003). In this paper, we aim to use open data sources and construct an undirected primal and planar graph with an optimal
level of resolution for active travel routing and modelling. To do so, we use the two most detailed representations of the road network currently available for the UK.
First, we use data from Ordnance Survey 6, England's official provider of digitised infrastructure data, covering England, Wales and Scotland. Second, we use OpenStreetMap (OSM), an open-source platform aiming to digitise as much of the physical world as possible. It does so through user-contributed map features with a detailed tagging system. The high quality of its road network data makes OSM widely used in modern research on the built environment. Among the advantages of using OSM are transferability between areas of observation and scalability, making any analysis extendable to the whole planet. It allows large-scale road network analysis (Gallotti, Bertagnolli, and De Domenico 2021; Louf and Barthelemy 2014; Barthelemy 2017; Lovelace et al. 2017), going beyond the borders of a single country (Boeing 2021; Liu et al. 2021), while maintaining confidence in the quality of the data.
Footnote 6: [https://www.ordnancesurvey.co.uk/business-government/products/open-map-roads](https://www.ordnancesurvey.co.uk/business-government/products/open-map-roads)
The two sources used present some differences that need to be put in context, as evidenced by the comparison in figure 1. Although both data sets could differ in terms of geographical extent, geometrical shape and attribute accuracy of segments, this comparative study focuses mainly on the completeness of the data sets. We examine to what extent their coverage is suited for the analysis of pedestrian and cycling networks. OS Open Roads considers exclusively roads that are used by cars, while OSM represents a wider range. In figure 1, the two data sets are overlaid on two areas, Fitzrovia (b) and Stratford (c), with the OSM network in blue and the OS network in red. In b), most of the roads can be taken by car; the only differences can be seen in pedestrian paths crossing squares and parks. When looking at the area of Stratford, however, the differences become more striking. In this case, the OSM network presents a much greater set of roads, primarily dedicated to pedestrians and cyclists, including those around and inside
Figure 1: **OS and OSM road network coverage in London, UK.** a) Percentage of OS over OSM road coverage, based on road linear metres per MSOA level. b) and c) Overlapping of OS and OSM networks showing differences in coverage depending on the urban structure of London areas, Marylebone and Stratford respectively. Table in d) shows the percentage of linear metres from roads on OSM that are not covered by the OS network. Percentages are organised by road types.
the mall and the Olympic park. Further inspection shows that the difference in road linear metres covered by the two networks is unevenly distributed across MSOAs (Figure 1a). Knowing in advance that OS is a higher-level representation of roads, the higher coverage percentage of OSM is not a surprise. However, the large differences between MSOAs are a fact to consider when analysing results in areas with different urban structures. We then examined whether the difference was consistent across street types, measuring the percentage of linear metres of OSM roads that are not covered by the OS dataset. Results show that streets classified as footways, service roads, paths and cycleways in OSM are under-covered by OS (Figure 1d).
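The per-MSOA coverage comparison behind Figure 1a can be sketched as follows; `os_roads`, `osm_roads` and `msoas` are assumed GeoDataFrames rather than names used by the authors.

```python
# Sketch: percentage of OS over OSM road linear metres per MSOA. Assumes
# `os_roads`, `osm_roads` and `msoas` are GeoDataFrames in a metric CRS
# (e.g. EPSG:27700), with `msoas` carrying an 'msoa_code' column.
import geopandas as gpd

def metres_per_msoa(roads, msoas):
    """Total road length (m) falling inside each MSOA polygon."""
    clipped = gpd.overlay(roads, msoas[['msoa_code', 'geometry']],
                          how='intersection', keep_geom_type=True)
    return clipped.geometry.length.groupby(clipped['msoa_code']).sum()

# coverage = 100 * metres_per_msoa(os_roads, msoas) / metres_per_msoa(osm_roads, msoas)
```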
### Network setup
This section introduces some of the commonly used open-source tools to download and construct a road network. OSM data can be downloaded from a number of sources, including the OpenStreetMap website and dedicated download servers like Geofabrik 7, which provide raw OSM data in different formats as files with multiple layers of features. Optimal access to this data can be achieved through many software packages across different programming languages. We mainly consider R and Python packages, as they are widely used in the community. With each being designed for different purposes, the format and attributes of the data can vary. We considered a non-exhaustive list of packages; the main criteria used to select the appropriate one are shown in Table 1. **OSMnx**8 (Boeing, 2017) is a Python package; **osmdata**9 (Padgham et al., 2017) and **osmextract**10 (Gilardi and Lovelace, 2022) are R packages. We examine and compare these different sources for downloading and working with OSM road networks to evaluate which is the most suitable considering the requirements of our analysis (Table 1). We will therefore be using OSMnx as the source of OSM data.
Footnote 7: [https://www.geofabrik.de](https://www.geofabrik.de)
Footnote 8: [https://OSMnx.readthedocs.io/en/stable/](https://OSMnx.readthedocs.io/en/stable/)
Footnote 9: [https://docs.ropensci.org/osmdata/](https://docs.ropensci.org/osmdata/)
Footnote 10: [https://ropensci.github.io/osmextract/index.html](https://ropensci.github.io/osmextract/index.html)
As streets have varying attributes and serve distinct purposes, network modelling for specific modes of transportation, such as buses or private cars, only incorporates streets suitable for those modes. However, when it comes to walking and cycling networks, the definition is less precise, as pedestrians and cyclists can move more freely throughout the city. This has promoted interest in creating alternative network profiles using the OSM network as a base. Therefore, in addition to downloading the desired networks, some packages offer the functionality to filter roads based on a specific mode of transport, such as walking, cycling or driving. The OSMnx package in Python provides profiles for different transport modes by filtering out streets based on their tag keys. The dodgr 11 (Padgham, 2019) package in R creates alternative profiles by weighting segments according to attributes such as speed, preferences and restrictions. Since both profiles rely on the accuracy of the attribute tags given to each street when it is mapped in OSM, we first examined whether some streets were misclassified, resulting in the inappropriate exclusion or inclusion of street segments (Figure 2).
Footnote 11: [https://atfutures.github.io/dodgr/](https://atfutures.github.io/dodgr/)
\begin{table}
\begin{tabular}{l c c c}
 & **osmdata** & **osmextract** & **Geofabrik** & **OSMnx** \\
_Big data_ & no & yes & yes \\
_Optimal for building graphs_ & no & no & yes \\
_High details_ & yes & yes & yes & no \\
\end{tabular}
\end{table}
Table 1: Overview of packages for working with **OSM** road networks. This list is non-exhaustive and contains the options considered (Mahfouz, Arcaute, and Lovelace, 2021; Boeing, 2018; Costa, Marques, and Moura, 2021) as well as the main criteria for selection. The first category, _Big data_, implies that it is possible to download region- and country-scale data sets. _Optimal for building graphs_ means that the downloaded data is structured in a way that allows graphs to be built without further engineering. Usually, it would be divided into two data sets: one containing nodes with their id, coordinates and additional optional attributes; the other containing edge ids, origin node id, destination node id and extra optional arguments such as length. Finally, _High details_ means that several columns of OSM key\(\sim\)tag values are present in addition to the main classification corresponding to the highway key. Additionally, some even more powerful command-line tools are available and would be useful for a bigger scale of analysis, but they are omitted from this work.
Figure 2 identifies the streets that were filtered out (in colours) from each profile (in grey). For instance, in the case of walking (a), some routes along parks and The Greenway footpath, which runs from Victoria Park to the southeast, are excluded from the network as they are classified as cycleways in the OSM dataset. A similar case is that of (b), where the tunnels that cross the Thames are excluded from the cycling network as they are classified as footways. As a result, connectivity costs increase, which may affect global and local patterns, such as connectivity within East London or between the north and south banks of the river.
Based on these initial observations, different network profiles are used to create distinct OD cost matrices between MSOAs, including those provided by popular packages, and one with minimal filtering (only motorways removed, as it is forbidden to cycle or walk on them) to serve as a base model for our tests. The Ordnance Survey (OS) data is also used, as it is the official open reference for the road network in the UK.
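A sketch of this minimally filtered download with OSMnx follows; the place name and Overpass filter string are illustrative, not the authors' exact query.

```python
# Sketch: building the base active-travel network with OSMnx, filtering out
# only motorways and motorway links; the built-in 'walk' and 'bike' profiles
# are shown for comparison.
import osmnx as ox

place = 'Greater London, United Kingdom'
custom_filter = '["highway"]["highway"!~"motorway|motorway_link"]'

G_base = ox.graph_from_place(place, custom_filter=custom_filter)
G_walk = ox.graph_from_place(place, network_type='walk')
G_bike = ox.graph_from_place(place, network_type='bike')
```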
## 3 Routing
The next step in our analysis consists of using the networks to route between origins and destinations. Within the scope of this work, we only look at the distances that separate OD pairs
Figure 2: **OSMnx walk and cycle profiles overlapping the OSM full network.** The OSM full network in different colours, showing the class types that are filtered out in the walk and cycle networks. a) In grey, the OSMnx walk profile on top of the full network. In cyan, The Greenway footpath running from Victoria Park to the southeast, classified as a cycleway. Other footpaths within parks are also classified as cycleways only. b) In grey, the OSMnx cycle profile; in green, the Woolwich and Greenwich foot tunnels highlighted. Given that these tunnels are classified as footways in the OSM network, they are filtered out and not considered within the cycle profile.
in the network, computed with shortest-path algorithms. To accomplish this, we compared a set of commonly used packages for computing shortest paths. We then examined various approaches to determining origin-destination points for commuting, particularly when data is aggregated into geographical units, such as the MSOA level in this case. As a result of this procedure, a cost matrix is created for each network, which serves as input for the following section to predict pedestrian and cycling commuting flows.
### Shortest paths
We consider the distance between an origin and a destination in a specified network to be the length of the shortest path that connects them. An alternative approach, already implemented in certain packages, is to consider travel time. An extra layer of sophistication can also be added by attributing additional weight factors to each link of the network based on its classification and the willingness of cyclists or pedestrians to use it. For example, a commuter might tend to use a smaller, quieter route of greater length rather than a shorter but busy and risky one. For the purposes of our work, we prioritise distance as the most critical factor in commuting decision-making; this approach is also easy to implement for large-scale analysis and can be extended with more variables in future work. The other methods could be useful to learn more about the path choices of commuters, who on the one hand think about their safety and comfort, but on the other aim to minimise their trip distance or duration. The significance of these other factors may be challenging to validate and is left outside the scope of this work.
Based on the outputs from the benchmark shown in table 2, we use the cppRouting package in R to perform the one-to-one and many-to-many calculations and to build distance matrices. This package implements fast, multi-threaded processes, relying on the Dijkstra (Stein et al., 2009), A-star and PHAST algorithms as well as a contraction hierarchy optimisation which greatly reduces computational times on big graphs.
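For illustration, an equivalent many-to-many distance matrix can be built in Python with NetworkX; this is a slow stand-in for the cppRouting pipeline used in the paper, with `G`, `origins` and `destinations` as assumed inputs.

```python
# Sketch: many-to-many shortest-path distance matrix, assuming `G` is a
# graph with a 'length' edge attribute (in metres) and `origins` and
# `destinations` are lists of node ids.
import numpy as np
import networkx as nx

def distance_matrix(G, origins, destinations):
    D = np.full((len(origins), len(destinations)), np.inf)
    dest_index = {d: j for j, d in enumerate(destinations)}
    for i, o in enumerate(origins):
        # Single-source Dijkstra gives the lengths to all reachable nodes.
        lengths = nx.single_source_dijkstra_path_length(G, o, weight='length')
        for d, j in dest_index.items():
            if d in lengths:
                D[i, j] = lengths[d]
    return D
```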
### Selecting origins and destinations
The next aspect of routing consists of determining the actual locations to use as origin and destination points. This step is particularly important when the data is aggregated over relatively large spatial units (MSOAs). For smaller units, like output areas (OAs), this question might not be as relevant, because they cover only the area of a few houses, and thus a regular centroid will not differ significantly from any other node within the area. However, for larger areas that span about 1 km and include multiple streets and overlap with parks, rivers and rail lines, the choice can become significant, as is demonstrated later on. Although it seems appropriate to choose the smallest available resolution, breaking the system down into small sub-units comes at a great computational cost when running any model. Aiming to develop a methodology that could later be scaled up to the country level, we use MSOAs, of which there are 983 in London.
We further consider 3 types of MSOA centroids, illustrated in figure 3. The first is the _geometric_ centroid, which is commonly used in research (Bassolas et al., 2019; Zhong et al., 2014). The second, referred to as the _network_ centroid, is the centroid of a sample of nodes that lie on residential roads, which allows us to exclude parks, industrial and shopping areas. The
\begin{table}
\begin{tabular}{|c|c|c c|c c|} \cline{3-6} \multicolumn{1}{c}{} & & \multicolumn{2}{|c|}{_1 thread, 50x50 OD_} & _4 threads 100x100 OD_ \\ \hline
**package** & **option** & **mean** & **median** & **mean** & **median** \\ \hline
**tidygraph** & - & 5.13 & 5.13 & - & - \\ \hline
**sf networks** & - & 4.10 & 4.06 & - & - \\ \hline
**dodgr** & \multicolumn{1}{c|}{fheap} & 24.12 & 22.92 & 18.66 & 17.81 \\ \hline
**dodgr** & \multicolumn{1}{c|}{bheap} & 16.87 & 16.89 & 13.91 & 13.90 \\ \hline
**dodgr** & \multicolumn{1}{c|}{triheap} & 26.96 & 26.61 & 20.54 & 19.79 \\ \hline
**cppRouting** & \multicolumn{1}{c|}{phast} & 2.40 & 2.41 & 1.37 & 1.37 \\ \hline
**cppRouting** & \multicolumn{1}{c|}{mch} & 0.35 & 0.34 & 0.37 & 0.37 \\ \hline \end{tabular}
\end{table}
Table 2: Mean and median times in seconds to compute a distance matrix between 50 and 100 randomly chosen locations. Different types of heaps are tested for dodgr and two different algorithms for cppRouting, as well as multi-core runs when this functionality is provided. **sf networks** (van der Meer et al., 2022) is a package that, together with **tidygraph** (Pedersen, 2022), relies on **igraph** (Csardi and Nepusz, 2006) for routing. Tests are done with the **microbenchmark** (Mersmann, 2023) package in R. The hardware used is a 2022 M2 MacBook Pro with 16 GB RAM.
third is a tuple of nodes: the population-weighted centroid 12, which accounts for the distribution of residents in space, and the workplace-weighted centroid, obtained using the definition of workplace zones13 from the UK statistics office. We refer to this pair as the _commute_ centroids. For each centroid point, the nearest network node is assigned and the distance between them is added to the final matrices.
Footnote 12: [https://geoportal.statistics.gov.uk/datasets/ons::msoa-dec-2011-population-weighted-centroids-in-england-and-wales/about](https://geoportal.statistics.gov.uk/datasets/ons::msoa-dec-2011-population-weighted-centroids-in-england-and-wales/about)
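A sketch of the three centroid types for a single MSOA follows. The inputs are hypothetical: `msoa_poly` is a shapely Polygon, `nodes` a GeoDataFrame of street intersections with a 'highway' column, and `pop_points` a GeoDataFrame of population points with a 'weight' column.

```python
# Sketch of the three MSOA centroid types described above.
import numpy as np

RESIDENTIAL_TYPES = {'living_street', 'primary', 'secondary',
                     'tertiary', 'trunk', 'residential'}

# 1) Geometric centroid of the polygon itself.
geometric = msoa_poly.centroid

# 2) Network centroid: centre of mass of sampled residential intersections.
sample = nodes[nodes['highway'].isin(RESIDENTIAL_TYPES)]
network = sample.geometry.unary_union.centroid

# 3) Population-weighted centroid (the workplace-weighted one is analogous).
w = pop_points['weight'].to_numpy()
pop_weighted = (np.average(pop_points.geometry.x, weights=w),
                np.average(pop_points.geometry.y, weights=w))
```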
### Flows data
In the following, the methods and data described earlier will be combined with flows data to develop a metric that helps understand the commuting patterns of active travellers.
The commuting data in this work comes from the 2011 UK census and is accessed through the governmental data portal (see details in the appendix). It is available in its raw format for research purposes to individuals affiliated with a university. The flows contain information on the origin area, destination area, and method of travel; for more details on the variables, refer to the online material14. Different levels of spatial units are used in the UK census; this study relies on MSOAs, which are constructed to contain around 8,000 individuals each. Only the data for cycling and walking is used for the flows. Additionally, flows over distances beyond 15 km are considered outliers or collection errors and are excluded.
Footnote 14: [https://github.com/ischlo/QUANT_at](https://github.com/ischlo/QUANT_at)
### Detour index
When measuring the performance of networks in terms of distance, as in the context of transport studies, it is useful to look at a measure called the detour index, also referred to as circuity (Barthelemy 2011), which can take different forms depending on whether we consider a specific route, a node, or the whole network:
\[\delta_{ij,k}=\frac{l_{ij,k}}{d_{ij}} \tag{1}\]
where \(l_{ij,k}\) is the distance in network \(k\) between locations \(i\) and \(j\), and \(d_{ij}\) is the Euclidean distance between them. If aggregating over an origin node and considering a set of destination nodes, which is not necessarily all the nodes in the network:
Figure 3: **Origin-destination centroids (Beddington MSOA illustrated here).** a)The geometric centroids from every MSOA polygon. b) The network centroid based on the distribution of the street network within the MSOA boundary by taking the centre of mass of a sample of road intersections between “living street”, “primary”, “secondary”, “tertiary”, “trunk” and “residential” OSM road types (in blue). c) The ”commute” centroids, based on the population distribution (origin) in pink and job location (destination) in yellow.
\[\delta_{i,k}=\frac{1}{N_{S}-1}\sum_{j\in S,i\neq j}\delta_{ij,k} \tag{2}\]
which gives the average detour index of the routes from node \(i\) to all other nodes of interest in set \(S\) for a specified network \(k\). And, for the whole network over the set of nodes \(S\):
\[\delta_{k}=\frac{1}{N_{S}(N_{S}-1)}\sum_{ij\in S,i\neq j}\delta_{ij,k} \tag{3}\]
where \(N_{S}\) is the number of nodes in set \(S\). The detour index is greater than or equal to 1, and the more efficient, or straight, a network path is, the closer the value will be to 1. Typical values are between 1.1 and 1.5, and values over 2 indicate poor performance (Levinson and El-Geneidy 2009; H. Yang, Ke, and Ye 2018). This index has been linked to important aspects of the network such as accessibility and efficiency. In (Costa, Marques, and Moura 2021), the detour index was found to decrease with time in cycling networks. This indicates a tendency of the network to improve its connectivity, by growing new links or changing the routing possibilities. In (Levinson and El-Geneidy 2009), it was observed that users of the network tend to locate in places that minimise the detour of their commuting trip compared to a random selection of origin-destination pairs. The authors observe different values of average detour for random selections of OD nodes compared to observed OD flows, indicating that the efficiency of the network is better understood through the mobility patterns occurring in it. Hence, the consideration of a subset \(S\) of the nodes involved in observed OD flows, in our case the MSOAs with flows between them, is relevant to better understand commuting trip distributions in the network.
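For concreteness, equations (1)-(3) can be computed directly from two distance matrices; `network_dist` and `euclid_dist` are assumed (N, N) arrays of shortest-path and straight-line distances between the N nodes of a set S.

```python
# Sketch of equations (1)-(3): per-pair detours and their averages over S.
import numpy as np

def detour_matrix(network_dist, euclid_dist):
    with np.errstate(divide='ignore', invalid='ignore'):
        delta = network_dist / euclid_dist          # equation (1)
    np.fill_diagonal(delta, np.nan)                 # exclude i == j
    return delta

delta = detour_matrix(network_dist, euclid_dist)
per_origin = np.nanmean(delta, axis=1)              # equation (2)
whole_network = np.nanmean(delta)                   # equation (3)
```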
In this section we take the different street networks and consider the geometric centroids as origin-destination points for a straightforward comparison.
We plot the detour distance values in Figure 4, translated into walking and cycling times to make the comparison of both modes easier. Differences in the cost of the shortest paths between MSOAs are considerable across space. Results are strongly determined by the uneven distribution of street network provision and network impedance. Due to the lack of crossings in the east, for example, MSOAs on each side of the Thames present the highest detour values, with commuting
Figure 4: **Detour given walking and cycling times when comparing shortest paths from the OSM and dodgr networks with the Euclidean distances between MSOAs.** We converted distance into time cost using 1.3 m/s and 4.2 m/s as the walking and cycling speeds, respectively. We excluded trips which would take longer than 1 hour, as a maximum threshold for commuting by these modes. We visualised differences in time longer than 15 minutes as a way to highlight detours that would have a greater impact on the decision to use active travel.
times reaching a difference of up to 3 hours. Higher differences can be found in MSOAs with large green areas, large block sizes and other barriers like train lines. Based on the results, when comparing the different commuting times it is important to consider that the differences between the network distances are not evenly distributed in space, and that the selection of one or the other may affect the results of future analyses in specific areas of the city.
The distribution of all \(\delta_{ij,k}\) is shown in figure 5b. It is not surprising that the \(\mathbf{dodgr}\) networks with a custom weighting profile, which tends to increase the actual distances, show greater values. The results for the remaining networks are more similar, with slightly increasing
Figure 5: **The detour index** is the ratio of the distance between two points in the chosen network and the shortest, Euclidean, distance, \(\delta_{ij,k}=\frac{l_{ij,k}}{d_{ij}}\), where \(l_{ij,k}\) is the distance in network \(k\), and \(d_{ij}\) is the Euclidean distance between those points. Clearly, \(\delta\in[1,\infty)\). One can see that the kind of routing (fig. 5b) proposed by the \(\mathbf{dodgr}\) package results in longer shortest paths due to the weighting that favours smaller roads over bigger, busier ones for cyclists and pedestrians. The network that comes closest to the theoretical case of \(\frac{\sum_{i,j=1}^{N}\delta_{ij,k}}{N(N-1)}=\bar{\delta}_{k}=1\) is the OSM full network, which contains shortcuts and links not included elsewhere. The Ordnance Survey network (dashed purple line) is the least efficient of the networks without custom weighting profiles. In 5a, one can observe the decreasing average detour and shrinking standard deviation as the Euclidean distance between locations increases. Figures 5c and 5d show the detour distribution, weighted (solid line) and non-weighted (dashed line) by commuter flow, for different intervals of Euclidean trip distance. The table in the appendix presents the same observation for finer distance intervals, removing the effects of the growing number of destinations with distance.
mean and variance from the whole OSM network, followed by the OSMnx walk and cycle profiles and finally OS. To our knowledge, there is no research linking typical detour values to the density of nodes and edges in a network; however, it seems intuitively natural for detour to increase as a consequence of removing edges from a network. Hence, sparser networks are expected to have greater typical detour values.
We further develop the detour index to account for the flows observed across a set of origin-destination pairs. Let \(w_{ij}\) be the flow observed between origin \(i\) and destination \(j\); by weighting the detour values with the flows, we obtain a more accurate representation of the detours that commuters take on average during their commute:
\[\delta_{c}=\frac{1}{W}\sum_{i,j\in S,i\neq j}w_{ij}\delta_{ij,k} \tag{4}\]
where \(W=\sum_{i,j\in S,i\neq j}w_{ij}\) is the total flow between different origins and destinations.
One of the main characteristics of flows is that their magnitude decreases as we get further from an origin. They also show a wider range of values for shorter distances, spanning several orders of magnitude. At the same time, we observe a decreasing average detour index as the Euclidean distance between origin and destination increases (fig. 5a), while the potential number of reachable destinations from an origin grows approximately as the square of the distance. These different trends introduce some non-trivial relations between flows, their magnitude, the distances over which they occur and their detour values. To further uncover the patterns relating these, we look at the distribution of the mean weighted detour of a trip (equation 4) and the mean detour of the network in 300-metre intervals of Euclidean distance between 0 and 15 km. In this way we minimise the effects of varying Euclidean distance and focus only on the typical values of the detour, weighted and non-weighted by flow. Within each resulting interval we test for the significance of the difference between the means, and find that it becomes strong (p-value \(\approx 0\)) inside the interval 900-1200 metres (see table 3) and remains statistically significant for all distances beyond that. Moreover, the t-statistic gets stronger as the Euclidean distance increases. This indicates that for shorter trips, detour has a less significant or no impact on the trip, but as the distance increases, we observe a significant shift of the average weighted detour towards smaller values. This observation can be useful when designing road infrastructure, where aiming to minimise the weighted detour index of a population can potentially have a positive effect on the number of trips made between locations, especially as the Euclidean distance separating points grows. Figures 5c and 5d show example distributions of detours with wider intervals for better visualisation purposes. We observe that as the Euclidean distances increase, the distribution of detours narrows down and shifts towards a smaller mean. This behaviour seems to be a fundamental property of the detour index and has been observed for other real-world road networks. The full results for granular distance intervals and statistical significance are included in the appendix, table 4.
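A sketch of the flow-weighted detour (equation 4) and one way to run the per-interval test follows; `delta`, `flows` and `euclid` are assumed 1-D arrays over OD pairs, and expanding the sample by commuter counts is only an approximation of the weighting — the paper does not state its exact test construction.

```python
# Sketch: equation (4) plus a Welch t-test per 300 m Euclidean-distance bin.
import numpy as np
from scipy import stats

def weighted_detour(delta, flows):
    return np.average(delta, weights=flows)        # equation (4)

for lo in range(0, 15000, 300):                    # 300 m intervals up to 15 km
    mask = (euclid >= lo) & (euclid < lo + 300)
    if mask.sum() < 2:
        continue
    d = delta[mask]
    expanded = np.repeat(d, flows[mask].astype(int))  # expand by commuter counts
    t, p = stats.ttest_ind(expanded, d, equal_var=False)
    print(f'{lo/1000:.1f}-{(lo + 300)/1000:.1f} km: t={t:.1f}, p={p:.3g}')
```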
\begin{table}
\begin{tabular}{c|c|c|c|c|c|c|c} \multicolumn{2}{c|}{_d (km)_} & \multicolumn{3}{c|}{_OS_} & \multicolumn{3}{c}{_OSM_} \\ min & max & \(\delta_{c}\) & \(\delta_{n}\) & t-test & \(\delta_{c}\) & \(\delta_{n}\) & t-test \\ \hline
0.0 & 0.6 & 2.113 & 2.108 & 0.1 & 1.498 & 1.518 & -0.8 \\
0.6 & 0.9 & 1.931 & 1.914 & 0.9 & 1.454 & 1.471 & -1.6 \\
0.9 & 1.2 & 1.780 & 1.791 & -0.7 & 1.384 & 1.423 & -4.9 \\
1.2 & 1.5 & 1.666 & 1.700 & -3.0 & 1.357 & 1.388 & -4.7 \\
1.5 & 1.8 & 1.610 & 1.623 & -1.4 & 1.307 & 1.347 & -8.5 \\
1.8 & 2.1 & 1.538 & 1.535 & 0.5 & 1.266 & 1.313 & -14.0 \\
2.1 & 2.4 & 1.464 & 1.514 & -7.4 & 1.253 & 1.303 & -15.2 \\
2.4 & 2.7 & 1.454 & 1.473 & -3.5 & 1.237 & 1.284 & -16.6 \\
2.7 & 3.0 & 1.402 & 1.457 & -9.5 & 1.232 & 1.274 & -14.8 \\ \end{tabular}
\end{table}
Table 3: This table summarises the detour for commuting trips and for typical network values by ranges of distance between origin and destination (column \(d\)). It shows how they differ significantly, especially as the crow-fly distance grows. In this table, we look at granular intervals within the distance band 0-3 km (dark red distributions from figures 5c and 5d); the full table for up to 15 km is given in the appendix.
## 4 Modelling
This section considers the previous results and integrates the distance matrices produced from the different network profiles and origin-destination centroids into a spatial interaction model. With the available flows data, a doubly constrained gravity model is built and tested.
### Spatial interaction models
This family of models, also referred to as gravity models, has been around for a long time and was formalised in the late 1960s by A. G. Wilson (1971). It allows us to build good predictions of flows based on input variables that are relatively easy to find. The name originates from the similarity with the law of gravity derived by Isaac Newton. The problem has been reformulated and formalised with statistical considerations of the optimal assignment from a set of origins in a grid to a set of available destinations under a cost value that depends on distance. Intuitively, we consider that some factors contribute to a greater flow of people between two given places while others reduce it. Some parameters will thus be directly proportional to the flow of people while others will be inversely so. The factors contributing positively to the flow are usually taken to be the population at the origin and the job availability at the destination, while the distance between the two locations has an inverse effect on the flow. Formally, this means that \(T_{ij}\sim O_{i}D_{j}f(d_{ij})\), where \(O_{i}\) is the population at origin location \(i\), \(D_{j}\) is the employment at destination location \(j\), and \(f(d_{ij})\) is a decreasing function of the distance, usually referred to as the generalised cost. The known constraints of the system impose that all flows originating at a location \(i\) must sum to the local working population, while all flows arriving at a destination \(j\) must sum to the local number of jobs, thus:
\[\begin{split}& O_{i}=\sum_{j}T_{ij},\\ & D_{j}=\sum_{i}T_{ij}\end{split} \tag{5}\]
The travel cost function can be derived (ibid.) and has the form \(f(d_{ij})=e^{-\beta d_{ij}}\), where \(\beta\) is an exponent that depends on the system and is usually calculated for different modes of transport and geographies. Additionally, the origin and destination weighting parameters are obtained through the method of Lagrange multipliers, resulting in an equation of the form:
\[T_{ij}=A_{i}O_{i}B_{j}D_{j}e^{-\beta d_{ij}} \tag{6}\]
which is referred to as the doubly constrained model, since the available information about the system allows constraints to be set on both the origin and destination locations. Alternatively, if some information is missing, it is possible to use only one constraint.
The parameters \(A_{i},B_{j}\) are derived from the expression:
\[\begin{split}& A_{i}=[\sum_{j}B_{j}D_{j}e^{-\beta d_{ij}}]^{-1}\\ & B_{j}=[\sum_{i}A_{i}O_{i}e^{-\beta d_{ij}}]^{-1}\end{split} \tag{7}\]
with values obtained through an iterative process (Deming and Stephan 1940).
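To make the procedure concrete, here is a minimal sketch of the iterative balancing in Python. This is our own illustrative code, not the authors' implementation; the toy data, array names and convergence tolerance are assumptions.

```python
import numpy as np

def doubly_constrained_flows(O, D, d, beta, tol=1e-8, max_iter=1000):
    """Estimate T_ij = A_i O_i B_j D_j exp(-beta * d_ij) by iteratively
    rebalancing A and B (the Deming-Stephan / Furness procedure)."""
    f = np.exp(-beta * d)                  # generalised cost f(d_ij)
    A, B = np.ones(len(O)), np.ones(len(D))
    for _ in range(max_iter):
        A_new = 1.0 / (f * (B * D)).sum(axis=1)        # A_i = [sum_j B_j D_j f_ij]^-1
        B_new = 1.0 / (f.T * (A_new * O)).sum(axis=1)  # B_j = [sum_i A_i O_i f_ij]^-1
        done = np.allclose(A, A_new, rtol=tol) and np.allclose(B, B_new, rtol=tol)
        A, B = A_new, B_new
        if done:
            break
    return (A * O)[:, None] * (B * D)[None, :] * f

# Toy example with 3 origins and 3 destinations (total workers = total jobs).
O = np.array([100.0, 200.0, 150.0])   # working population per origin
D = np.array([180.0, 120.0, 150.0])   # jobs per destination
d = np.array([[1.0, 4.0, 6.0], [4.0, 1.0, 3.0], [6.0, 3.0, 1.0]])  # km
T = doubly_constrained_flows(O, D, d, beta=0.5)
print(T.sum(axis=1))  # row sums reproduce O (equation 5)
print(T.sum(axis=0))  # column sums reproduce D (equation 5)
```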
The flows data are obtained via the census table of origin-destination pairs and the number of commuters by mode of transport. The distance measures can be obtained in different manners, as discussed in section 3. The resulting distance matrices are used in the gravity model further on.
### Case Study: active travel commuters in London.
The methods described in the first two sections will now be combined with the formalism of spatial interaction models to obtain a predictive model of active travel commuters in London, based on the different networks and centroid types that were introduced.
The methods and data described earlier allow for a generalised approach to modelling flows and, when applied to active travel commuters, provide high quality-of-fit results in the case study of London. Figure 6a shows the results of the calibration of the model to maximise the \(r^{2}\) value. The quality of fit of the models using the Euclidean distance between locations as cost is also computed and, despite being lower than that for network distance, provides a good estimate of the flow, with values reaching \(r^{2}=0.75\). One can see that, depending on the network, the maximum is reached for differing values of \(\beta\), and certain types of networks provide better stability of the quality of fit under a varying \(\beta\) exponent than others. While all the networks reach a high and very similar value of around \(r^{2}=0.85\), they do so for different values of \(\beta\), and the best quality of fit is reached with the OSM and OS networks using _commute_ centroids, indicating the importance of the discussion in section 3.2 on knowing where the routing is being done. The network and geometric centroids appear less adapted for routing, based on the quality of fit obtained for all types of network considered.
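The calibration loop itself can be sketched as a simple grid search over \(\beta\), reusing the `doubly_constrained_flows` sketch above; the grid bounds and the use of a squared Pearson correlation as \(r^{2}\) are our own illustrative choices, not a description of the authors' exact procedure.

```python
import numpy as np

def calibrate_beta(O, D, d, T_obs, betas=np.linspace(0.05, 2.0, 40)):
    """Grid-search the cost exponent beta that maximises r^2 between
    predicted and observed flow matrices."""
    best = (None, -np.inf)
    for beta in betas:
        T_pred = doubly_constrained_flows(O, D, d, beta)
        r = np.corrcoef(T_obs.ravel(), T_pred.ravel())[0, 1]
        if r**2 > best[1]:
            best = (beta, r**2)
    return best  # (best beta, best r^2)
```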
Figure 6: Quality-of-fit calibration for different networks, modes and centroids. The most significant impact on the quality of fit occurs when changing centroids in the pedestrian model. Each legend entry first names the network from which the cost matrix is derived (_os_, _osm_, _dodgr_, _norm2_, with _norm2_ indicating the crow-fly distance matrix) and then the type of centroids used.
## 5 Discussion and conclusion
In this work, we first conducted a methodological analysis covering the network, routing and modelling steps. By comparing different walking and cycling networks and prioritising distance as a critical factor in commuting route choice for active travel, we explored their capacity to estimate real-world flows using a spatial interaction model. We use London as a case study, with the 2011 census flows at MSOA level. Throughout the process, we observe that the use of different data sources, network construction, and routing approaches can lead to diverse outcomes in the analysis, both at a broad city-wide scale and at a more local level in different areas of the city. We illustrate the limitations of the OSM data when attempting to select network profiles for specific modes of transport. To counter these limitations, we keep a network that is filtered only to exclude the types of road that are illegal to use as an active traveller, namely motorways and motorway links, and propose a combined active-travel framework allowing us to model cycling and walking trips. The results of the spatial interaction model show that this type of network performs best when associated with what we defined as _commute_ centroids, accounting for the spatial distribution of residents and jobs inside the census areas. The spatial interaction model that we calibrated on this combination of network and centroids was the best fitting and most stable across the cost-function exponent values, highlighting the importance not only of the parameters related to the network itself, but also of those related to the routing process. The quality-of-fit values for all networks were relatively high, and the best predictions were obtained using the unfiltered OSM and OS networks with commute centroids.
We further investigate the commuting trip patterns using the detour index. It proves to be a useful measure for assessing the performance of different networks at minimising the trip length for a given Euclidean distance. We introduce the weighted detour index and show the tendency of London's active travellers to commute in a way that minimises the detour of their trip when the Euclidean distance to their job is beyond 900-1200 metres. Minimising the detour on the way to work seems to be important for active travel commuting, especially as the Euclidean distance between origin and destination increases. We also observe considerable spatial variation in the potential walking and commuting time between MSOAs, which is determined by the uneven distribution and connectivity of the network, as well as by inconsistencies in the data sources, which can impact the models built on top of them. These findings emphasise the importance of careful consideration and validation of the input data coming from any kind of source, whether official or crowd-sourced. The network with the lowest average detour index also showed the best fit in the spatial interaction model. This indicates the possibility of using this index as a guide when designing new infrastructure that aims to connect a population to an employment distribution in a way that minimises the weighted detour of the trips. The main focus of this work is on active travel as a means of commuting, but the applications and possibilities of the models and methods go beyond that and provide a general open-source-data framework for active-travel modelling. Our research has implications for urban planning and policy-making, as it highlights the need for accurate network data and appropriate modelling techniques to inform the development of sustainable transport infrastructure and promote active travel in cities.
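For completeness, a flow-weighted detour index of the kind discussed above could be computed as follows; we assume the usual definition of the detour index as network distance divided by Euclidean distance, and the weighting scheme shown here is an illustrative choice.

```python
import numpy as np

def weighted_detour_index(d_network, d_euclid, flows):
    """Flow-weighted mean of d_network / d_euclid over origin-destination
    pairs, ignoring degenerate pairs with zero straight-line distance."""
    mask = d_euclid > 0
    detour = d_network[mask] / d_euclid[mask]
    w = flows[mask]
    return float((detour * w).sum() / w.sum())
```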
## 6 Acknowledgements
This paper has received funding from the QUANT and RUBICON projects from the Alan Turing Institute under grant TU/ASG/R-SPEU-102.
|
2301.01239 | Use of survival analysis and simulation to improve maintenance planning
of high voltage instrument transformers in the Dutch transmission system | This paper describes the use of survival analysis and simulation to model the
lifetime of high voltage instrument transformers in the Dutch transmission
system. To represent asset aging, the non-parametric Kaplan-Meier method is
used to enable the fitting of Weibull distribution. Such an approach is
implemented on three different voltage levels, namely 110kV, 150kV, and
220/380kV. Real failure and inspection data is used to achieve a realistic
failure model of the instrument transformers. Failure and maintenance data
occurring between 1989 and 2021 have been used for this study. In spite of
missing and low-quality data, a rich failure database could still be prepared.
This study also offers insights into factors (i.e., voltage level, in-service
age) influencing the remaining life from both graphical survival function and
parametric Weibull distribution analysis. Based on the derived statistics,
future possible maintenance planning scenarios are simulated under a complex
system modelling framework in a digital twin enabled platform. Eventually, the
scenarios are evaluated in terms of replacement costs (CAPEX), inspection
hours, and unavailability hours. | Swasti R. Khuntia, Fatma Zghal, Ranjan Bhuyan, Erik Schenkel, Paul Duvivier, Olivier Blancke, Witold Krasny | 2023-01-03T17:33:40Z | http://arxiv.org/abs/2301.01239v1 | # Use of survival analysis and simulation to improve maintenance planning of high voltage instrument transformers in the Dutch transmission system
###### Abstract
This paper describes the use of survival analysis and simulation to model the lifetime of high voltage instrument transformers in the Dutch transmission system. To represent asset aging, the non-parametric Kaplan-Meier method is used to enable the fitting of Weibull distribution. Such an approach is implemented on three different voltage levels, namely 110kV, 150kV, and 220/380kV. Real failure and inspection data is used to achieve a realistic failure model of the instrument transformers. Failure and maintenance data occurring between 1989 and 2021 have been used for this study. In spite of missing and low-quality data, a rich failure database could still be prepared. This study also offers insights into factors (i.e., voltage level, in-service age) influencing the remaining life from both graphical survival function and parametric Weibull distribution analysis. Based on the derived statistics, future possible maintenance planning scenarios are simulated under a complex system modelling framework in a digital twin enabled platform. Eventually, the scenarios are evaluated in terms of replacement costs (CAPEX), inspection hours, and unavailability hours.
S.R. Khuntia (\(\copyright\)), F. Zghal, R. Bhuyan, E. Schenkel
Asset Management Onshore, TenneT TSO B.V., Arnhem, The Netherlands
e-mail: [email protected]
## 1 Introduction
TenneT, as a European transmission system operator, is facing power supply reliability challenges that originate in a globally aging infrastructure and in the increasing complexity of business operations in the context of the energy transition. While power transformers, due to the criticality of their function on the grid, have been the focus of many studies, concerns have been raised recently about the lack of focus on long-term asset management of Instrument Transformers (ITs). ITs play an important
role in the metering of electrical quantities and the protection of other system components. Due to their importance, any unplanned unavailability due to failures can cause considerable outage costs to utilities. Consequently, it is crucial to properly characterize the aging of ITs using statistical approaches that enable prediction of the evolution of IT population failures over the coming years. In addition, this will yield valuable perspectives for optimizing maintenance and replacement policies accordingly. The reliability analysis of ITs depends strongly on the defined maintenance strategies, which must provide a reliable and safe power supply. By definition, asset management involves strategies to explore, identify, plan, invest, utilize, maintain, replace, and dispose of assets while maximizing their value and performance under some prescribed financial constraint (Khuntia et al., 2016). Since ITs play such an important role, it is expected that statistical failure analysis will give the asset management team at TenneT better insight into actual maintenance planning performance. Technically, in the reliability analysis of ITs, it is interesting to identify the independence or dependence of the specific covariates that characterize the operation of the IT.
For any kind of data-driven methodology and, in particular, for asset reliability characterization, a robust database is needed, both in terms of volume and quality (Balzer and Neumann, 2011). However, it can be argued that there should be a preference for robust data and that there are techniques that can be used to cope with data discrepancies. In our case, the historical failure data play an important role in understanding the behavior of ITs. A literature study reveals that explosion is one of the most frequently reported failure modes. The impact of an explosion relates not only to the direct cost of IT replacement but also to the possible replacement of neighboring equipment damaged in the explosion. CIGRE reports are one of the primary sources of publicly available failure databases for ITs. Three series of CIGRE reports are available online. The first report was published in 1990 and covered failures of ITs (voltage \(>\)72.5kV) in about 15 countries; the survey covered 136033 transformers in the period from 1970 to 1986 (CIGRE, 1990). The second report, published in 2009, presented results for 131207 ITs (voltage \(>60\)kV) in the period from 1985 to 1995 (CIGRE, 2009). The third, the results of a wider international survey, was published in 2012. It collected population and failure data for ITs of voltage \(>60\)kV (excluding AIS ring current transformers) that were in service during the years 2004 to 2007 inclusive (CIGRE, 2012). Some other failure investigations have been reported (Poljak et al., 2010; Raetze et al., 2012; Tee et al., 2021), where the authors focus on the reduction of IT explosions and better condition monitoring of ITs. Nonetheless, failure is probabilistic in nature, and its relationship with asset data and failure cause needs investigation. The use of a semi-parametric Cox model was reported in (Tee et al., 2021). The authors elaborated on the factors influencing the probability of failure through analysis of lifetime data, using both graphical survival function plots and a semi-parametric Cox model.
With the use of Simulation Digital Twin technology from Cosmo Tech, TenneT analyzed various maintenance strategies. The Digital Twin was calibrated
on the recorded historical failure data using a statistical technique relying on survival analysis. A literature study shows that survival analysis has been used for power transformer reliability studies of around 2000 units at a Canadian utility and around 6000 units at an Australian utility (Picher et al., 2014; Martin et al., 2018). Ref. (Picher et al., 2014) described the data of the Canadian utility Hydro-Quebec, where a good match was obtained using the Kaplan-Meier estimate and the Weibull distribution; the authors concluded that the Weibull distribution is a better fit, and the results looked promising. Similarly, ref. (Martin et al., 2018) followed a similar strategy for Australian data. The authors based the choice between the Kaplan-Meier estimate and the Weibull distribution on the different voltage classes. In practice, Weibull distributions fitted to empirical failure data are commonly used to calculate life expectancy. However, the challenge in applying such a distribution to electrical assets is that often the root cause of failure is not related to the normal aging of the asset, but rather to external factors. The aim of this paper is three-fold: (1) use of real failure data to model a time-varying failure rate based on Weibull parameters obtained from Kaplan-Meier survival analysis, (2) investigation of extrapolation methods to maximize the value of existing inspection results across the IT population, and (3) use of digital twin enabled simulation to tune the required resources necessary to realize TenneT's strategy for the considered substation equipment maintenance and renewals.
## 2 Data and Methodology
### Description of Data
As of the date of writing this paper, TenneT owns and maintains a large fleet of ITs in the Dutch high voltage AC network (i.e., 110, 150, 220 and 380kV), as shown in Figure 1(a). It is of interest to see the age profile of the existing population, in terms of years since manufacture, because reliability is often related to age. However, lifetime data can be complicated, as the service lives of some ITs extend over several decades. At TenneT, the expected design life of an IT is 45 years. This age is affected and reduced, sometimes substantially, depending on the design or utilization of the IT, i.e. its loading or the environment to which it is exposed. In some cases, a good maintenance scheme can even increase the replacement age. Although there is no prescribed replacement age, it is the responsibility of the asset management department to formulate the maintenance policies based on failure history. For this study, failure data were obtained from various sources, ranging from failure records and reports to discussions with experts. Fortunately, TenneT has not recorded a high number of major failures since 1989. A major failure is defined as a sudden explosive event that has caused an immediate emergency system outage or trip. Figure 1(b) lists the failure events with respect to manufacturer (coded for confidentiality) and IT age.
The failure list alone was not adequate for building a statistical model. In addition, maintenance reports (or work orders) and expert knowledge were used to populate the list and gain the utmost information. A work order is a document that provides all
the information about a maintenance task and outlines a process for completing that task. In the case of ITs, corrective work orders are used (the others being periodic maintenance and inspection work orders). Discussion with experts led us to use the work orders issued when an IT was out of service for any kind of maintenance. Figure 1(c) shows the total recorded failures for the IT population. One observation worth noting is that the number of failures has increased significantly in recent years.
### Survival Analysis and Failure Rate Modelling
Survival analysis is a statistical technique used to estimate the lifespan of a particular population under study. It is an analysis of time-to-event data (Wang et al., 2019). One of the most widely used survival analysis techniques is the Kaplan-Meier (KM) estimate (Bland and Altman, 1998). The KM estimator uses lifetime data to perform survival analysis. Although it is widely used in medical research to gauge the proportion of patients living for a specific length of time after treatment, it has also been used in the power systems sector to model the survival of electric assets (Martin et al., 2018). The use of the KM estimate is supported by two reasons: first, it does not assume that the data fit a statistical distribution, and second, it allows the inclusion of censored data (ITs that had not failed by mid-2021).
For a population, the survival function \(\hat{S}(t)\) is defined as:
\[\hat{S}(t)=\prod_{i:\,t_{i}\leq t}\left(1-\frac{d_{i}}{n_{i}}\right)\]
where \(t_{i}\) is a time at which at least one event happened, \(d_{i}\) is the number of events that happened at time \(t_{i}\), and \(n_{i}\) is the number of individuals known to have survived up to time \(t_{i}\) (Davidson-Pilon, 2019). In our study, the estimates are calculated for three different voltage levels, and \(n_{i}\) considers observations that occurred between the oldest IT age and mid-2021. An important aspect of survival analysis is the treatment of censored data. Censoring occurs when the value of an observation is only known
Figure 1: (a) Voltage-based IT population, (b) actual failure list until July 2021, and (c) failure list populated from work orders and expert opinion until July 2021
to some extent. Censored data are often encountered when analysing practical life data, especially in the case of electrical power systems, where most of the installed equipment is still in service and the exact age of equipment at the moment of failure is often unknown (CIGRE, 2017). In this study, a large amount of data falls into the right-censored (suspended) category. A dataset is termed right censored or suspended when it is composed of components that did not fail. The term right censored indicates that the event is located to the right of the dataset, which implies that certain components are still operating. In our dataset, we had to deal with right censoring but no left truncation, since the year of construction was known to us. Ignoring truncation causes bias in the model's estimation.
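As an illustration of the estimator above, a minimal Kaplan-Meier implementation handling right-censored units might look as follows. This is our own sketch (in practice one would use a library such as lifelines, documented in the cited Davidson-Pilon reference); the toy ages are invented.

```python
import numpy as np

def kaplan_meier(ages, failed):
    """Kaplan-Meier estimate S(t).
    ages:   age at failure, or current age (mid-2021) for censored units
    failed: 1 if the unit failed at that age, 0 if right-censored"""
    ages = np.asarray(ages, dtype=float)
    failed = np.asarray(failed, dtype=int)
    S, curve = 1.0, []
    for t in np.unique(ages[failed == 1]):              # distinct event times t_i
        n_i = int((ages >= t).sum())                    # units at risk just before t_i
        d_i = int(((ages == t) & (failed == 1)).sum())  # events at t_i
        S *= 1.0 - d_i / n_i
        curve.append((t, S))
    return curve

# Toy fleet: several units are still in service (censored).
ages = [12, 30, 45, 45, 50, 55, 60, 61]
failed = [0, 1, 0, 1, 1, 0, 1, 1]
for t, s in kaplan_meier(ages, failed):
    print(f"S({t:.0f}) = {s:.3f}")
```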
The IT dataset was split into three different families, each one with its own degradation law, based on voltage level, as shown in Figure 2. A useful statistic in this analysis is the median survival time, which defines the point in time where on average 50% of the population should have failed. For 110kV, the median survival time is 61 years. However, the median survival time for 150, 220 and 380kV is infinite because there has been an insufficient number of failures to determine it. In such cases, the two best options are:
1. use another quantile (e.g. 0.75) to compare the groups;
2. approximate the survival curve by means of a parametric fit and derive the median survival time using the model.
The second option is chosen in our study, since all three voltage levels can be modelled using a parametric fit assuming that failure times follow a Weibull distribution. In other words, the Weibull distribution is used to parameterize the KM estimate. The Weibull distribution is a widely used method to analyse the statistical features of failures (Rinne, 2008). The probability density function \(f(t)\) and cumulative distribution function \(F(t)\) are defined as: \(f(t)=\beta\frac{t^{\beta-1}}{\eta^{\beta}}e^{-\left(\frac{t}{\eta}\right)^{\beta}}\) and \(F(t)=1-e^{-\left(\frac{t}{\eta}\right)^{\beta}}\); where \(t\) is the time, \(\beta\) is the shape and \(\eta\) is the scale parameter. Table 1 shows the different parameters calculated for our study from the corresponding survival function.
\begin{table}
\begin{tabular}{l l l l l l} \hline
**Voltage (kV)** & **No. of ITs** & **No. censored** & \(\mathbf{\beta}\) & \(\mathbf{\eta}\) & **median** \\ \hline
110 & 3168 & 255 & 6.67 & 63.79 & 61 \\ \hline
150 & 10058 & 298 & 6.42 & 74.20 & infinity \\ \hline
220 and 380 & 2982 & 25 & 5.65 & 77.05 & infinity \\ \hline \end{tabular}
\end{table}
Table 1: Statistics and Weibull parameters.
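The medians in Table 1 can be cross-checked from the Weibull parameters, since the median of a Weibull distribution is \(\eta(\ln 2)^{1/\beta}\); for 150kV and 220/380kV, where the KM median is undetermined, the parametric fit yields the finite estimates the non-parametric curve cannot. A quick illustrative check (our own snippet):

```python
import math

def weibull_median(beta, eta):
    """Solve F(t) = 0.5 for the Weibull CDF: t = eta * ln(2)^(1/beta)."""
    return eta * math.log(2.0) ** (1.0 / beta)

print(round(weibull_median(6.67, 63.79)))  # 110kV      -> ~60 years (KM: 61)
print(round(weibull_median(6.42, 74.20)))  # 150kV      -> ~70 years
print(round(weibull_median(5.65, 77.05)))  # 220/380kV  -> ~72 years
```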
Figure 2: Kaplan-Meier estimate of all different voltage levels.
## 3 Modelling in Cosmo Tech Asset and Simulations
Founded in 2010, Cosmo Tech is a technology company and a pioneer in the modeling of complex systems ([https://cosmotech.com/](https://cosmotech.com/)). Relying on its industry-validated modeling and simulation software platform, Cosmo Tech has developed a solution called Cosmo Tech Asset, henceforth called CTA. CTA makes it possible to build digital twins of asset portfolios in their full complexity, including network dependencies, operating strategies, and dynamic resource allocation.
### Cosmo Tech Asset Platform
The different steps involved in the CTA platform are:
1. Experiment with the CTA platform's pre-built health assessment methods and compare the results with internal initiatives. For health assessment, the asset health index is a key simulation variable; it is described in the next sub-section.
2. Demonstrate the calibration of the reliability law (using the Weibull distribution) for simulations against the up-to-date condition of ITs, as well as against historical IT-related data, such as field observations or inspection data and measurement inputs.
3. Investigate the functional possibilities that would allow leveraging existing inspection results across ITs using extrapolation methods when applicable, and therefore maximize the value of inspection results.
4. Finally, based on the achieved health assessment technique, use the simulation platform to tune the required resources necessary to realize TenneT's strategy for the considered IT maintenance and replacements.
### TenneT Asset Health Index
For health assessment, the TenneT asset health index (AHI) is considered; it is shown in Table 1(a) (TenneT, 2021). The AHI is based on asset age and failure probability, and it is used to drive short-term maintenance and long-term replacement strategies. It provides a consistent way to compare the overall health of TenneT's assets.
The evaluation of the AHI is based on two metrics:
1. _probability of failure_ of IT in the coming years for AHI score of 1 to 6, and
2. _age_ of IT for AHI score of 7 to 10.
In addition to the AHI, the study of ITs uses a reliability law from which failures are drawn during the simulations. The reliability law corresponds to the KM survival function and the Weibull estimates described in section 2. These laws have a cumulative distribution function which represents the probability of a failure occurring before a certain age. The probability of failure over the next three years can then be evaluated using the following formula:
\[\begin{split} P(X<t+3\mid X>t)&=1-P(X>t+3\mid X>t)\\ &=1-\frac{P(X>t+3\,\cap\,X>t)}{P(X>t)}=1-\frac{P(X>t+3)}{P(X>t)}\\ &=1-\frac{1-P(X<t+3)}{1-P(X<t)}=1-\frac{1-F(t+3)}{1-F(t)}\end{split}\]
where,
\(\bullet\)\(F\) is the cumulative distribution function of the reliability law
\(\bullet\)\(X\) is a random variable representing the occurrence of a failure.
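A hedged sketch of this computation, plugging in the Weibull CDF from section 2 (our own code; the example parameters are the 110kV values from Table 1 and the asset age is invented):

```python
import math

def weibull_cdf(t, beta, eta):
    return 1.0 - math.exp(-((t / eta) ** beta))

def failure_prob_next_3_years(t, beta, eta):
    """P(X < t+3 | X > t) = 1 - (1 - F(t+3)) / (1 - F(t))."""
    return 1.0 - (1.0 - weibull_cdf(t + 3.0, beta, eta)) / (1.0 - weibull_cdf(t, beta, eta))

# A 40-year-old 110kV instrument transformer:
print(f"{failure_prob_next_3_years(40.0, 6.67, 63.79):.3f}")
```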
### Simulation
The reliability law was used to evaluate different scenarios for efficient maintenance planning. A simulation period of 100 years is chosen for this study, since it is assumed that the most recent IT replacements will be in operation until the end of this century. The time-based scenario is the current maintenance planning at TenneT. It is compared against a condition-based scenario. Both scenarios are explained in detail in Table 2, and the resources are listed in Table 2(b).
In principle, both scenarios are very similar in the sense that the same simulation model dataset is used. The difference lies in the trigger for the replacement activities of the 110/150kV assets. In the time-based scenario, which represents the current way of working, the trigger is based on the real age of the asset. As soon as the
\begin{table}
\begin{tabular}{l l l l} \hline
 & & **Condition-based** & **Time-based** \\ \hline
Replacement & 220/380kV & 45 years & 45 years \\
 & 110/150kV & AHI score red or purple & 45 years \\ \hline
Inspections on bay every 3, 6, 12 months & 220/380kV & No inspections & No inspections \\
 & 110/150kV & Time-based starting at 25 years & Time-based starting at 25 years \\ \hline \end{tabular}
\end{table}
Table 2: Different Scenarios under Study.
asset reaches 45 years of age, replacement is triggered and the action is performed, as resources are unlimited. On the other hand, in the condition-based scenario, the trigger is based on the apparent age of the asset. The apparent age is an attribute of every asset that reflects its degradation rate, and it can differ from the real age of the asset. If the apparent age is higher than the real age, the asset degrades faster than normal. If the apparent age is lower than the real age, the asset degrades slower than normal. When the apparent age of the asset reaches 50 or 54, the asset has reached an AHI score of 6 or 3 respectively, that is, red or purple (see Table 1(a)), and the replacement action is triggered.
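A minimal sketch of the two triggers just described (our own illustrative code; the thresholds 45, 50 and 54 follow the text, everything else is assumed):

```python
def replacement_due(real_age, apparent_age, strategy):
    """Replacement trigger for a 110/150kV IT under the two scenarios."""
    if strategy == "time":
        return real_age >= 45          # current way of working
    if strategy == "condition":
        return apparent_age >= 50      # 50 -> AHI 6 (red); 54 -> AHI 3 (purple)
    raise ValueError(f"unknown strategy: {strategy}")
```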
Figure 4: 40 FTE constrained Scenarios Simulation.
Figure 3: Unconstrained Scenarios Simulation.
From the figures, two conclusions can be drawn: (1) replacement activities are the major cost driver in the TOTEX (_Total Expenses_), and (2) human resources (HR) costs are the major cost driver in the replacement costs. Simulation results show that if HR availability is restricted, there is no significant difference between the time-based and condition-based replacement strategies. In fact, switching to a condition-based strategy might not be beneficial in that case, since it comes with change and investment for little to no reward. If HR availability is guaranteed for the foreseeable future, then it is highly beneficial to switch from a time-based replacement strategy to a condition-based strategy, as this would contribute to flattening the replacement curve. This would also represent a lot of work at the beginning, to prepare the necessary processes and investments for the new strategy, but would lead to significant gains in the long term.
## 4 Conclusion
Maintenance planning of high voltage ITs using real data from the Dutch transmission system operator was illustrated in this study. The study aimed at understanding how digital twin enabled technology, along with failure data, can help TenneT devise better future maintenance strategies. The strategies aim at easing financial decisions related to replacements (in terms of flattening the replacement curve) and the unavailability of ITs in the network. Working with real data uncovered several challenges, including missing data (in both quantity and quality) and outliers. The non-parametric Kaplan-Meier survival analysis helped in the parameter estimation of the Weibull distribution. TenneT data could be translated into the data format used in the digital twin CTA tool, meaning that our data could easily be adapted to other software platforms. It is worth mentioning that in this study, neither data ownership nor data confidence hindered progress. Data confidence was built up, although multiple data sources had to be aligned. TenneT partnered with Cosmo Tech to build the data ownership philosophy for a successful digital twin implementation for maintenance planning.
Figure 5: 60 FTE constrained Scenarios Simulation. |
2310.18029 | Observing galaxy clusters and the cosmic web through the Sunyaev
Zel'dovich effect with MISTRAL | Galaxy clusters and the surrounding medium can be studied using X-ray
bremsstrahlung emission and the Sunyaev Zel'dovich (SZ) effect. Both astrophysical
probes sample the same environment with different parameter dependences. The
SZ effect is relatively more sensitive in low density environments and thus is
useful to study the filamentary structures of the cosmic web. In addition,
observations of the matter distribution require high angular resolution in
order to be able to map the matter distribution within and around galaxy
clusters. MISTRAL is a camera working at 90GHz which, once coupled to the
Sardinia Radio Telescope, can reach $12''$ angular resolution over $4'$ field
of view (f.o.v.). The forecasted sensitivity is $NEFD \simeq 10-15mJy \sqrt{s}$
and the mapping speed is $MS= 380'^{2}/mJy^{2}/h$. MISTRAL was recently
installed at the focus of the SRT and soon will take its first photons. | E. S. Battistelli, E. Barbavara, P. de Bernardis, F. Cacciotti, V. Capalbo, A. Carbone, E. Carretti, D. Ciccalotti, F. Columbro, A. Coppolecchia, A. Cruciani, G. D'Alessandro, M. De Petris, F. Govoni, G. Isopi, L. Lamagna, E. Levati, P. Marongiu, A. Mascia, S. Masi, E. Molinari, M. Murgia, A. Navarrini, A. Novelli, A. Occhiuzzi, A. Orlati, E. Pappalardo, A. Paiella, G. Pettinari, F. Piacentini, T. Pisanu, S. Poppi, I. Porceddu, A. Ritacco, M. R. Schirru, G. P. Vargiu | 2023-10-27T10:07:25Z | http://arxiv.org/abs/2310.18029v1 | # Observing galaxy clusters and the cosmic web through the Sunyaev Zel'dovich effect with MISTRAL
###### Abstract
Galaxy clusters and the surrounding medium can be studied using X-ray bremsstrahlung emission and the Sunyaev Zel'dovich (SZ) effect. Both astrophysical probes sample the same environment with different parameter dependences. The SZ effect is relatively more sensitive in low density environments and is thus useful to study the filamentary structures of the cosmic web. In addition, observations of the matter distribution require high angular resolution in order to map the matter distribution within and around galaxy clusters. MISTRAL is a camera working at 90GHz which, once coupled to the Sardinia Radio Telescope, can reach \(12^{\prime\prime}\) angular resolution over a \(4^{\prime}\) field of view (f.o.v.). The forecasted sensitivity is \(NEFD\simeq 10-15mJy\sqrt{s}\) and the mapping speed is \(MS=380^{\prime 2}/mJy^{2}/h\). MISTRAL was recently installed at the focus of the SRT and will soon take its first photons.
## 1 Introduction
The Cosmic Microwave Background (CMB) represents one of the most valuable sources of cosmological information. Studying the primary anisotropies and the polarization of the CMB has allowed us to enter the era of so-called precision cosmology. Within this framework, we can derive the cosmological parameters with extreme precision and know the energy content of our universe to a fraction of a percent [1; 17].
On the other hand, the nature and the physics of most of the energy content of our universe are still unknown. 68.3% of the energy content of our universe is in the form of dark energy, which is responsible for the acceleration of the universe's expansion. 26.8% of it is in the form of dark matter, which can interact only gravitationally with the remaining baryonic matter. In addition, the observed baryonic matter in the local universe is still small compared to what
is predicted by nucleosynthesis and by measurements of the CMB power spectrum (see, e.g., [17]). A diffuse baryonic dark matter component (the missing baryons) could explain, at least in part, the apparent discrepancy between observations and cosmological estimates [12].
Hydrodynamical simulations of large-scale structure (see, e.g., [5]) show that at low redshifts these missing baryons should lie in the temperature range \(10^{5}<T<10^{7}\) K, in a state of warm-hot gas not yet observed through its soft X-ray emission. This warm-hot intergalactic medium (WHIM) is arranged in the form of filamentary structures of low-density intergalactic medium connecting (and surrounding) the clusters of galaxies in the so-called cosmic web.
## 2 The Sunyaev Zel'dovich effect in galaxy clusters and in filaments
### Thermal Sunyaev Zel'dovich effect
It is well known that the CMB has an almost perfect black body spectrum. However, when CMB photons scatter off the hot electrons present in the Intra-Cluster Medium (ICM) of galaxy clusters, they undergo inverse Compton scattering, resulting in a distortion of the CMB frequency spectrum.
This effect (the Sunyaev Zel'dovich, SZ, effect [18]) is due to the energy injection originating from the hot electron gas in galaxy clusters and the surrounding medium. This secondary anisotropy produces a brightness change in the CMB that can be detected at millimeter and submillimeter wavelengths, appearing as a negative signal (with respect to the average CMB temperature) at frequencies below \(\simeq\)217GHz and as a positive signal at higher frequencies. The SZ intensity change directly depends on the electron density of the scattering medium, \(n_{e}\), and on the electron temperature \(T_{e}\), both integrated over the line of sight \(l\), and its spectrum can be described by the following differential intensity:
\[\frac{\Delta I(x)}{I_{0}}=y\frac{x^{4}e^{x}}{(e^{x}-1)^{2}}\left(x\coth\frac{x }{2}-4\right)=yg(x) \tag{1}\]
where \(I_{0}=\frac{2h}{c^{2}}\left(\frac{k_{B}T_{CMB}}{h}\right)^{3}\), \(T_{CMB}\) is the CMB temperature, \(x=\frac{h\nu}{k_{B}T_{CMB}}\) is the adimensional frequency, and \(y=\int n_{e}\sigma_{T}\frac{k_{B}T_{e}}{m_{e}c^{2}}dl\) is the Comptonization parameter; \(\sigma_{T}\) is the Thomson cross section, \(k_{B}\) is the Boltzmann constant, \(m_{e}\) is the electron mass, and \(c\) is the speed of light in vacuum. The Comptonization parameter \(y\) is the integral along the line of sight \(l\) of the electron density \(n_{e}\) weighted by the electron temperature \(T_{e}\), and is the quantity that quantifies the SZ effect: it can be seen as the integrated pressure along the line of sight through the galaxy cluster.
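To make the spectral behaviour of equation (1) concrete, the snippet below (our own illustrative code, assuming \(T_{CMB}=2.725\) K) evaluates \(g(x)\) in the MISTRAL band and locates the null of the thermal SZ effect.

```python
import numpy as np
from scipy.optimize import brentq

h, k_B, T_cmb = 6.62607e-34, 1.380649e-23, 2.725  # SI units

def g(nu_ghz):
    """Thermal SZ spectral shape g(x), with x = h*nu / (k_B * T_CMB)."""
    x = h * nu_ghz * 1e9 / (k_B * T_cmb)
    return x**4 * np.exp(x) / (np.exp(x) - 1.0) ** 2 * (x / np.tanh(x / 2.0) - 4.0)

print(g(90.0))                  # negative: a decrement in MISTRAL's band
print(brentq(g, 100.0, 400.0))  # crossover at ~217 GHz, as stated above
```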
It turns out that the same electrons that scatter the CMB photons in galaxy clusters also emit in the X-rays by bremsstrahlung. The bremsstrahlung emission depends on \(n_{e}\) and on \(T_{e}\) with different dependencies than the SZ effect. In particular, X-ray emission is proportional to \(n_{e}^{2}\); thus the SZ effect, which is proportional to \(n_{e}\), is relatively more sensitive to low density regions. For this reason, it has been proposed to use the SZ effect for low density environments such as the outskirts of galaxy clusters and the filamentary structures between them.
### Matter distribution
Matter distribution in our universe is clearly non-uniform, and hydrodynamical simulations predict that matter is distributed in a so-called cosmic web. Simulations can test how structures form and thus investigate the interplay between baryonic matter, dark matter and dark energy. Focussing on scales of a few \(Mpc\) allows us to track the progenitors of a group of galaxies or a galaxy cluster. Small-mass objects form first at z\(>\)5, and quickly grow in size
and violently merge with each other, creating increasingly larger systems. Hydrodynamical simulations of a pre-merging pair, adapted to the Comptonization parameter \(y\) observable, show observable over-densities at angular resolutions ranging from _arcmin_ to tens of _arcsec_ [19]. This drives the necessity to observe the SZ effect with high angular resolution, without losing the large scales, and with high sensitivity (10\({}^{\prime\prime}\) resolution with a few _arcmin_ f.o.v.).
## 3 MISTRAL receiver
The MIllimeter Sardinia radio Telescope Receiver based on Array of Lumped elements kids (MISTRAL) is a cryogenic camera working at 90GHz, between 74GHz and 103GHz. It receives radiation from the 64\(m\) Sardinia Radio Telescope. MISTRAL hosts an array of 415 Kinetic Inductance Detectors (KIDs) and will measure the sky with 12\({}^{\prime\prime}\) angular resolution over a 4\({}^{\prime}\) f.o.v.. MISTRAL has recently started its commissioning phase, and in 2024 it will start operations as a facility instrument, part of the renewed SRT receiver fleet.
The Sardinia Radio Telescope (SRT) [3] is a Gregorian-configuration, fully steerable radio telescope with a 64m primary mirror, which can work from 300MHz to 116GHz. It is a multipurpose instrument with a wide variety of applications, and it started its scientific programs in 2016. In 2018, a National Operational Program (PON) grant was assigned to INAF with the aim of fully exploiting the SRT's capability to reach mm wavelengths up to 116GHz [10]. Among other scientific upgrades, one of the work packages includes a millimetric continuum camera working at 90GHz\(\pm\)15GHz: the MISTRAL receiver, which was built at Sapienza University [9; 2].
### MISTRAL cryogenic system
MISTRAL is a cryogenic camera hosting refocussing optics and an array of Kinetic Inductance Detectors (KIDs). Our KIDs are superconducting detectors made out of a Titanium-Aluminium (Ti-Al) bilayer. The critical temperature \(T_{c}\) of this alloy is 945mK, and thus the detectors have to be cooled down to temperatures well below \(T_{c}\). This, in addition to the necessity of cooling the detectors to reduce noise, makes MISTRAL a fairly complicated cryogenic camera. MISTRAL employs a Sumitomo 1.5W Pulse Tube cryocooler1 and a twin Helium-10 closed-cycle refrigerator2, and was assembled in the UK by QMC instruments3.
Footnote 1: [https://www.shicryogenics.com](https://www.shicryogenics.com)
Footnote 2: [https://www.chasecryogenics.com/](https://www.chasecryogenics.com/)
Footnote 3: [http://www.qmcinstruments.co.uk/](http://www.qmcinstruments.co.uk/)
One of the biggest challenges for MISTRAL is the necessity of working in the Gregorian room of the SRT. This implies that the receiver moves with the telescope, and thus the cryostat will not be steady nor in the vertical position, as cryogenic equipment usually needs to be. This has two consequences: a) the insertion of the Pulse Tube head and the refrigerator into the cryostat is such that they both work in the nominal vertical position when the telescope points at an elevation of 57.5\({}^{\circ}\). b) The compressor which ensures the operation of the cryocooler has to be placed in a position which does not change its inclination. This is possible only in the compressor room, which is 120m away from the Gregorian room. The possibility of operating the cryocooler at such a distance, with 120m flexible helium lines, was previously tested and proved to be feasible, although with some loss of efficiency [7]. In this way, MISTRAL has been tested to work properly in the inclination range \(\pm\)25\({}^{\circ}\), resulting in an elevation range of 32.5-82.5\({}^{\circ}\) with no degradation of the thermal performance.
### MISTRAL optics
The optical design of MISTRAL includes two Anti-Reflection Coated (ARC) Silicon lenses able to image the Gregorian focus onto the array of detectors. The detectors are coupled to radiation through open space (filled array), so a cryogenic cold stop placed between the two lenses is needed to reduce the background and avoid stray light. The band of operation, as well as the reduction of the load onto the different stages of the cryostat, is provided by a set of quasi-optical radiation filters produced by QMC instruments4, anchored at the different thermal stages of the cryostat (see Fig. 1).
Footnote 4: [http://www.terahertz.co.uk/qmc-instruments-ltd](http://www.terahertz.co.uk/qmc-instruments-ltd)
The two Silicon lenses re-image 4\({}^{\prime}\) of the SRT focal plane onto the array of 415 KIDs. They are anti-reflection coated with Rogers RO30035. Their diameters are 290mm and 240mm respectively, while the aperture cold stop diameter is 125mm. The whole lenses-plus-cold-stop system is kept at 4K. The in-band average simulations report excellent values, with a Strehl ratio from 0.97 on axis to 0.91 at the edge of the field. Analogously, the FWHM is 12.2\({}^{\prime\prime}\) on axis and 12.7\({}^{\prime\prime}\) at 45mm off axis (which corresponds to 2' on the sky).
Footnote 5: [https://www.rogerscorp.com/](https://www.rogerscorp.com/)
### MISTRAL detectors
MISTRAL takes advantage of the high sensitivity of Kinetic Inductance Detectors (KIDs), as well as of the capability to frequency-domain multiplex such resonators[4, 14, 15]. MISTRAL KIDs are Ti-Al bilayers of thickness 10 + 30 \(nm\) with critical temperature \(T_{c}=945mK\), and are fabricated at CNR-IFN6 on a 4" silicon wafer [6, 13] (see Fig. 2). The feedline is made of 21nm-thick Aluminium with a critical temperature \(T_{c}=1.1K\); this was done to reduce its sensitivity to millimetric radiation. The 415-detector array is arranged in such a way that each KID samples the focal plane with an angular spacing of 10.6\({}^{\prime\prime}\), smaller than each pixel's angular resolution, thus oversampling the f.o.v.. We use ROACH2 based electronics7 to man
Figure 1: A cut of the MISTRAL cryostat highlighting the optics of the receiver: an Ultra High Molecular Weight (UHMW) polyethylene window [8] is followed by two infrared (IR) filters. The band selection is performed by a sequence of Low Pass Edge (LPE) filters and a final Band Pass (BP) filter.
age the Frequency Domain Multiplexing readout, and to send the tones to bias each of the resonators.
## 4 MISTRAL calibration, installation, and sensitivity forecast
MISTRAL has undergone diffuse laboratory calibration, noise measurements and pixel recognition, which certified the health of the instrument. The electrical characterization started with the tuning of the KIDs, the choice of the resonant frequencies, and the adjustment of the power to be sent to each KID. Our KIDs are designed to work between 200MHz and 800MHz. The resulting tones are spaced with an average separation of 0.96MHz (see Fig. 2, right panel).
The optical performance was then measured using an artificial source and a custom-designed optical system which sends millimetric radiation to the MISTRAL KIDs with the same beam divergence (i.e. the same f/#) they receive from the SRT. 84% of the MISTRAL detectors are alive and usable. The average optical efficiency of the receiver was measured to be \(\simeq 35\%\). The figure of merit for the sensitivity of the KIDs is their Noise Equivalent Power (NEP), which represents the incoming power that produces a signal equal to the noise power spectrum of the KIDs. In Fig. 3 (right panel) we show a histogram of the resulting measurements, which has a median value of \(8.07\times 10^{-16}W/\sqrt{Hz}\).
The MISTRAL receiver was transported and installed at the focus of the SRT between May and June 2023 (see Fig. 3, left panel). The aforementioned NEPs would nominally translate into an \(NEFD\simeq 2.8mJy\sqrt{s}\) [16]. However, this estimate does not take into account the telescope efficiency or the noise added by atmospheric fluctuations. We therefore undertook a realistic simulation which assumes an arbitrary telescope efficiency of 30% and takes into account the real atmospheric noise at the SRT observatory at 22GHz, extrapolated to 90GHz using the \(am\) code8. This results in an approximate \(NEFD\simeq 10-15mJy\sqrt{s}\). Assuming the definition reported by Perotto et al. 2020 [16], we extracted a mapping speed of \(MS=380^{\prime 2}/mJy^{2}/h\) [11].
Footnote 8: [https://lweb.cfa.harvard.edu/~spaine/am/](https://lweb.cfa.harvard.edu/~spaine/am/)
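As a rough consistency check of the quoted mapping speed (our own sketch; we assume MS scales as the instantaneous field-of-view area over the squared per-beam NEFD, in the spirit of the Perotto et al. 2020 definition cited above):

```python
import math

fov_area = math.pi * (4.0 / 2.0) ** 2   # 4' diameter f.o.v. -> ~12.6 arcmin^2

def mapping_speed(nefd_mjy_sqrt_s):
    """Mapping speed in arcmin^2 / mJy^2 / h."""
    return fov_area / nefd_mjy_sqrt_s**2 * 3600.0  # seconds -> hours

print(mapping_speed(11.0))  # ~374, close to the quoted MS = 380'^2/mJy^2/h
print(mapping_speed(15.0))  # ~201, at the conservative end of the NEFD range
```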
## 5 Conclusions
The full comprehension of the matter distribution in the universe is crucial both for cosmology and for astrophysics. The Sunyaev Zel'dovich effect is a powerful tool to study low density environments and to search for bridges and filaments in the cosmic web. High angular resolution is crucial to understand and map galaxy clusters and the surrounding medium. We
Figure 2: Left: MISTRAL array of KIDs in its holder[14]. Right: An image of the response of the KIDs to the tones generated by the ROACH2 electronics sent to MISTRAL.
developed MISTRAL which, coupled with the SRT, is an ideal instrument to map the sky at 90GHz with \(12^{\prime\prime}\) angular resolution. MISTRAL is a cryogenic camera with an array of KIDs cooled down to \(\simeq 200mK\). We recently installed the camera in the Gregorian room of the SRT, and we soon expect to open it to the sky for first light.
|
2307.03978 | Separable MV-algebras and lattice-groups | General theory determines the notion of separable MV-algebra (equivalently,
of separable unital lattice-ordered Abelian group). We establish the following
structure theorem: An MV-algebra is separable if, and only if, it is a finite
product of algebras of rational numbers, i.e., of subalgebras of the MV-algebra
$[0,1]\cap\mathbb{Q}$. Beyond its intrinsic algebraic interest, this research
is motivated by the long-term programme of developing the algebraic geometry of
the opposite of the category of MV-algebras, in analogy with the classical case
of commutative $K$-algebras over a field $K$. | Vincenzo Marra, Matías Menni | 2023-07-08T13:54:48Z | http://arxiv.org/abs/2307.03978v2 | # Separable MV-algebras and lattice-groups
###### Abstract.
General theory determines the notion of separable MV-algebra (equivalently, of separable unital lattice-ordered Abelian group). We establish the following structure theorem: An MV-algebra is separable if, and only if, it is a finite product of algebras of rational numbers, i.e., of subalgebras of the MV-algebra \([0,1]\cap\mathbb{Q}\). Beyond its intrinsic algebraic interest, this research is motivated by the long-term programme of developing the algebraic geometry of the opposite of the category of MV-algebras, in analogy with the classical case of commutative \(K\)-algebras over a field \(K\).
Key words and phrases: MV-algebra, lattice-ordered Abelian group, strong order unit, separable algebra, extensive category, decidable object. 2020 _Mathematics Subject Classification._ Primary: 06D35. Secondary: 06F20, 18B50, 12F10
For instance, a topological space is decidable if, and only if, it is discrete. For any ring \(R\), and any \(R\)-algebra \(A\), let \(\operatorname{Spec}A\) be the corresponding object in the extensive category \((R/\mathsf{Ring})^{\operatorname{op}}\). Then \(\operatorname{Spec}A\) is decidable if, and only if, \(A\) is separable as an \(R\)-algebra. In other words, the separable \(R\)-algebras are precisely those for which the associated affine scheme is decidable.
Let us say that a category is _coextensive_ if its opposite is extensive. In light of the above comments, an object in a coextensive category \(\mathcal{A}\) is called _separable_ if the corresponding object in \(\mathcal{A}^{\operatorname{op}}\) is decidable.
The category \(\mathsf{MV}\) of MV-algebras is coextensive. This provides the notion of separable MV-algebra that is the topic of the present paper. Explicitly, the MV-algebra \(A\) is separable if, and only if, there is a homomorphism \(f\colon A+A\to A\) such that the span

\[A\xleftarrow{\ \nabla\ }A+A\xrightarrow{\ f\ }A\]

is a product diagram, where \(\nabla\colon A+A\to A\) denotes the codiagonal map.
The geometry of \(\mathsf{MV}^{\operatorname{op}}\) has long been the subject of intensive hands-on study because of its striking connections with several areas of classical mathematics, from piecewise-linear topology to the geometry of numbers. The characterisation of decidable objects in \(\mathsf{MV}^{\operatorname{op}}\) that we present here was motivated by our ongoing long-term project to study of the 'gros Zariski' topos determined by the theory of MV-algebras as the domain of a pre-cohesive geometric morphism [24]. We postpone the topos-theoretic consequences of separability to further publications; no Topos Theory is required for the proof of the purely algebraic results in the present paper.
The plan of the paper is as follows. In Sections 2, 3, and 4 we introduce the necessary material to prove a sufficient condition for an extensive category with finite products to have the property that every decidable object is a finite coproduct of connected subterminals. In Section 5 we verify that \(\mathsf{MV}\) is coextensive. In Theorem 6.9 we characterise the subterminal objects of \(\mathsf{MV}^{\operatorname{op}}\) as, in \(\mathsf{MV}\), the subalgebras of \([0,1]\cap\mathbb{Q}\). In order to extend Theorem 6.9 to a characterisation of separable MV-algebras we need to introduce the Pierce functor for \(\mathsf{MV}\), an analogue of the standard ring-theoretic functor by the same name. The key fact is that the Pierce functor preserves coproducts. To prove it, in Section 7 we develop the required material on the connected-component functor \(\pi_{0}\) in \(\mathsf{Top}\). Using the theory of spectra of MV-algebras recalled in Section 8 along with the topological \(\pi_{0}\) functor, we are able to show in Theorem 9.9 that the Pierce functor does preserve all coproducts. Theorems 6.9 and 9.9 are combined in Section 10 to obtain our main result, the mentioned characterisation of separable MV-algebras. We conclude Section 10 with a discussion that points to further research aimed at enriching the connected-component functor on \(\mathsf{MV}^{\operatorname{op}}\) to an 'arithmetic connected-component functor'; this functor, we submit, arises out of locally finite MV-algebras. Finally, in Appendix A we collect the translation of our main results to lattice-groups.
## 2. Extensive categories and connected objects
In this section we recall the definition of extensive category and of connected object. For more details about extensive categories see, for example, [23, 6] and references therein.
A category \(\mathcal{C}\) with finite coproducts is called _extensive_ if for every \(X\) and \(Y\) in \(\mathcal{C}\) the canonical functor \(\mathcal{C}/X\times\mathcal{C}/Y\to\mathcal{C}/(X+Y)\) is an equivalence. Examples of extensive categories are \(\mathsf{Set}\) (sets and functions), \(\mathsf{fSet}\) (finite sets and functions), any topos, \(\mathsf{Top}\), \(\mathsf{KHaus}\) (compact Hausdorff spaces and continuous maps), \(\mathsf{Stone}\)
(Stone1 spaces and continuous maps). The categories of rings, of Boolean algebras and of distributive lattices2 are coextensive. See [25] and [7] for further examples.
Footnote 1: By a _Stone space_ we mean a compact Hausdorff zero-dimensional space. Such spaces are often called _Boolean_ in the literature.
Footnote 2: Throughout the paper, with the exception of Appendix A, we assume distributive lattices to have top and bottom elements preserved by homomorphisms.
In extensive categories coproduct injections are regular monomorphisms, coproducts of monomorphisms are monomorphisms, and the initial object is _strict_ in the sense that any map \(X\to 0\) is an isomorphism. Also, extensive categories are closed under slicing.
**Definition 2.1**.: A coproduct \(in_{0}\colon X\to X+Y\gets Y:in_{1}\) is
1. _disjoint_ if the coproduct injections are monic and the evident commutative square, exhibiting the initial object \(0\) as the pullback of the two injections, is a pullback;
2. _universal_ if for every arrow \(Z\to X+Y\) the two pullbacks of \(Z\) along the coproduct injections exist and the resulting cospan over \(Z\) is a coproduct diagram.
The following result is essentially [6, Proposition 2.14].
**Proposition 2.2**.: _A category with finite coproducts is extensive if, and only if, coproducts are universal and disjoint._
Assume from now on that \(\mathcal{C}\) is an extensive category.
A monomorphism \(u\colon U\to X\) in \(\mathcal{C}\) is called _complemented_ if there is a \(v\colon V\to X\) such that the cospan \(u\colon U\to X\gets V:v\) is a coproduct diagram. In this case, \(v\) is the _complement_ of \(u\). Notice that complemented monomorphisms are regular monomorphisms because they are coproduct injections. In the next definition, and throughout, we identify monomorphisms and subobjects whenever convenient.
**Definition 2.3**.: An object \(X\) in \(\mathcal{C}\) is _connected_ if it has exactly two complemented subobjects.
In \(\mathsf{KHaus}\) or \(\mathsf{Top}\), an object is connected if and only if it has exactly two clopens. An object \(A\) in \(\mathsf{Ring}\) is connected as an object in \(\mathsf{Ring}^{\mathrm{op}}\) if and only if \(A\) has exactly two idempotents. We remark that, in general, connected objects are not closed under finite products.
For each \(X\) in \(\mathcal{C}\) we let \(\mathrm{B}X\) denote the poset of complemented subobjects of \(X\). We stress that if \(u\colon U\to X\) and \(v\colon V\to X\) are two complemented monomorphisms in \(\mathcal{C}\) and \(f\colon U\to V\) is such that \(vf=u\) then \(f\) is complemented [15, Lemma 3.2]. So for any two complemented subobjects \(u,v\) of \(X\), there is no ambiguity in writing \(u\leqslant v\) since it means the same for \(u\), \(v\) considered as subobjects, or as complemented subobjects.
Extensivity easily implies that the poset \(\mathrm{B}X\) has finite infima, a bottom element, and an involution. This structure may be used to prove that \(\mathrm{B}X\) is actually a
Boolean algebra which interacts well with pullbacks in the sense that, for any map \(f\colon X\to Y\) in \(\mathcal{C}\), pulling back along \(f\) determines a Boolean algebra homomorphism \(\mathrm{B}Y\to\mathrm{B}X\). So, assuming that \(\mathcal{C}\) is well-powered, the assignment \(X\mapsto\mathrm{B}X\) extends to a functor \(\mathcal{C}\to\mathsf{BA}^{\mathrm{op}}\) between extensive categories that preserves finite coproducts.
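For a concrete illustration, consistent with the earlier remark that connectedness in \(\mathsf{Top}\) is detected by clopens: for a topological space \(X\), a subobject is complemented precisely when it is a clopen subspace, so \(\mathrm{B}X\) is the Boolean algebra of clopen subsets of \(X\), and for a continuous map \(f\colon X\to Y\) the induced homomorphism \(\mathrm{B}Y\to\mathrm{B}X\) is given by \(u\mapsto f^{-1}(u)\). In particular \(\mathrm{B}[0,1]=\{\emptyset,[0,1]\}\), while for a Stone space the construction recovers the familiar algebra of clopens from Stone duality.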
We will use the following simple equivalences.
**Lemma 2.4**.: _For any object \(X\) in \(\mathcal{C}\) the following are equivalent._
1. \(X\) _is connected._
2. \(X\) _is not initial and, for every complemented subobject_ \(u\colon U\to X\)_,_ \(U\) _is initial or_ \(u\) _is an isomorphism._
3. \(X\) _is not initial and, for every coproduct diagram_ \(U\to X\gets V\)_,_ \(U\) _is initial or_ \(V\) _is initial._
## 3. Finite-coproduct preserving functors
Let \(\mathcal{C}\) and \(\mathcal{S}\) be extensive categories, and let \(L\colon\mathcal{C}\to\mathcal{S}\) preserve finite coproducts. Such a functor preserves complemented monomorphisms so, for any \(X\) in \(\mathcal{C}\), \(L\) induces a function \(\mathrm{B}X\to\mathrm{B}(LX)\) which is actually a map in \(\mathsf{BA}\), natural in \(X\). (It is relevant to remark that such a functor also preserves pullbacks along coproduct injections. See [15, 3.8].)
We will say that \(L\) is _injective_ (_surjective/bijective_) _on complemented subobjects_ if and only if \(\mathrm{B}X\to\mathrm{B}(LX)\) has the corresponding property for every \(X\) in \(\mathcal{C}\).
**Lemma 3.1**.: _The functor \(L\colon\mathcal{C}\to\mathcal{S}\) is injective on complemented subobjects if and only if it reflects \(0\). In this case, \(L\) also reflects connected objects._
Proof.: Assume first that \(L\) is injective on complemented subobjects and let \(X\) in \(\mathcal{C}\) be such that \(LX=0\). Then \(\mathrm{B}(LX)\) is the terminal Boolean algebra and, as \(\mathrm{B}X\to\mathrm{B}(LX)\) is injective by hypothesis, \(\mathrm{B}X\) is also trivial. For the converse notice that if \(L\) reflects \(0\) then the map \(\mathrm{B}X\to\mathrm{B}(LX)\) in \(\mathsf{BA}\) has trivial kernel for every \(X\) in \(\mathcal{C}\).
To prove the second part of the statement assume that \(X\) in \(\mathcal{C}\) is such that \(LX\) is connected in \(\mathcal{S}\). If \(X\) were initial then so would \(LX\) because \(L\) preserves finite coproducts and, in particular, the initial object. So \(X\) is not initial. Now assume that \(U\to X\gets V\) is a coproduct diagram. Then so is \(LU\to LX\gets LV\). Since \(LX\) is connected, either \(LU\) or \(LV\) is initial by Lemma 2.4. As \(L\) reflects \(0\), either \(U\) or \(V\) is initial, so \(X\) is connected by the same lemma. (Alternatively, if \(\mathrm{B}X\to\mathrm{B}(LX)\) is injective and its codomain is the initial Boolean algebra then so is the domain.)
We will be particularly interested in extensive categories wherein every object is a finite coproduct of connected objects. For example, \(\mathsf{fSet}\) satisfies this property, but neither \(\mathsf{Set}\) nor \(\mathsf{Stone}\) does. If \(\mathcal{A}\) is the category of finitely presentable \(K\)-algebras for a field \(K\), then \(\mathcal{A}^{\mathrm{op}}\) also satisfies this property.
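To see why \(\mathsf{Set}\) and \(\mathsf{Stone}\) fail this property (a quick check, using Lemma 2.4): in both categories the connected objects are exactly the one-point objects, since Stone spaces are totally disconnected; so no infinite set, and no infinite Stone space such as the Cantor space \(2^{\mathbb{N}}\), can be a finite coproduct of connected objects.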
**Proposition 3.2**.: _If \(L\colon\mathcal{C}\to\mathcal{S}\) is bijective on complemented subobjects then the following hold._
1. _The functor_ \(L\) _preserves connected objects._
2. _For any object_ \(X\) _in_ \(\mathcal{C}\)_, if_ \(LX\) _is a finite coproduct of connected objects then so is_ \(X\)_._
3. _If every object in_ \(\mathcal{S}\) _is a finite coproduct of connected objects then so is the case in_ \(\mathcal{C}\)_._
4. _Assume that_ \(\mathcal{C}\) _and_ \(\mathcal{S}\) _have finite products and that_ \(L\) _preserves them. If_ \(\mathcal{S}\) _is such that finite products of connected objects are connected then so is the case in_ \(\mathcal{C}\)_._
Proof.: To prove the first item just notice that, by hypothesis, \(\mathrm{B}X\to\mathrm{B}(LX)\) is an isomorphism for each \(X\) in \(\mathcal{C}\). Hence if \(X\) has exactly two complemented subobjects then so does \(LX\).
Before proving the second item we establish an auxiliary fact. Let \(X\) be in \(\mathcal{C}\) and let \(u\colon U\to LX\) be a complemented subobject in \(\mathcal{S}\) with connected \(U\). Then, as \(L\) is surjective on complemented subobjects by hypothesis, there exists a complemented subobject \(v\colon V\to X\) in \(\mathcal{C}\) such that \(Lv=u\) as subobjects of \(LX\). Then \(LV\cong U\) is connected, so \(V\) is connected by Lemma 3.1. Thus, we have lifted the 'connected component' \(u\) of \(LX\) to one of \(X\).
To prove the second item let \(\left(u_{i}\mid i\in I\right)\) be a finite family of pairwise-disjoint complemented subobjects of \(LX\) with connected domain whose join is the whole of \(LX\). For each \(i\in I\), let \(v_{i}\) be the complemented subobject of \(X\) induced by \(u_{i}\) as in the previous paragraph. As \(L\) reflects \(0\), the family \(\left(v_{i}\mid i\in I\right)\) is pairwise disjoint. Also, \(L\bigvee_{i\in I}v_{i}=\bigvee_{i\in I}Lv_{i}=\bigvee_{i\in I}u_{i}\) is the whole of \(LX\). As \(L\) is injective on complemented subobjects, \(\bigvee_{i\in I}v_{i}\) must be the whole of \(X\). In summary, we have lifted the finite coproduct decomposition of \(LX\) to one of \(X\).
The third item follows at once from the second.
For the fourth item, let \(X\) be the product of a finite family \(\left(X_{i}\mid i\in I\right)\) of connected objects in \(\mathcal{C}\). Then \(LX\) is the product of \(\left(LX_{i}\mid i\in I\right)\) because \(L\) preserves finite products. Each \(LX_{i}\) is connected because \(L\) preserves connected objects by the first item, so \(LX\) is connected by our hypothesis on \(\mathcal{S}\). Hence \(X\) is connected by Lemma 3.1.
We next prove a sufficient condition for a functor \(L\) as above to be bijective on complemented subobjects.
**Lemma 3.3**.: _If \(L\colon\mathcal{C}\to\mathcal{S}\) has a finite-coproduct preserving right adjoint, then \(L\) is bijective on complemented subobjects._
Proof.: Let \(R\) be the right adjoint to \(L\) and let \(\sigma\) and \(\tau\) be the unit and counit of \(L\dashv R\). We show that \(L\) is both injective and surjective on complemented subobjects.
To prove injectivity it is enough to show that \(L\) reflects \(0\) (Lemma 3.1). So let \(X\) be an object in \(\mathcal{C}\) such that \(LX\) is initial. Then we may transpose the isomorphism \(LX\to 0\) in \(\mathcal{S}\) to a map \(X\to R0\), but \(R0=0\) because \(R\) is assumed to preserve finite coproducts. Since the initial object is strict, \(X\) is initial.
We next show that \(L\) is surjective on complemented subobjects. Let \(u\colon U\to LX\) be a complemented monomorphism. Then \(Ru\) is complemented so the left pullback square below exists
by extensivity of \(\mathcal{C}\). Then the two squares on the right above obviously commute, and the bottom composite is the identity. Moreover, [15, Lemma 3.7] implies that both squares are pullbacks, so \(u\) and \(Lv\) coincide as subobjects of \(LX\).
Combining Lemma 3.3 and Proposition 3.2 we obtain the following.
**Corollary 3.4**.: _Assume that \(L\colon\mathcal{C}\to\mathcal{S}\) has a finite-coproduct preserving right adjoint. If every object in \(\mathcal{S}\) is a finite coproduct of connected objects then so is the case in \(\mathcal{C}\)._
## 4. Decidable objects
Let \(\mathcal{C}\) be an extensive category with finite products. In particular, \(\mathcal{C}\) has a terminal object \(1\). An object \(X\) is called _subterminal_ if the unique map \(X\to 1\) is monic.
**Lemma 4.1**.: _For any object \(X\) in \(\mathcal{C}\), the following are equivalent._
1. _The object_ \(X\) _is subterminal._
2. _The diagonal_ \(\Delta\colon X\to X\times X\) _is an isomorphism._
3. _The projections_ \(\operatorname{pr}_{0},\operatorname{pr}_{1}\colon X\times X\to X\) _are equal._
Proof.: The first item implies the second because for any monomorphism \(X\to 1\) the following diagram
is a pullback. The second item implies the third because any map has at most one inverse. To prove that the third item implies the first, let \(f,g\colon Y\to X\). Then there exists a unique map \(\langle f,g\rangle\colon Y\to X\times X\) such that \(\operatorname{pr}_{0}\langle f,g\rangle=f\) and \(\operatorname{pr}_{1}\langle f,g\rangle=g\). So \(f=\operatorname{pr}_{0}\langle f,g\rangle=\operatorname{pr}_{1}\langle f,g\rangle=g\). That is, for any object \(Y\) there is a unique map \(Y\to X\). This means that the unique map \(X\to 1\) is monic.
We stress that extensivity plays no role in Lemma 4.1, which is a general fact about categories with finite products.
**Definition 4.2**.: An object \(X\) in \(\mathcal{C}\) is _decidable_ if the diagonal \(\Delta\colon X\to X\times X\) is complemented.
**Remark 4.3**.: Lemma 4.1 shows that subterminal objects in \(\mathcal{C}\) are decidable, and that they may be characterised as those decidable objects \(X\) such that the diagonal \(\Delta\colon X\to X\times X\) not only is complemented, but is actually an isomorphism.
The full subcategory of decidable objects will be denoted by \(\operatorname{Dec}\mathcal{C}\to\mathcal{C}\). If \(\mathcal{C}\) is _lextensive_ (i.e. extensive and with finite limits) it follows from [4] that \(\operatorname{Dec}\mathcal{C}\) is extensive, that the inclusion \(\operatorname{Dec}\mathcal{C}\to\mathcal{C}\) preserves finite limits and finite coproducts, and that it is closed under subobjects. Moreover, for any \(X\), \(Y\) in \(\mathcal{C}\), \(X+Y\) is decidable if, and only if, both \(X\) and \(Y\) are decidable. On the other hand, arbitrary coproducts of decidable objects need not be decidable--consider, for instance, an infinite copower of the terminal object in \(\mathsf{KHaus}\) or \(\mathsf{Stone}\).
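For a concrete contrast (standard examples): in \(\mathsf{fSet}\) every object is decidable, the complement of the diagonal being
\[\{(x,y)\in X\times X\mid x\neq y\},\]
whereas a Stone space \(X\) is decidable precisely when it is finite and discrete: if the diagonal is open in \(X\times X\) then every point of \(X\) is isolated, and a compact discrete space is finite.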
**Proposition 4.4**.: _For any object \(X\) in \(\mathcal{C}\) the following are equivalent:_
1. \(X\) _is subterminal and connected._
2. \(X\) _is decidable and_ \(X\times X\) _is connected._
Proof.: If \(X\) is subterminal and connected then \(\Delta\colon X\to X\times X\) is an isomorphism by Lemma 4.1. So \(X\) is decidable and \(X\times X\) is as connected as \(X\).
For the converse assume that \(X\) is decidable and that \(X\times X\) is connected. Decidability means that the subobject \(\Delta\colon X\to X\times X\) is complemented; as \(X\times X\) is connected, \(X\) is initial or \(\Delta\colon X\to X\times X\) is an isomorphism by Lemma 2.4. But \(X\) is not initial (because \(X\times X\) is connected) so \(\Delta\colon X\to X\times X\) is an isomorphism. Then \(X\) is as connected as \(X\times X\), and \(X\) is subterminal by Lemma 4.1.
Let \(\mathcal{S}\) be another extensive category with finite products and let \(L\colon\mathcal{C}\to\mathcal{S}\) preserve finite products and finite coproducts.
**Lemma 4.5**.: _Assume that \(L\) reflects \(0\) and that \(1\) is connected in \(\mathcal{S}\). Then the following hold for every \(X\) in \(\mathcal{C}\)._
1. _If_ \(LX=1\) _then_ \(X\) _is connected._
2. _If_ \(X\) _in_ \(\mathcal{C}\) _is decidable and_ \(LX=1\) _then_ \(X\) _is subterminal._
Proof.: The functor \(L\) reflects \(0\) so it reflects connected objects by Lemma 3.1. As \(1\) is connected in \(\mathcal{S}\) by hypothesis, \(LX=1\) implies \(X\) connected.
If \(LX=1\) then \(L(X\times X)=LX\times LX=1\). So \(X\times X\) is connected by the first item. Therefore \(X\) is subterminal by Proposition 4.4.
It easily follows from the definition of decidable object that \(L\) preserves decidable objects. In more detail, the preservation properties of \(L\) imply that the left-bottom composite below
factors uniquely through the right inclusion and, moreover, \(\operatorname{Dec}\mathcal{C}\to\operatorname{Dec}\mathcal{S}\) preserves finite products and finite coproducts. In fact, \(\operatorname{Dec}\mathcal{C}\to\operatorname{Dec}\mathcal{S}\) preserves all the finite limits that \(L\) preserves (because the subcategories of decidable objects are closed under finite limits).
Additionally assume from now on that \(L\colon\mathcal{C}\to\mathcal{S}\) has a finite-coproduct preserving right adjoint \(R\colon\mathcal{S}\to\mathcal{C}\).
Notice that under the present hypotheses both \(L\) and \(R\) preserve finite products and finite coproducts. It follows that the adjunction \(L\dashv R\) restricts to one between \(\operatorname{Dec}\mathcal{S}\) and \(\operatorname{Dec}\mathcal{C}\).
**Corollary 4.6**.: _If every decidable object in \(\mathcal{S}\) is a finite coproduct of connected objects then so is the case in \(\mathcal{C}\)._
Proof.: The adjunction \(L\dashv R\colon\mathcal{S}\to\mathcal{C}\) restricts to one \(L^{\prime}\dashv R^{\prime}\colon\operatorname{Dec}\mathcal{S}\to\operatorname {Dec}\mathcal{C}\), and every object in \(\operatorname{Dec}\mathcal{S}\) is a finite coproduct of connected objects by hypothesis. So we may apply Corollary 3.4 to \(L^{\prime}\colon\operatorname{Dec}\mathcal{C}\to\operatorname{Dec}\mathcal{S}\).
Because \(\mathcal{S}\) is lextensive, there exists an essentially unique coproduct preserving functor \(\mathsf{fSet}\to\mathcal{S}\) that also preserves the terminal object. The functor sends a finite set \(I\) to the copower \(I\cdot 1\) in \(\mathcal{S}\). The categories \(\mathsf{fSet}\), \(\mathsf{Stone}\), and other examples have the property that this functor \(\mathsf{fSet}\to\mathcal{S}\) coincides with \(\operatorname{Dec}\mathcal{S}\to\mathcal{S}\). Notice that if this condition holds then \(1\) is connected in \(\mathcal{S}\), because \(\mathsf{fSet}=\operatorname{Dec}\mathcal{S}\to\mathcal{S}\) is closed under subobjects and preserves \(1\).
**Proposition 4.7**.: _If the canonical functor \(\mathsf{fSet}\to\mathcal{S}\) coincides with \(\operatorname{Dec}\mathcal{S}\to\mathcal{S}\) then every decidable object in \(\mathcal{C}\) is a finite coproduct of connected subterminals._
Proof.: By Corollary 4.6 every decidable object in \(\mathcal{C}\) is a finite coproduct of connected objects. So it is enough to prove that every connected decidable object in \(\mathcal{C}\) is subterminal. For this, let \(X\) be connected and decidable. Then \(LX\) is decidable, because \(L\) preserves finite products and finite coproducts, and it is connected by Lemma 3.3 and Proposition 3.2. By hypothesis, the canonical \(\mathsf{fSet}\to\mathcal{S}\) coincides with \(\operatorname{Dec}\mathcal{S}\to\mathcal{S}\) so \(LX=1\). Hence \(X\) is decidable and \(LX=1\). Therefore \(X\) is subterminal by Lemma 4.5.
For a lextensive category \(\mathcal{E}\) we have considered several conditions.
1. Every decidable object is a finite coproduct of connected objects.
2. Every decidable object is a finite coproduct of connected subterminals.
3. The canonical functor \(\mathsf{fSet}\to\mathcal{E}\) coincides with the inclusion \(\operatorname{Dec}\mathcal{E}\to\mathcal{E}\).
For a field \(K\), \((K/\mathsf{Ring})^{\mathrm{op}}\) satisfies the first condition but not the second. The categories \(\mathsf{Stone}\) and \(\mathsf{KHaus}\) satisfy the third condition. The third condition implies the second which, in turn, implies the first. Proposition 4.7 shows that for certain adjunctions \(L\dashv R\colon\mathcal{S}\to\mathcal{C}\), if \(\mathcal{S}\) satisfies the third condition then \(\mathcal{C}\) satisfies the second. This will be used to prove that \(\mathsf{MV}^{\mathrm{op}}\) satisfies the second condition (Theorem 10.1).
## 5. The coextensive category of MV-algebras
For background on MV-algebras we refer to the standard textbooks [8, 30], whose notation we also follow. In this section we show that \(\mathsf{MV}\) is coextensive by proving that products are codisjoint and couniversal (Proposition 2.2).
**Lemma 5.1**.: _Let \(\mathcal{A}\) be a regular category with finite colimits. If \(0\to 1\) is a regular epimorphism then products are codisjoint._
Proof.: Let \(A\) be an object in \(\mathcal{A}\). As the composite \(0\to A\to 1\) is a regular epimorphism by hypothesis, so is \(A\to 1\) by regularity of \(\mathcal{A}\). That is, not only \(0\to 1\) but actually any \(A\to 1\) is a regular epimorphism. As every regular epimorphism is the coequalizer of its kernel pair, \(A\to 1\) is the coequalizer of the two projections \(A\times A\to A\). Also, as finite products of regular epimorphisms in a regular category are again regular epimorphisms, the product of \(id\colon A\to A\) and \(B\to 1\) is a regular epimorphism \(A\times B\to A\times 1\). That is, the projection \(A\times B\to A\) is a regular epimorphism.
To complete the proof we recall a basic fact about colimits: for a commutative diagram as on the left below
such that \(de_{i}=f_{i}e\) for \(i\in\{0,1\}\): if the top and bottom forks are coequalizers and \(e\) is epic, then the inner right square is a pushout. Applying this observation to the diagram on the right above we obtain that the inner right square in that diagram is a pushout.
In particular, if \(\mathcal{A}\) is the category of models for an algebraic theory with at least one constant then the initial object \(0\) is non-empty and so \(0\to 1\) is a regular epimorphism. This is the case, of course, for \(\mathcal{A}=\mathsf{MV}\).
In \(\mathsf{Ring}\), couniversality of products is entailed by the intimate relationship between idempotents and product decompositions. The situation for \(\mathsf{MV}\) is analogous. An element \(b\) of an MV-algebra \(A\) is called _Boolean_ if it satisfies one of the following equivalent conditions (see [8, 1.5.3]):
\[b\oplus b=b\qquad b\odot b=b\qquad b\vee\neg b=1\qquad b\wedge\neg b=0.\]
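For instance, in the standard MV-algebra \([0,1]\), where \(x\oplus y=\min(x+y,1)\) and \(\neg x=1-x\), the first condition reads
\[\min(2b,1)=b,\]
which forces \(b\in\{0,1\}\); so the only Boolean elements of \([0,1]\) are \(0\) and \(1\).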
For \(x\in A\) we let \(A\to A[x^{-1}]\) be the quotient map induced by the congruence on \(A\) generated by the pair \((x,1)\).
**Lemma 5.2**.: _For any \(f\colon A\to B\) in \(\mathsf{MV}\) the following diagram is a pushout_
_where the right vertical map is the unique one making the square commute._
Proof.: Standard, using the universal property of the (horizontal) quotient homomorphisms.
**Lemma 5.3**.: _For any MV-algebra \(A\) and every Boolean element \(x\in A\), let \(\langle\neg x\rangle\) be the ideal of \(A\) generated by \(\{\neg x\}\). Then the quotient \(q\colon A\to A/\langle\neg x\rangle\) has the universal property of \(A\to A[x^{-1}]\)._
Proof.: If \(k\colon A\to B\) is such that \(kx=1\) then \(\neg x\in\ker k\), so \(\langle\neg x\rangle\subseteq\ker k\). By the universal property of quotients there is exactly one homomorphism \(c\colon A/\langle\neg x\rangle\to B\) such that \(cq=k\).
**Lemma 5.4**.: _In \(\mathsf{MV}\), the diagram_
_is a product precisely when there exists a Boolean element \(x\in C\) such that \(q_{0}\) has the universal property of \(C\to C[(\neg x)^{-1}]\) and \(q_{1}\) has the universal property of \(C\to C[x^{-1}]\). When this is the case, the element \(x\in C\) with the foregoing property is unique._
Proof.: Assume the diagram is a product. Then there is a unique \(x\in C\) such that \(q_{i}x=i\), \(i=0,1\). This \(x\) is Boolean because \(0\) and \(1\) are. Hence \(\neg x\) is Boolean too, and thus \(\oplus\)-idempotent; therefore, \(\langle\neg x\rangle=\{c\in C\mid c\leqslant\neg x\}\). If \(c\leqslant\neg x\) then \(q_{1}c\leqslant q_{1}(\neg x)=0\), so \(q_{1}c=0\) and \(c\in\ker q_{1}\). If \(c\in\ker q_{1}\) then \(q_{1}c=0\leqslant q_{1}(\neg x)\) and \(q_{0}c\leqslant 1=q_{0}(\neg x)\), so \(c\leqslant\neg x\) by the definition of product order. We conclude \(\ker q_{1}=\langle\neg x\rangle\). The projection \(q_{1}\) is surjective so Lemma 5.3 entails that \(q_{1}\) has the universal property of \(C\to C[x^{-1}]\). An entirely similar argument applies to \(q_{0}\).
Conversely, assume \(q_{0}\) and \(q_{1}\) have the universal properties in the statement. By Lemma 5.3 we may identify \(q_{0}\) with \(C\to C/\langle x\rangle\) and \(q_{1}\) with \(C\to C/\langle\neg x\rangle\). So it is enough to show that the canonical \(C\to C/\langle x\rangle\times C/\langle\neg x\rangle\) is bijective. Injectivity follows because if \(c\leqslant x,\neg x\) then \(c\leqslant x\land\neg x=0\), so \(\langle x\rangle\cap\langle\neg x\rangle=0\). To prove surjectivity, let \((q_{0}c_{0},q_{1}c_{1})\in C/\langle x\rangle\times C/\langle\neg x\rangle\) with \(c_{0},c_{1}\in C\) and consider \(c=(c_{0}\land\neg x)\lor(c_{1}\wedge x)\in C\). It is easy to check that \(C\to C/\langle x\rangle\times C/\langle\neg x\rangle\) sends \(c\) in the domain to \((q_{0}c_{0},q_{1}c_{1})\) in the codomain.
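A worked instance of Lemma 5.4 (with the standard algebra \([0,1]\)): take \(C=[0,1]\times[0,1]\) with the two projections \(q_{0},q_{1}\). The unique Boolean element \(x\in C\) with \(q_{0}x=0\) and \(q_{1}x=1\) is \(x=(0,1)\); then
\[\langle\neg x\rangle=\langle(1,0)\rangle=[0,1]\times\{0\}=\ker q_{1},\]
so \(q_{1}\) realises the quotient \(C\to C/\langle\neg x\rangle\cong[0,1]\), that is, \(C\to C[x^{-1}]\), and symmetrically for \(q_{0}\).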
**Remark 5.5**.: The content of Lemma 5.4 is far from new, cf. e.g. [8, Section 6.4] and [7, Proposition 3.9]. However, having expressed that content in the form that is most suitable for the sequel, we have included a proof for the reader's convenience.
**Proposition 5.6**.: \(\mathsf{MV}\) _is coextensive._
Proof.: Any algebraic category is complete and cocomplete, so in particular it has finite products and pushouts. We appeal to the characterization of extensive categories in Proposition 2.2. Codisjointness of products follows from Lemma 5.1 or from a direct calculation observing that the projections of a product \(A\times B\) send \((0,1)\) to \(0\) and \(1\) respectively, so \(0=1\) must hold in the pushout.
It remains to show that products are couniversal. So we consider the pushout of a product diagram as below
and prove that the bottom span is a product diagram. Indeed, observe that the Boolean element \((0,1)\in A\times B\) is sent to the Boolean element \(x\coloneqq f(0,1)\in C\) so,
by Lemma 5.4, it is enough to check that \(q_{0}\) inverts \(\neg x\) and \(q_{1}\) inverts \(x\); but this follows from Lemma 5.2.
Although it was not necessary to prove the main result of this section, it seems worthwhile to observe that, in the context of algebraic categories, Lemma 5.1 may be strengthened to a characterisation.
**Proposition 5.7**.: _In any algebraic category, binary products are codisjoint if, and only if, the initial algebra has non-empty underlying set._
Proof.: If the initial algebra \(0\) is not empty then the unique map \(0\to 1\) is a regular epimorphism so we can apply Lemma 5.1. For the converse implication notice that the following square
is a pushout by hypothesis. As any of the projections \(0\times 0\to 0\) is split epic, its pushout \(0\to 1\) is a regular epimorphism, so \(0\) must be non-empty.
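To illustrate the role of the non-emptiness hypothesis, consider \(\mathsf{Set}\), viewed as the algebraic category over the empty signature. Its initial algebra is \(\emptyset\), and binary products are indeed not codisjoint: the pushout of the projections
\[\emptyset\longleftarrow\emptyset\times\emptyset\longrightarrow\emptyset\]
is \(\emptyset\), not the terminal object \(1\).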
## 6. Subterminals in \(\mathsf{MV}^{\mathrm{op}}\), and rational algebras
The aim of this section is to characterize subterminal objects in \(\mathsf{MV}^{\mathrm{op}}\). Perhaps unexpectedly, the following fact will play an important role.
**Lemma 6.1**.: _Monomorphisms in \(\mathsf{MV}\) are stable under pushout._
Proof.: It is well known [22] that, in algebraic categories, stability of monomorphisms under pushout is equivalent to the conjunction of the Amalgamation Property (AP) and of the Congruence Extension Property (CEP). Pierce proved the AP for Abelian lattice-groups in [31], and Mundici [29, Proposition 1.1] observed that Pierce's result transfers through the functor \(\Gamma\) to MV-algebras. For a different proof of the AP for Abelian lattice-groups and MV-algebras, see [27, Theorems 36 and 40]. The CEP for MV-algebras was proved in [16, Proposition 8.2]; for an alternative proof, see [30, p. 230]. For yet another proof in the more general context of residuated lattices, see [27, Corollary 44].
Most of the work will be done on the algebraic side, so it is convenient to start with an arbitrary category \(\mathcal{A}\) with finite coproducts whose initial object is denoted \(0\). As suggested above, we concentrate on the objects \(A\) such that the unique map \(0\to A\) is epic. Notice that such an object is exactly a subterminal object in \(\mathcal{A}^{\mathrm{op}}\), but we prefer to avoid introducing new terminology such as 'cosubterminal' or 'supra-initial'. For convenience we state here the dual of Lemma 4.1.
**Lemma 6.2**.: _For any object \(A\) in \(\mathcal{A}\), the following are equivalent:_
1. _The map_ \(0\to A\) _is epic._
2. _The codiagonal_ \(\nabla\colon A+A\to A\) _is an isomorphism._
3. _The coproduct injections_ \(in_{0},in_{1}\colon A\to A+A\) _are equal._
We shall also need a simple auxiliary fact.
**Lemma 6.3**.: _Let \(0\to A\) be epic and \(m\colon B\to A\) be a map. If the coproduct map \(m+m\colon B+B\to A+A\) is monic then \(0\to B\) is epic._
Proof.: The following square commutes
by naturality of the codiagonal. The bottom map is an isomorphism by Lemma 6.2, and the left vertical map is monic by hypothesis. So the top map is also monic, as well as split epic; hence it is an isomorphism, and \(0\to B\) is epic by Lemma 6.2.
Assume from now on that \(\mathcal{A}\) has finite colimits and that monomorphisms are stable under pushout. We stress that this stability property is quite restrictive. For instance, it does not hold in \(\mathsf{Ring}\). On the other hand, we already know that it holds in \(\mathsf{MV}\) by Lemma 6.1.
**Lemma 6.4**.: _The map \(0\to A\) is epic if, and only if, for every monomorphism \(B\to A\), \(0\to B\) is epic._
Proof.: One direction is trivial and does not need stability of monomorphisms. For the converse observe that, as monomorphisms are stable under pushout, finite coproducts of monomorphisms are monic. So we can apply Lemma 6.3.
The following is a further auxiliary fact.
**Lemma 6.5**.: _For any \(d\colon A\to D\) and \(e\colon B\to A\) in \(\mathcal{A}\), if \(e\) is epic and the composite \(de\colon B\to D\) is monic then \(d\) is monic._
Proof.: The right square below is trivially a pushout and, since \(e\colon B\to A\) is epic, the left square is also a pushout
so the rectangle is a pushout too. As the top composite is monic, and monomorphisms are stable under pushout by hypothesis, the bottom map is monic.
We emphasise the next particular case of Lemma 6.5.
**Lemma 6.6**.: _Let \(d\colon A\to D\) be a regular epimorphism in \(\mathcal{A}\). If \(0\to A\) is epic and \(0\to D\) is monic then \(d\) is an isomorphism._
Assume now that our category \(\mathcal{A}\) with finite colimits and stable monomorphisms has a terminal object \(1\) such that for any object \(A\) in \(\mathcal{A}\) the unique \(A\to 1\) is a regular epimorphism. This is common in algebraic categories.
A _quotient_ of \(A\) in \(\mathcal{A}\) is an equivalence class of regular epimorphisms with domain \(A\), where two such are equivalent if they are isomorphic as objects of \(A/\mathcal{A}\).
An object \(A\) is _simple_ if it has exactly two quotients, namely, those represented by \(A\to 1\) and \(id\colon A\to A\). So, if \(\mathcal{A}\) is an algebraic category, then an object is simple if and only if it has exactly two congruences.
To motivate the hypotheses of the following lemma observe that for every object \(A\) in \(\mathsf{BA}\), \(A\) is terminal or \(0\to A\) is monic. Similarly for \(\mathsf{MV}\) and for \(K/\mathsf{Ring}\) with \(K\) a field. In contrast, that is not the case in \(\mathsf{Ring}\).
**Lemma 6.7**.: _If for every object \(D\) of \(\mathcal{A}\), \(D\) is terminal or \(0\to D\) is monic, then for every epic \(0\to A\) the following hold._
1. \(A\) _is simple or terminal._
2. _If_ \(m\colon B\to A\) _is monic then_ \(B+B\) _is simple or terminal._
Proof.: To prove the first item let \(d\colon A\to D\) be a regular epimorphism. Then \(D\) is terminal or \(0\to D\) is monic by hypothesis. If \(0\to D\) is monic then \(d\) is an isomorphism by Lemma 6.6. So the only possible quotients of \(A\) are \(A\to 1\) or \(id\colon A\to A\). So \(A\) is terminal or simple.
To prove the second item, first recall that epimorphisms are closed under finite coproducts. Then recall that, as monomorphisms are stable under pushout by hypothesis, they too are closed under finite coproducts. Therefore, \(m+m\colon B+B\to A+A\) is a monomorphism and \(0=0+0\to A+A\) is epic. So, by Lemma 6.4, \(0\to B+B\) is also epic. The first item implies that \(B+B\) is simple or terminal.
The material in this section applies to the case \(\mathcal{A}=\mathsf{MV}\), so we may now prove our first MV-algebraic result. For the proof we require a standard fact from the theory of MV-algebras and lattice-groups, which will also find further application later in this paper. An ideal \(\mathfrak{m}\) of the MV-algebra \(A\) is _maximal_ if it is proper, and inclusion-maximal amongst proper ideals of \(A\); equivalently, the quotient \(A/\mathfrak{m}\) is a simple algebra.
**Lemma 6.8** (Hölder's Theorem [20] for MV-algebras [8, 3.5.1]).: _For every MV-algebra \(A\), and for every maximal ideal \(\mathfrak{m}\) of \(A\), there is exactly one homomorphism of MV-algebras_
\[\mathfrak{h}_{\mathfrak{m}}\colon\tfrac{A}{\mathfrak{m}}\longrightarrow[0,1],\]
_and this homomorphism is injective._
In connection with the result that follows, let us explicitly recall that the initial object \(0\) in \(\mathsf{MV}\) is the two-element Boolean algebra \(\{0,1\}\).
**Theorem 6.9**.: _For any MV-algebra \(A\) the following are equivalent._
1. \(A\) _is a subalgebra of_ \([0,1]\cap\mathbb{Q}\)_._
2. \(A\) _is non-trivial and the unique map_ \(0\to A\) _is epic._
3. _The unique map_ \(0\to A\) _is monic and epic._
4. \(A\) _is simple and_ \(0\to A\) _is epic._
Proof.: If \(A\subseteq[0,1]\cap\mathbb{Q}\) then \(A\) is certainly non-trivial, and [30, Proposition 7.2] shows that the coproduct inclusions \(in_{0},in_{1}\colon A\to A+A\) are equal. So \(0\to A\) is epic by Lemma 6.2.
The second and third items are clearly equivalent, and they imply the fourth by Lemma 6.7.
Finally, assume that \(A\) is simple and that \(0\to A\) is epic. By Hölder's Theorem (Lemma 6.8) together with simplicity, there is exactly one monomorphism \(A\to[0,1]\). Now let \(r\in A\) and write \(\iota\colon A_{r}\to A\) for the subalgebra of \(A\) generated by \(r\). As \(A_{r}\) is not trivial (and \(0\to A\) is epic) Lemma 6.7 implies that \(A_{r}+A_{r}\) is simple. Hence, by the computation in [30, Proposition 7.3], \(r\) must be rational.
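For example, for every integer \(n\geqslant 1\) the finite chain
\[\{0,\tfrac{1}{n},\tfrac{2}{n},\ldots,1\}\subseteq[0,1]\cap\mathbb{Q}\]
is a subalgebra (it is closed under \(\neg x=1-x\) and \(x\oplus y=\min(x+y,1)\)), so it satisfies the equivalent conditions of Theorem 6.9. By contrast, \([0,1]\) is simple but not a subalgebra of \([0,1]\cap\mathbb{Q}\), so \(0\to[0,1]\) is not epic.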
## 7. The \(\pi_{0}\) functor for topological spaces
In this section we show that the full inclusion \(\mathsf{Stone}\to\mathsf{KHaus}\) of the category of Stone spaces into that of compact Hausdorff spaces has a left adjoint \(\pi_{0}\colon\mathsf{KHaus}\to\mathsf{Stone}\) that preserves set-indexed products. The result just stated may be concisely referenced as follows. That the inclusion at hand is reflective is well known and flows readily from the universal property of the quotient topology. As shown in [5, Section 7], the reflection has "stable units"; we need not discuss this property here, except to recall that it easily implies that the left adjoint
preserves finite products. Since Gabriel and Ulmer in [14, p. 67] show that \(\pi_{0}\) preserves cofiltered limits, \(\pi_{0}\) preserves all products.3
Footnote 3: We are grateful to Luca Reggio and to Dirk Hofmann for pointing out to us, respectively, the relevance of [5, Section 7] and of [14, p. 67].
We give here a different proof that emphasises the key role of totally disconnected spaces in the general case. We first obtain a product-preserving left adjoint to the full inclusion of the category \(\mathsf{TD}\) of totally disconnected topological spaces into \(\mathsf{Top}\). We then show how to restrict this left adjoint to the categories of interest to us in the present paper.
A topological space \(X\) is _connected_ if it is so in the sense of Definition 2.3. A subset of a space is _clopen_ if it is both closed and open. Then, a space \(X\) is connected if and only if it contains exactly two clopen sets, which are then necessarily \(\emptyset\) and \(X\). Equivalently [12, Theorem 6.1.1], \(X\) is connected if whenever \(X=A\cup B\) with \(A\cap B=\emptyset\) and \(A\) and \(B\) closed subsets of \(X\), then exactly one of \(A\) and \(B\) is empty. If \(X\) is a space and \(x\in X\), the _component_ of \(x\) in \(X\), written \(C_{x}\) (with \(X\) understood), is defined as
\[C_{x}:=\bigcup\{C\subseteq X\mid x\in C\text{ and }C\text{ is connected}\}\subseteq X.\]
It can be shown that \(C_{x}\) is a connected subspace of \(X\)[12, Corollary 6.1.10], and it therefore is the inclusion-largest such to which \(x\) belongs. Also, \(C_{x}\) is closed in \(X\)[12, Corollary 6.1.11]. A topological space \(X\) is _totally disconnected_ if for each \(x\in X\) we have \(C_{x}=\{x\}\).
Consider the equivalence relation on \(X\) given by
\[x\sim y\text{ if, and only if, }C_{x}=C_{y}, \tag{1}\]
and define
\[\pi_{0}X:=\frac{X}{\sim}.\]
We equip \(\pi_{0}X\) with the quotient topology, and call it the _space of components_ of \(X\). We write
\[q\colon X\longrightarrow\pi_{0}X \tag{2}\]
for the quotient map.
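For instance, if \(X=[0,1]\cup[2,3]\subseteq\mathbb{R}\) then the components are \([0,1]\) and \([2,3]\), so
\[\pi_{0}X\cong\{*_{1},*_{2}\}\]
is the two-point discrete space; at the other extreme, if \(X\) is totally disconnected then \(\sim\) is equality and \(q\colon X\to\pi_{0}X\) is a homeomorphism.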
**Lemma 7.1**.: _For every continuous map \(f\colon X\to Y\) between topological spaces there is exactly one map such that the square below commutes._
Proof.: We first show that \(f\colon X\to Y\) preserves the equivalence relation \(\sim\) in (1). Given \(x,x^{\prime}\in X\), suppose \(x\sim x^{\prime}\), so that \(C_{x}=C_{x^{\prime}}\eqqcolon C\). Since continuous maps preserve connectedness [12, Theorem 6.1.3], \(f[C]\) is a connected subset of \(Y\) that contains both \(fx\) and \(fx^{\prime}\). Hence \(f[C]\subseteq C_{fx}\cap C_{fx^{\prime}}\) which, since distinct components are disjoint, entails \(C_{fx}=C_{fx^{\prime}}\). This completes the proof that \(f\) preserves \(\sim\). Existence and uniqueness of \(\pi_{0}f\) follow from the universal property of the quotient \(X\to\pi_{0}X\).
Lemma 7.1 implies that the assignment that sends \(f\) to \(\pi_{0}f\) extends to an endofunctor
\[\pi_{0}\colon\mathsf{Top}\longrightarrow\mathsf{Top}. \tag{3}\]
This endofunctor determines the full subcategory \(\mathsf{TD}\), as we now show.
**Lemma 7.2**.: _If \(C\subseteq\pi_{0}X\) is a connected subspace then so is \(q^{-1}[C]\subseteq X\)._
Proof.: Let \(q^{-1}[C]=F_{1}\cup F_{2}\) with \(F_{1}\) and \(F_{2}\) disjoint closed subsets of \(X\). For any \(y\in C\) we can write the fibre \(q^{-1}[\{y\}]\) as \(C_{x}\) for any \(x\in q^{-1}[\{y\}]\). Further, we can express \(C_{x}\) as the disjoint union \(C_{x}=(F_{1}\cap C_{x})\cup(F_{2}\cap C_{x})\). And \(C_{x}\) is closed and connected, because it is a component. Hence exactly one of \(q^{-1}[\{y\}]=C_{x}\subseteq F_{1}\) or \(q^{-1}[\{y\}]=C_{x}\subseteq F_{2}\) holds, for each \(y\in C\). We can then define
\[S_{i}\coloneqq\{y\in C\mid q^{-1}[\{y\}]\subseteq F_{i}\},\ i=1,2,\]
to the effect that \(C=S_{1}\cup S_{2}\) and \(S_{1}\cap S_{2}=\emptyset\). By construction we have \(F_{i}=q^{-1}[S_{i}]\), \(i=1,2\). The definition of quotient topology then entails that \(S_{i}\) is closed because \(F_{i}\) is. Since \(C\) is connected, exactly one of \(S_{1}\) and \(S_{2}\) is empty, and hence so is exactly one of \(F_{1}\) and \(F_{2}\).
**Lemma 7.3**.: _For any space \(X\), the quotient map \(q\colon X\to\pi_{0}X\) in (2) is universal from \(X\) to the full inclusion \(\mathsf{TD}\to\mathsf{Top}\)._
Proof.: We first show that \(\pi_{0}X\) is totally disconnected. Let \(C_{y}\) be the component of \(y\in\pi_{0}X\), with the intent of showing it is a singleton. By Lemma 7.2, since \(C_{y}\) is connected in \(\pi_{0}X\), so is \(q^{-1}[C_{y}]\) connected in \(X\). Therefore \(q^{-1}[C_{y}]\) is contained in the component \(C_{x}\) of any \(x\in X\) with \(x\in q^{-1}[C_{y}]\); and thus, the direct image \(q[q^{-1}[C_{y}]]\) is contained in \(q[C_{x}]=\{y\}\). Since \(q[q^{-1}[C_{y}]]=C_{y}\), because \(q\) is surjective, we conclude \(C_{y}\subseteq\{y\}\), as was to be shown.
Let \(f\colon X\to Y\) be a continuous map, with \(Y\) totally disconnected. We already know from the proof of Lemma 7.1 that \(f\) preserves \(\sim\) so, as \(Y\) is totally disconnected, \(x\sim x^{\prime}\) in \(X\) implies \(fx=fx^{\prime}\) in \(Y\). The universal property of the quotient \(q\colon X\to\pi_{0}X\) implies the existence of a unique \(g\colon\pi_{0}X\to Y\) such that \(gq=f\).
We conclude that the full inclusion \(\mathsf{TD}\to\mathsf{Top}\) has a left adjoint that, with no risk of confusion, will again be denoted by \(\pi_{0}\colon\mathsf{Top}\to\mathsf{TD}\).
**Proposition 7.4**.: _The functor \(\pi_{0}\colon\mathsf{Top}\to\mathsf{TD}\) preserves all set-indexed products._
Proof.: Consider a family \((X_{s}\mid s\in S)\) of spaces in \(\mathsf{Top}\) indexed by a set \(S\) and let
\[\gamma\colon\pi_{0}\prod_{s\in S}X_{s}\longrightarrow\prod_{s\in S}\pi_{0}X_{s}\]
be the unique map such that the triangle below commutes
for every \(s\in S\). In other words, \(\gamma(q(x_{s}\mid s\in S))=(qx_{s}\mid s\in S)\in\prod_{s\in S}\pi_{0}X_{s}\) for any \((x_{s}\mid s\in S)\) in \(\prod_{s\in S}X_{s}\).
To prove that \(\gamma\) is injective assume that \(\gamma(q(x_{s}\mid s\in S))=\gamma(q(y_{s}\mid s\in S))\) in \(\prod_{s\in S}\pi_{0}X_{s}\). That is, \(qx_{s}=qy_{s}\) in \(\pi_{0}X_{s}\) for every \(s\in S\). By [12, Theorem 6.1.21] we have \(q(x_{s}\mid s\in S)=q(y_{s}\mid s\in S)\) in \(\pi_{0}\left(\prod_{s\in S}X_{s}\right)\), so \(\gamma\) is injective.
To prove that \(\gamma\) is surjective observe that the following diagram commutes
for every \(s\in S\), so the inner triangle commutes. As products of surjections are surjective, the inner vertical map is surjective and hence so is \(\gamma\), the bottom map of the triangle.
We next identify a related construction which will provide a useful alternative description of \(\pi_{0}\) when restricted to \(\mathsf{KHaus}\). Let us write \(\operatorname{C}\left(X,\mathbb{2}\right)\) for the set of continuous maps from the space \(X\) to the discrete two-point space \(\mathbb{2}\coloneqq\{0,1\}\). There is a canonical continuous function
\[E=\langle f\mid f\in\operatorname{C}\left(X,\mathbb{2}\right)\rangle\colon X \longrightarrow\mathbb{2}^{\operatorname{C}\left(X,\mathbb{2} \right)}, \tag{4}\] \[x\longmapsto(fx\mid f\in\operatorname{C}\left(X,\mathbb{2} \right)).\]
For any subset \(S\subseteq X\), write \(\chi_{S}\colon X\to\mathbb{2}\) for the characteristic function defined by \(\chi_{S}x=1\) if, and only if, \(x\in S\). Then \(S\) is clopen precisely when \(\chi_{S}\in\operatorname{C}\left(X,\mathbb{2}\right)\). Thus, \(E\) in (4) can equivalently be described as the function that sends each point \(x\in X\) to the set of clopen subsets of \(X\) that contain \(x\).
In order to prove the next lemma recall [12, p. 356] that the _quasi-component_ of \(x\in X\) is defined as
\[\widetilde{C}_{x}\coloneqq\bigcap\{S\subseteq X\mid S\text{ is clopen, and }x\in S\}.\]
It is clear that the quasi-components of a space \(X\) partition \(X\) into closed non-empty sets. The relation between \(E\) and quasi-components may be stated as follows.
**Lemma 7.5**.: _For any \(x,x^{\prime}\in X\), \(Ex=Ex^{\prime}\) if and only if \(\widetilde{C}_{x}=\widetilde{C}_{x^{\prime}}\)._
Proof.: If \(Ex=Ex^{\prime}\) then clearly \(\widetilde{C}_{x}=\widetilde{C}_{x^{\prime}}\). For the converse assume that \(\widetilde{C}_{x}=\widetilde{C}_{x^{\prime}}\) and let \(S\subseteq X\) be a clopen containing \(x\). Then \(x^{\prime}\in\widetilde{C}_{x^{\prime}}=\widetilde{C}_{x}\subseteq S\). That is, \(x^{\prime}\in S\).
The reader should beware that the quasi-component \(\widetilde{C}_{x}\) of \(x\in X\) in general fails to be connected. Indeed, the inclusion \(C_{x}\subseteq\widetilde{C}_{x}\) always holds for each \(x\in X\)[12, Theorem 6.1.22], and may be proper [12, Example 6.1.24]. However:
**Lemma 7.6**.: _For any \(X\) there exists a unique \(E^{\prime}\colon\pi_{0}X\to\mathbb{2}^{\operatorname{C}\left(X,\mathbb{2} \right)}\) such that the following diagram_
_commutes._
Proof.: Let \(x,x^{\prime}\in X\) be such that \(x\sim x^{\prime}\); that is, \(C_{x}=C_{x^{\prime}}\). Then
\[x\in C_{x}\cap C_{x^{\prime}}\subseteq\widetilde{C}_{x}\cap\widetilde{C}_{x^{ \prime}}\]
so, as quasi-components are equal or disjoint, \(\widetilde{C}_{x}=\widetilde{C}_{x^{\prime}}\). That is, \(Ex=Ex^{\prime}\) by Lemma 7.5.
Let \(X\xrightarrow{D}\pi_{0}^{\prime}X\xrightarrow{m}\mathbb{2}^{\operatorname{C}\left(X,\mathbb{2}\right)}\) be the epi/regular-mono factorization of the canonical map \(E\) in (4). Then the following square commutes
by Lemma 7.6 and, as \(q\) is regular-epi and \(m\) is monic, there is exactly one continuous map \(c\colon\pi_{0}(X)\to\pi_{0}^{\prime}(X)\) making the inner triangles commute. Since \(D\) is epic, so is \(c\). Also, since \(m\) is a regular mono, \(\pi_{0}^{\prime}X\) carries the subspace topology inherited from the product \(\mathbb{2}^{\operatorname{C}\left(X,\mathbb{2}\right)}\) and, as the latter is a Stone space, \(\pi_{0}^{\prime}X\) is Hausdorff.
**Lemma 7.7**.: _If \(X\) is compact Hausdorff then \(c\colon\pi_{0}X\to\pi_{0}^{\prime}X\) is an isomorphism and these isomorphic spaces are Stone spaces._
Proof.: First recall [12, Theorem 6.1.23] that, in any compact Hausdorff space \(X\), the equality \(C_{x}=\widetilde{C}_{x}\) holds for each \(x\in X\). In other words, in this case, the function \(\pi_{0}X\to\pi_{0}^{\prime}X\) is bijective. Also, since \(X\) is compact, so is \(\pi_{0}X\) because \(q\) is surjective. Hence, as we already know that \(\pi_{0}^{\prime}X\) is Hausdorff, the Closed Map Lemma implies that \(c\) is an isomorphism.
Similarly, compactness of \(X\) implies compactness of \(\pi_{0}^{\prime}X\) and hence the Closed Map Lemma implies that \(m\) is closed. Therefore, \(\pi_{0}^{\prime}X\) is a closed subspace of the Stone space \(\mathbb{2}^{\operatorname{C}\left(X,\mathbb{2}\right)}\).
It is classical that each Stone space is totally disconnected, so there is a full inclusion \(\mathsf{Stone}\to\mathsf{TD}\) such that the following diagram
commutes. Lemma 7.7 implies that the composite \(\mathsf{KHaus}\to\mathsf{Top}\xrightarrow{\pi_{0}}\mathsf{TD}\) factors through the full inclusion \(\mathsf{Stone}\to\mathsf{TD}\). The factorization will be conveniently denoted by \(\pi_{0}\colon\mathsf{KHaus}\to\mathsf{Stone}\).
**Theorem 7.8**.: _The functor \(\pi_{0}\colon\mathsf{KHaus}\to\mathsf{Stone}\) is left adjoint to the full inclusion \(\mathsf{Stone}\to\mathsf{KHaus}\), and preserves all set-indexed products. _
Proof.: Since, as observed above, \(\pi_{0}\colon\mathsf{Top}\to\mathsf{TD}\) restricts to \(\pi_{0}\colon\mathsf{KHaus}\to\mathsf{Stone}\), the fact that the former is a left adjoint to \(\mathsf{TD}\to\mathsf{Top}\) (Lemma 7.7) restricts to the fact that \(\pi_{0}\colon\mathsf{KHaus}\to\mathsf{Stone}\) is left adjoint to \(\mathsf{Stone}\to\mathsf{KHaus}\). It is standard that products in \(\mathsf{KHaus}\) and in \(\mathsf{Stone}\) agree with products in \(\mathsf{Top}\) (using, in particular, Tychonoff's Theorem that any product of compact spaces is compact), so Proposition 7.4 entails that \(\pi_{0}\colon\mathsf{KHaus}\to\mathsf{Stone}\) preserves all set-indexed products.
## 8. Spectra of MV-algebras
In this section we recall the material about spectra of MV-algebras that is needed in the sequel.
Recall that an ideal \(\mathfrak{p}\) of an MV-algebra \(A\) is _prime_ if it is proper, and the quotient \(A/\mathfrak{p}\) is totally ordered. The (_prime_) _spectrum_ of an MV-algebra \(A\) is
\[\operatorname{Spec}A\coloneqq\{\mathfrak{p}\subseteq A\mid\mathfrak{p}\text{ is a prime ideal of }A\}\]
topologised into the _spectral space_ of \(A\), as follows. For a subset \(S\subseteq A\), define
\[\mathbb{V}\left(S\right) \coloneqq\{\mathfrak{p}\in\operatorname{Spec}A\mid S\subseteq \mathfrak{p}\},\] \[\mathbb{S}\left(S\right) \coloneqq\operatorname{Spec}A\setminus\mathbb{V}\left(S\right)= \{\mathfrak{p}\in\operatorname{Spec}A\mid S\not\subseteq\mathfrak{p}\}.\]
The set \(\mathbb{V}\left(S\right)\) is called the _vanishing locus_, or _zero set_, of \(S\), while \(\mathbb{S}\left(S\right)\) is called its _support_. If \(a\in A\), write \(\mathbb{V}\left(a\right)\) as a shorthand for \(\mathbb{V}\left(\{a\}\right)\), and similarly for \(\mathbb{S}\left(a\right)\). Then the collection
\[\{\mathbb{V}\left(I\right)\mid\text{$I$ is an ideal of }A\}\]
is the set of closed sets for a topology on \(\operatorname{Spec}A\) that makes the latter a spectral space in the sense of Hochster [19]. The collection
\[\{\mathbb{S}\left(a\right)\mid a\in A\}\]
is a basis of compact open sets for this topology; see [3, Chapitre 10] and [30, Chapter 4]. The topology is variously known as the _Stone_, _Zariski_, or _hull-kernel_ topology of \(A\).
The assignment \(A\mapsto\operatorname{Spec}A\) extends to a functor \(\mathsf{MV}^{\operatorname{op}}\to\mathsf{Top}\), because inverse images of prime ideals along homomorphisms are prime. Although it is common to take the codomain of \(\operatorname{Spec}\) as the category of spectral spaces and spectral maps, for our purposes in this paper it is expedient to regard \(\operatorname{Spec}\) as taking values in \(\mathsf{Top}\).
The _maximal spectrum_ of an MV-algebra \(A\) is
\[\operatorname{Max}A\coloneqq\{\mathfrak{m}\subseteq A\mid\mathfrak{m}\text{ is a maximal ideal of }A\}.\]
We have \(\operatorname{Max}A\subseteq\operatorname{Spec}A\), or equivalently, any simple MV-algebra is totally ordered (see e.g. [8, 3.5.1]). The _maximal spectral space_ of \(A\) is the set \(\operatorname{Max}A\) equipped with the subspace topology it inherits from \(\operatorname{Spec}A\). Then \(\operatorname{Max}A\) is a compact Hausdorff space [30, Proposition 4.15], and every compact Hausdorff space arises in this manner from some MV-algebra \(A\)[30, Theorem 4.16(iv)].
The standard example of MV-algebra, the interval \([0,1]\) equipped with the constant \(0\) and the operations \(\oplus\), \(\neg\), generalises as follows. If \(X\) is any set, the collection \([0,1]^{X}\) of all functions from \(X\) to \([0,1]\) inherits the structure of an MV-algebra upon defining operations pointwise. If, additionally, \(X\) is a topological space, since \(\oplus\colon[0,1]^{2}\to[0,1]\), \(\neg\colon[0,1]\to[0,1]\), and \(0\) are continuous with respect to the Euclidean topology of \([0,1]\), the subset
\[\operatorname{C}(X)\coloneqq\{f\colon X\to[0,1]\mid f\text{ is continuous}\} \tag{5}\]
is a subalgebra of the MV-algebra \([0,1]^{X}\). We shall describe a natural MV-homomorphism \(\eta_{A}\colon A\longrightarrow\operatorname{C}\left(\operatorname{Max}A\right)\), for each MV-algebra \(A\). Its existence descends from Hölder's Theorem (Lemma 6.8), which allows us to define a close analogue of the Gelfand transform in functional analysis. Indeed, in light of that result, to \(a\in A\) and \(\mathfrak{m}\in\operatorname{Max}A\) we associate the real number \(\mathfrak{h}_{\mathfrak{m}}(a/\mathfrak{m})\in[0,1]\), obtaining the function
\[\widehat{a}\colon\operatorname{Max}A \longrightarrow[0,1] \tag{6}\] \[\mathfrak{m} \longmapsto\mathfrak{h}_{\mathfrak{m}}(\tfrac{a}{\mathfrak{m}}).\]
It can be shown [30, 4.16.iii] that the function (6) is continuous with respect to the Stone topology of \(\operatorname{Max}A\) and the Euclidean topology of \([0,1]\). We thereby arrive at the announced homomorphism
\[\eta_{A}\colon A \longrightarrow\operatorname{C}(\operatorname{Max}A) \tag{7}\] \[a \longmapsto\widehat{a}\]
for each MV-algebra \(A\).
**Lemma 8.1**.: _For any MV-homomorphism \(h\colon A\to B\) and any \(\mathfrak{m}\in\operatorname{Max}B\) we have \(h^{-1}(\mathfrak{m})\in\operatorname{Max}A\). Moreover, the inverse-image map \(h^{-1}\colon\operatorname{Max}B\to\operatorname{Max}A\) is continuous with respect to the Stone topology._
Proof.: The first assertion is proved in [8, 1.2.16]. The second assertion is a straightforward verification using the definition of Stone topology.
In light of Lemma 8.1 we henceforth regard \(\operatorname{Max}\) as a functor:
\[\operatorname{Max}\colon\mathsf{MV}\longrightarrow\mathsf{KHaus}^{ \operatorname{op}}, \tag{8}\]
where \(\mathsf{KHaus}\) denotes the category of compact Hausdorff spaces and their continuous maps.
Given a continuous map \(f\colon X\to Y\) in \(\mathsf{KHaus}\), it is elementary that the induced function
\[\operatorname{C}(f)\colon \operatorname{C}(Y) \longrightarrow\operatorname{C}(X),\] \[g\in\operatorname{C}(Y) \longmapsto g\circ f\in\operatorname{C}(X)\]
is a morphism in \(\mathsf{MV}\). We therefore regard \(\operatorname{C}\) as a functor:
\[\operatorname{C}\colon\mathsf{KHaus}^{\operatorname{op}}\longrightarrow \mathsf{MV}.\]
There is an adjunction
\[\operatorname{Max}\dashv\operatorname{C}\colon\mathsf{KHaus}^{ \operatorname{op}}\rightarrow\mathsf{MV} \tag{9}\]
known as the _Cignoli-Dubuc-Mundici_ adjunction [9]; see [26, Section 3] for further references and details not mentioned below. Dually to (7), for any space \(X\) in \(\mathsf{KHaus}\) there is a continuous map
\[\epsilon_{X}\colon X \longrightarrow\operatorname{Max}\operatorname{C}(X) \tag{10}\] \[x \longmapsto\{f\in\operatorname{C}(X)\mid f(x)=0\},\]
and it is a standard fact that \(\epsilon_{X}\) is a homeomorphism. (Compare [30, 4.16].) Writing \(\operatorname{Id}_{\mathsf{C}}\) for the identity functor on a category \(\mathsf{C}\), we can summarise the adjunction as follows.
**Theorem 8.2** ([8, Propositions 4.1 and 4.2]).: _The functor \(\operatorname{Max}\) is left adjoint to the fully faithful \(\operatorname{C}\), i.e. \(\operatorname{Max}\dashv\operatorname{C}\colon\mathsf{KHaus}^{ \operatorname{op}}\rightarrow\mathsf{MV}\). The unit and the counit of the adjunction are the natural transformations \(\eta\colon\operatorname{Id}_{\mathsf{MV}}\rightarrow\operatorname{C} \operatorname{Max}\) and \(\epsilon\colon\operatorname{Max}\operatorname{C}\rightarrow\operatorname{Id} _{\mathsf{KHaus}^{\operatorname{op}}}\) whose components are given by (7) and (10), respectively. _
## 9. The Pierce functor preserves coproducts
The category \(\mathsf{BA}\) of Boolean algebras may be identified with the domain of the full subcategory \(\operatorname{I}\colon\mathsf{BA}\rightarrow\mathsf{MV}\) determined by the \(\operatorname{MV}\)-algebras whose operation \(\oplus\) is idempotent. It is then clear that \(\operatorname{I}\colon\mathsf{BA}\rightarrow\mathsf{MV}\) is the inclusion of a subvariety so, in particular, it has a left adjoint. It also has a right adjoint that we now describe.
We write \(\operatorname{P}A\) for the collection of all Boolean elements of the \(\operatorname{MV}\)-algebra \(A\). By [8, 1.5.4], \(\operatorname{P}A\) is the largest subalgebra of \(A\) that is a Boolean algebra. A homomorphism \(h\colon A\to B\) preserves Boolean elements, because the latter are defined by equational conditions. Therefore, \(h\) induces by restriction a function \(\operatorname{P}h\colon\operatorname{P}A\rightarrow\operatorname{P}B\) that is evidently a homomorphism of Boolean algebras. We thus obtain a functor
\[\operatorname{P}\colon\mathsf{MV}\longrightarrow\mathsf{BA}\]
from the category of \(\operatorname{MV}\)-algebras to that of Boolean algebras; we call it the _Pierce functor_ because of the close analogy with the theory developed in [32] for rings.
**Lemma 9.1**.: _The functor \(\operatorname{P}\) is right adjoint to the functor \(\operatorname{I}\)._
Proof.: This is a direct consequence of the fact that \(\operatorname{P}A\) is the largest Boolean subalgebra of \(A\), for any \(\operatorname{MV}\)-algebra \(A\).
The proof of Proposition 5.6--in particular, Lemma 5.4--makes it clear that \(\operatorname{P}\colon\mathsf{MV}\rightarrow\mathsf{BA}\) is essentially the 'complemented subobjects' functor \(\operatorname{B}\) determined by the extensive category \(\mathsf{MV}^{\operatorname{op}}\).
We now embark on the proof of the central fact that \(\operatorname{P}\colon\mathsf{MV}\rightarrow\mathsf{BA}\) preserves coproducts. Our aim is to reduce the problem to a situation where we can apply the topological results in Section 7.
**Lemma 9.2**.: _For any MV-algebra \(A\) and any element \(a\in A\), \(a\) is Boolean if, and only if, for each prime ideal \(\mathfrak{p}\) of \(A\), we have \(a/\mathfrak{p}\in\{0,1\}\subseteq A/\mathfrak{p}\)._
Proof.: Let \(C\) be any totally ordered MV-algebra. For \(x\in C\), either \(x\leqslant\neg x\) or \(\neg x\leqslant x\). If the former holds then \(x\land\neg x=x\), so that if \(x\) is Boolean then \(x=0\). If the latter holds then \(x\lor\neg x=x\), and thus \(x=1\) if \(x\) is Boolean. In summary, if \(x\in C\) is Boolean then \(x\in\{0,1\}\). The converse implication is clear. Summing up, the Boolean elements of \(C\) are precisely \(0\) and \(1\).
Boolean elements, being definable by equational conditions, are preserved by homomorphisms. Hence if \(a\) is Boolean then \(a/\mathfrak{p}\in A/\mathfrak{p}\) is Boolean, and therefore, since \(A/\mathfrak{p}\) is totally ordered, \(a/\mathfrak{p}\in\{0,1\}\) by the argument in the preceding paragraph. This proves the left-to-right implication in the statement of the lemma. For the converse implication, we recall that in any MV-algebra \(A\) we have \(\bigcap\operatorname{Spec}A=\{0\}\)[8, 1.3.3]. Hence, the function \(\iota\colon A\longrightarrow\prod_{\mathfrak{p}\in\operatorname{Spec}A}A/\mathfrak{p}\) defined by \(a\in A\longmapsto(a/\mathfrak{p})_{\mathfrak{p}\in\operatorname{Spec}A}\in\prod_{\mathfrak{p}\in\operatorname{Spec}A}A/\mathfrak{p}\) is an injective homomorphism. Assume that for each \(\mathfrak{p}\in\operatorname{Spec}A\) we have \(a/\mathfrak{p}\in\{0,1\}\). Since operations in \(\prod_{\mathfrak{p}\in\operatorname{Spec}A}A/\mathfrak{p}\) are computed pointwise, we infer \(\iota(a)\lor\neg\iota(a)=(a/\mathfrak{p})_{\mathfrak{p}\in\operatorname{Spec}A}\lor\neg(a/\mathfrak{p})_{\mathfrak{p}\in\operatorname{Spec}A}=1\), and therefore, since \(\iota\) is an isomorphism onto its range, \(a\lor\neg a=1\). This completes the proof.
**Lemma 9.3**.: _Let \(A\) be an MV-algebra, and suppose there exist \((\)possibly empty\()\) closed subsets \(X_{0},X_{1}\subseteq\operatorname{Spec}A\) with \(\operatorname{Spec}A=X_{0}\cup X_{1}\) and \(X_{0}\cap X_{1}=\emptyset\). Then there exists exactly one Boolean element \(b\in A\) such that \(b/\mathfrak{p}=0\) for each \(\mathfrak{p}\in X_{0}\) and \(b/\mathfrak{p}=1\) for each \(\mathfrak{p}\in X_{1}\)._
Proof.: By [3, 10.1.7], there is exactly one ideal \(I_{i}\) of \(A\) such that \(\mathbb{V}\left(I_{i}\right)=X_{i}\), \(i=0,1\). Consider the elements \(0,1\in A\). The fact that \(\operatorname{Spec}A\) is partitioned into \(X_{0}\) and \(X_{1}\) entails \(I_{0}\lor I_{1}=A\) and \(I_{0}\cap I_{1}=\{0\}\), so that the Chinese Remainder Theorem [3, Lemme 10.6.3] applied to \(0\) and \(X_{0}\), and to \(1\) and \(X_{1}\), yields one element \(b\in A\) such that \(b/I_{0}=0\) and \(b/I_{1}=1\). Using the Third Isomorphism Theorem, the latter conditions imply \(b/\mathfrak{p}\in\{0,1\}\) for each \(\mathfrak{p}\in\operatorname{Spec}A\) so that by Lemma 9.2 we conclude that \(b\) is Boolean. If \(b^{\prime}\in A\) also satisfies \(b^{\prime}/\mathfrak{p}=0\) for each \(\mathfrak{p}\in X_{0}\) and \(b^{\prime}/\mathfrak{p}=1\) for each \(\mathfrak{p}\in X_{1}\), then \(b/\mathfrak{p}=b^{\prime}/\mathfrak{p}\) for \(\mathfrak{p}\in\operatorname{Spec}A\), so that \(b=b^{\prime}\) because \(\bigcap\operatorname{Spec}A=\{0\}\)[8, 1.3.3].
We record a corollary that will have further use in the paper. It is the exact analogue for MV-algebras of a standard result for the category \(\mathsf{Ring}\), see e.g. [21, Theorem 7.3]. In order to state it, let us write \(\operatorname{Cp}X\) for the Boolean algebra of clopen sets of any topological space \(X\). Let us then observe that the uniqueness assertion about the Boolean element \(b\) in Lemma 9.3 allows us to define, for any MV-algebra \(A\), a function
\[\chi_{A}\colon\operatorname{Cp}(\operatorname{Spec}A)\longrightarrow \operatorname{P}A \tag{11}\]
that assigns to each \(X_{0}\in\operatorname{Cp}\left(\operatorname{Spec}A\right)\) the unique element \(b\in\operatorname{P}A\) with the properties stated in that lemma with respect to \(X_{0}\) and \(X_{1}\coloneqq\operatorname{Spec}A\setminus X_{0}\). It is then elementary to verify that \(\chi_{A}\) is a homomorphism of Boolean algebras.
**Corollary 9.4**.: _For any MV-algebra \(A\), the function_
\[\phi_{A}\colon\operatorname{P}A\longrightarrow\operatorname{Cp}( \operatorname{Spec}A)\]
_that sends \(b\in\operatorname{P}A\) to \(\mathbb{V}\left(b\right)\in\operatorname{Cp}(\operatorname{Spec}A)\) is an isomorphism of Boolean algebras whose inverse is the homomorphism \(\chi_{A}\) in (11). In particular, \(A\) is indecomposable if, and only if, \(\operatorname{Spec}A\) is connected._
Proof.: By Lemma 9.2 it is clear that \(\mathbb{V}\left(b\right)\) for each \(b\in\operatorname{P}A\) is clopen and that \(\phi_{A}\) is a homomorphism. Let us consider \(b^{\prime}\coloneqq\chi_{A}\phi_{A}b\). For each \(\mathfrak{p}\in\mathbb{V}\left(b\right)\) we have \(b/\mathfrak{p}=0\) by definition of \(\mathbb{V}\), and \(b^{\prime}/\mathfrak{p}=0\) by the defining property of \(b^{\prime}\). Similarly, for each \(\mathfrak{p}\in\operatorname{Spec}A\setminus\mathbb{V}\left(b\right)\) we have \(b/\mathfrak{p}=b^{\prime}/\mathfrak{p}=1\). Thus, \(b\) and \(b^{\prime}\) agree at each prime and
thus \(b=b^{\prime}\) because \(\bigcap\operatorname{Spec}A=\{0\}\)[8, 1.3.3]. Conversely, for \(X_{0}\in\operatorname{Cp}\left(\operatorname{Spec}A\right)\), consider the clopen \(\phi_{A}\chi_{A}X_{0}\). For \(\mathfrak{p}\in\operatorname{Spec}A\), by definition of \(\chi_{A}\) we have \(\mathfrak{p}\in X_{0}\) if, and only if, \((\chi_{A}X_{0})/\mathfrak{p}=0\). Hence \(\phi_{A}(\chi_{A}X_{0})=X_{0}\), and the proof is complete.
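For a concrete instance of Corollary 9.4, take \(A=[0,1]\times[0,1]\). Boolean elements are computed coordinatewise, so
\[\operatorname{P}A=\{0,1\}\times\{0,1\},\]
a four-element Boolean algebra; correspondingly, under the standard identification \(\operatorname{Spec}(A\times B)\cong\operatorname{Spec}A\sqcup\operatorname{Spec}B\), the space \(\operatorname{Spec}A\) has exactly two connected components, each a copy of the connected space \(\operatorname{Spec}[0,1]\) (the algebra \([0,1]\) is indecomposable because its only Boolean elements are \(0\) and \(1\)).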
The _radical_ of \(A\) is the ideal
\[\operatorname{Rad}A\coloneqq\bigcap\operatorname{Max}A.\]
In accordance with standard terminology in general algebra, one says \(A\) is _semisimple_ precisely when \(\operatorname{Rad}A=\{0\}\). We note in passing that, unless \(A\) is semisimple, the statement in Lemma 9.2 cannot be strengthened to "\(a\) is Boolean if, and only if, for each \(\mathfrak{m}\in\operatorname{Max}A\) we have \(a/\mathfrak{m}\in\{0,1\}\subseteq A/\mathfrak{m}\)".
**Lemma 9.5**.: _Let \(A\) be an MV-algebra, and suppose there exist \((\)possibly empty\()\) closed subsets \(X_{0},X_{1}\subseteq\operatorname{Max}A\) with \(\operatorname{Max}A=X_{0}\cup X_{1}\) and \(X_{0}\cap X_{1}=\emptyset\). Then there exists exactly one Boolean element \(b\in A\) such that \(b/\mathfrak{m}=0\) for each \(\mathfrak{m}\in X_{0}\) and \(b/\mathfrak{m}=1\) for each \(\mathfrak{m}\in X_{1}\)._
Proof.: By [8, 1.2.12], each \(\mathfrak{p}\in\operatorname{Spec}A\) is contained in exactly one \(\lambda\mathfrak{p}\in\operatorname{Max}A\), so that we can define a function
\[\lambda\colon\operatorname{Spec}A \longrightarrow\operatorname{Max}A, \tag{12}\] \[\mathfrak{p} \longmapsto\lambda\mathfrak{p}.\]
By [3, 10.2.3], this function is continuous, and it is a retraction for the inclusion \(\operatorname{Max}A\subseteq\operatorname{Spec}A\). Therefore, \(X_{0}^{\prime}\coloneqq\lambda^{-1}[X_{0}]\) and \(X_{1}^{\prime}\coloneqq\lambda^{-1}[X_{1}]\) are closed subsets of \(\operatorname{Spec}A\) satisfying \(\operatorname{Spec}A=X_{0}^{\prime}\cup X_{1}^{\prime}\) and \(X_{0}^{\prime}\cap X_{1}^{\prime}=\emptyset\). Now Lemma 9.3 provides a unique Boolean element \(b\) such that \(b/\mathfrak{p}=0\) for each \(\mathfrak{p}\in X_{0}^{\prime}\), and \(b/\mathfrak{p}=1\) for each \(\mathfrak{p}\in X_{1}^{\prime}\). As \(X_{i}\subseteq X_{i}^{\prime}\), \(i=0,1\), \(b\) satisfies the condition in the statement. Concerning uniqueness, suppose \(a\) is a Boolean element of \(A\) such that \(a/\mathfrak{m}=0\) for each \(\mathfrak{m}\in X_{0}\), and \(a/\mathfrak{m}=1\) for each \(\mathfrak{m}\in X_{1}\). We claim \(a=b\). Indeed, let \(\mathfrak{p}\in X_{i}^{\prime}\), \(i=0,1\). Then \(a/\lambda\mathfrak{p}=i\) because \(\lambda\mathfrak{p}\in X_{i}\). The inclusion \(\mathfrak{p}\subseteq\lambda\mathfrak{p}\) induces a quotient map \(q\colon A/\mathfrak{p}\to A/\lambda\mathfrak{p}\). By Lemma 9.2 we have \(a/\mathfrak{p}\in\{0,1\}\). Also, \(A/\lambda\mathfrak{p}\) is nontrivial. Therefore since \(q(a/\mathfrak{p})=a/\lambda\mathfrak{p}=i\) it follows that \(a/\mathfrak{p}=i\). By the uniqueness assertion in Lemma 9.3 we conclude \(a=b\).
**Remark 9.6**.: We observe that the analogue of Lemma 9.5 about coproduct decompositions of \(\operatorname{Max}A\) being indexed by idempotent elements does not hold in general for rings. Indeed, spectra of MV-algebras always are completely normal--which affords the existence of the map \(\lambda\) used in the proof above--whereas spectra of rings are not, in general. For more on the important role that the continuous retraction \(\lambda\) in (12) plays in the theory of lattice-groups and MV-algebras, see [2] and the references therein.
Our next objective is to show that \(\operatorname{P}\) sends the unit \(\eta\) of \(\operatorname{C}\dashv\operatorname{Max}\) in (7) to an isomorphism.
**Lemma 9.7**.: _For any MV-algebra \(A\), the morphism \(\operatorname{P}\eta_{A}\colon\operatorname{P}A\to(\operatorname{P}\operatorname{ C}\operatorname{Max})A\) is an isomorphism._
Proof.: Let \(b^{\prime}\in\operatorname{C}\left(\operatorname{Max}A\right)\) be Boolean, with the aim of exhibiting \(b\in\operatorname{P}A\) such that \(\eta_{A}(b)=b^{\prime}\). Evaluating the defining equality \(b^{\prime}\oplus b^{\prime}=b^{\prime}\) at each \(\mathfrak{m}\in\operatorname{Max}A\) we see that \(b^{\prime}(\mathfrak{m})\in\{0,1\}\) holds. Therefore, the two closed subsets \(X_{0}\coloneqq b^{\prime-1}[\{0\}]\) and \(X_{1}\coloneqq b^{\prime-1}[\{1\}]\) of \(\operatorname{Max}A\) satisfy the hypotheses of Lemma 9.5. We conclude that there exists one Boolean element \(b\in A\) with \(b/\mathfrak{m}=0\) for \(\mathfrak{m}\in X_{0}\) and \(b/\mathfrak{m}=1\) for \(\mathfrak{m}\in X_{1}\). By the definition of \(\eta_{A}\) this entails at once \(\eta_{A}(b)=b^{\prime}\), so \(\eta_{A}\) is surjective. By the uniqueness statement in Lemma 9.5, \(\eta_{A}\) is also injective.
Our next step will be to factor \(\mathrm{P}\) in a manner that is useful for our purposes.
Lemma 9.7 implies that the functors \(\mathsf{MV}\to\mathsf{BA}\) in the diagram below
(13) [diagram not recovered by extraction: the triangle of functors \(\mathrm{P}\colon\mathsf{MV}\to\mathsf{BA}\), \(\operatorname{Max}\colon\mathsf{MV}\to\mathsf{KHaus}^{\mathrm{op}}\), and \(\mathrm{PC}\colon\mathsf{KHaus}^{\mathrm{op}}\to\mathsf{BA}\)]
are naturally isomorphic.
**Lemma 9.8**.: _The functor \(\mathrm{PC}\colon\mathsf{KHaus}^{\mathrm{op}}\to\mathsf{BA}\) preserves all set-indexed coproducts._
Proof.: Using Stone duality, it is an exercise to verify that the composite functor \(\mathrm{PC}\colon\mathsf{KHaus}^{\mathrm{op}}\to\mathsf{BA}\) induces, by taking opposite categories on each side, a functor naturally isomorphic to the functor \(\pi_{0}\colon\mathsf{KHaus}\to\mathsf{Stone}\) of Section 7. The lemma then follows from Theorem 7.8.
We finally obtain the main result of this section.
**Theorem 9.9**.: _The Pierce functor \(\mathrm{P}\colon\mathsf{MV}\to\mathsf{BA}\) preserves all set-indexed coproducts._
Proof.: As we saw above, the triangle (13) commutes up to a natural isomorphism. Further, Max preserves arbitrary set-indexed colimits because it is left adjoint by Theorem 8.2; and \(\mathrm{PC}\) preserves set-indexed coproducts by Lemma 9.8. Hence \(\mathrm{P}\) preserves set-indexed coproducts.
## 10. Main result, and final remarks
Let \(\mathcal{A}\) be a coextensive category. Recall from the introduction that an object \(A\) in \(\mathcal{A}\) is _separable_ if \(A\) is decidable as an object in the extensive \(\mathcal{A}^{\mathrm{op}}\). Thus, \(A\) is separable if, and only if, there is a morphism \(f\colon A+A\to A\) such that the span
is a product diagram.
**Theorem 10.1**.: _Separable MV-algebras coincide with finite products of subalgebras of \([0,1]\cap\mathbb{Q}\)._
Proof.: By Theorem 9.9 we have a reflection \(\pi_{0}\dashv\mathrm{I}^{\mathrm{op}}\colon\mathsf{Stone}\to\mathsf{MV}^{ \mathrm{op}}\) such that both adjoints preserve finite products and finite coproducts, so Proposition 4.7 implies that every decidable object in \(\mathsf{MV}^{\mathrm{op}}\) is a finite coproduct of subterminal objects. Theorem 6.9 completes the proof.
We conclude the paper with some final remarks that point to further research aimed at developing an 'arithmetic connected-component functor'. The guiding result from Algebraic Geometry is this: the category \(\mathcal{E}\) of étale schemes over \(K\) is reflective as a subcategory of that of locally algebraic schemes over \(K\) [11, Proposition I, §4, 6.5]. The left adjoint there is denoted by \(\pi_{0}\), and \(\pi_{0}X\) is called the _\(k\)-scheme of connected components of \(X\)_ in Definition I, §4, 6.6 op. cit. Moreover, it is then proved that \(\pi_{0}\) preserves finite coproducts. In terms of extensive categories, this says that for \(\mathcal{C}=\mathcal{E}^{\mathrm{op}}\), the subcategory \(\mathrm{Dec}\,\mathcal{C}\to\mathcal{C}\) has a finite-product preserving left adjoint. We announce that the same holds for \(\mathcal{C}=\mathsf{MV}^{\mathrm{op}}_{\mathrm{fp}}\), where \(\mathsf{MV}_{\mathrm{fp}}\) is the category of finitely presentable MV-algebras. The proof will be published elsewhere, but it is appropriate to indicate here the role of locally finite MV-algebras in connection with that result.
An MV-algebra \(A\) is _locally finite_ if each finitely generated subalgebra of \(A\) is finite. Finite MV-algebras are evidently locally finite; \([0,1]\cap\mathbb{Q}\) is an example of a
locally finite MV-algebra that is not finite. Locally finite MV-algebras were studied in [9]; see also [10] for a generalisation of the results in [9], and [30, Section 8.3] for further material and [1] for recent progress on the topic. The connection with Theorem 6.9 is the following characterisation of rational algebras.
**Lemma 10.2**.: _For any MV-algebra \(A\) the following are equivalent._
1. \(A\) _is simple and locally finite._
2. \(A\) _is a subalgebra of_ \([0,1]\cap\mathbb{Q}\)_._
Proof.: (i) \(\Rightarrow\) (ii). By Hölder's Theorem (Lemma 6.8), since \(A\) is simple there is exactly one monomorphism \(A\to[0,1]\); let us therefore identify \(A\) with a subalgebra of \([0,1]\). If \(A\) contains an irrational number \(\rho\in[0,1]\) then the subalgebra generated by \(\rho\) is infinite. Indeed, the Euclidean algorithm of successive subtractions applied to \(\rho,1\in\mathbb{R}\) does not terminate (because \(\rho\) and \(1\) are incommensurable) and produces an infinite descending sequence of distinct, non-zero elements of \(A\). Thus, \(A\subseteq[0,1]\cap\mathbb{Q}\) by local finiteness.
(ii) \(\Rightarrow\) (i). Any subalgebra of \([0,1]\) evidently has no proper non-trivial ideal, by the Archimedean property of the real numbers, and is therefore simple. If, moreover, \(A\subseteq[0,1]\cap\mathbb{Q}\), the subgroup of \(\mathbb{R}\) generated by finitely many \(a_{1},\ldots,a_{n}\in A\) together with \(1\) is discrete, and therefore by [8, 3.5.3] the subalgebra generated by \(a_{1},\ldots,a_{n}\) is a finite chain. Thus \(A\) is locally finite.
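For a concrete instance of (ii) \(\Rightarrow\) (i) (a quick check with the standard operations \(x\oplus y=\min(x+y,1)\), \(\neg x=1-x\)): the subalgebra of \([0,1]\cap\mathbb{Q}\) generated by \(\frac{1}{3}\) is the finite chain \(\{0,\frac{1}{3},\frac{2}{3},1\}\), in accordance with the discreteness argument above, whereas, by the Euclidean argument in (i) \(\Rightarrow\) (ii), any irrational generator would produce an infinite subalgebra.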
**Corollary 10.3**.: _An MV-algebra \(A\) is separable if, and only if, \(A\) is locally finite and \(\mathrm{P}A\) is finite._
Proof.: If \(A\) is separable then, by Theorem 10.1, \(A=\prod_{i\in I}A_{i}\) with \(I\) finite and \(A_{i}\subseteq[0,1]\cap\mathbb{Q}\) for each \(i\in I\). In particular, \(\mathrm{P}\,A\) is finite. Also, each \(A_{i}\) is locally finite by Lemma 10.2. As finite products of locally finite algebras are locally finite, \(A\) is locally finite. Conversely, assume that \(A\) is locally finite and \(\mathrm{P}\,A\) is finite. Then, \(A=\prod_{i\in I}A_{i}\) with \(I\) finite and \(A_{i}\) directly indecomposable for each \(i\in I\). As locally finite algebras are closed under quotients, each \(A_{i}\) is locally finite. Hence, each \(A_{i}\) is locally finite and indecomposable. But then each \(A_{i}\) must be simple. Indeed, Corollary 9.4 entails that \(\mathrm{Spec}\,A_{i}\) is connected, and \(\mathrm{Spec}\,A_{i}=\mathrm{Max}\,A_{i}\) by [9, Theorem 5.1]. Then the spectral space \(\mathrm{Spec}\,A_{i}\) is Hausdorff, and thus has a base of clopen sets--hence, being compact, it is a Stone space. Since Stone spaces are totally disconnected, connectedness of \(\mathrm{Spec}\,A_{i}\) entails that \(\mathrm{Spec}\,A_{i}\) is a singleton, so \(A_{i}\) has exactly two ideals, and so is simple. By Lemma 10.2, each \(A_{i}\) is then a subalgebra of \([0,1]\cap\mathbb{Q}\). Therefore, \(A\) is separable by Theorem 10.1.
Now, let \(\mathsf{LF}\to\mathsf{MV}\) be the full subcategory determined by locally finite MV-algebras. Let us prove that this subcategory is coreflective.
An element \(a\) of an MV-algebra \(A\) is _of finite order-rank_4 if the subalgebra \(B\) it generates in \(A\) is finite. If \(B\) is terminal, we say the order-rank of \(a\) is zero. Otherwise, there exists exactly one \(n\in\{1,2,\ldots\}\) such that \(B=C_{1}\times\cdots\times C_{n}\) with each \(C_{i}\) directly indecomposable and non-terminal, and we then say the order-rank of \(a\) is \(n\). We set
\[\mathrm{R}A\coloneqq\{a\in A\mid a\text{ is of finite order-rank}\}.\]
Footnote 4: The terminology we introduce here is best motivated using lattice-groups—please see Appendix A.
Note that \(\mathrm{P}A\subseteq\mathrm{R}A\), because any Boolean algebra is locally finite. For any MV-algebra \(A\) and subset \(G\subseteq A\), let us write \(\mathrm{S}G\) for the subalgebra of \(A\) generated by \(G\). When \(G=\{g\}\) we write \(\mathrm{S}g\) for \(\mathrm{S}\{g\}\).
**Lemma 10.4**.: _Any homomorphism of MV-algebras sends elements of finite order-rank to elements of finite order-rank._
Proof.: Let \(h\colon A\to B\) be a homomorphism and let \(a\in\operatorname{RA}\). Since \(h\) commutes with operations, a routine argument in general algebra shows that \(h[Sa]=\operatorname{S}\left(ha\right)\); since \(\operatorname{S}a\) is finite, so is \(\operatorname{S}\left(ha\right)\).
**Lemma 10.5**.: _For any MV-algebra \(A\), \(\operatorname{RA}\) is a locally finite subalgebra of \(A\). Further, \(\operatorname{RA}\) is the inclusion-largest locally finite subalgebra of \(A\)._
Proof.: Let \(F\coloneqq\{a_{1},\ldots,a_{n}\}\subseteq A\) be a finite subset of elements of finite order-rank, \(n\geqslant 0\) an integer. We need to show that the subalgebra \(\operatorname{S}F\) of \(A\) generated by \(F\) is finite. Induction on \(n\). If \(n=0\) then \(\operatorname{S}\emptyset\) is either the terminal one-element algebra or the initial two-element algebra. Now suppose \(G\coloneqq\{a_{1},\ldots,a_{n-1}\}\) is such that \(\operatorname{S}G\) is finite. The subalgebra \(\operatorname{S}a_{n}\) is also finite, because \(a_{n}\) is of finite order-rank by hypothesis. The subalgebra \(\operatorname{S}F\) is the least upper bound of \(\operatorname{S}G\) and of \(\operatorname{S}a_{n}\) in the lattice of subalgebras of \(A\), and therefore can be written as a quotient of the coproduct \(\operatorname{S}G+\operatorname{S}a_{n}\). In more detail, by the universal property of the coproduct, the inclusion maps \(\operatorname{S}G\subseteq\operatorname{S}F\) and \(\operatorname{S}a_{n}\subseteq\operatorname{S}F\) induce a unique homomorphism \(h\colon\operatorname{S}G+\operatorname{S}a_{n}\to A\) whose regular-epi/mono factorisation \(h=mq\) is such that \(m\colon\operatorname{S}\to A\) exhibits the subobject of \(A\) that is the join of the subobjects \(\operatorname{S}G\) and \(\operatorname{S}a_{n}\)--in particular, \(S\) is isomorphic to \(\operatorname{S}F\). So \(\operatorname{S}F\) is a quotient of the algebra \(\operatorname{S}G+\operatorname{S}a_{n}\). Since finite coproducts of finite MV-algebras are finite by [30, Corollary 7.9(iii)], \(\operatorname{S}G+\operatorname{S}a_{n}\) is finite and therefore so is \(\operatorname{S}F\).
To show that \(\operatorname{RA}\) is a subalgebra of \(A\), first note that clearly \(0\in\operatorname{RA}\). If \(a\in\operatorname{RA}\) then \(\neg a\) lies in the subalgebra generated by \(a\), which is finite; hence \(\neg a\) is of finite order-rank. If \(a,b\in\operatorname{RA}\), then \(a\oplus b\) lies in the subalgebra generated by \(\{a,b\}\), which is finite by the argument in the preceding paragraph; hence \(a\oplus b\) is of finite order-rank.
For the last assertion in the statement, let \(B\) be a locally finite subalgebra of \(A\). Given any \(b\in B\), the subalgebra generated by \(b\) in \(A\) is finite, by our assumption about \(B\); hence \(b\) is of finite order-rank, and \(b\in\operatorname{R}A\). This completes the proof.
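As an illustration (combining Lemma 10.2 with the argument just given, for the standard MV-algebra \([0,1]\)): an element \(a\in[0,1]\) generates a finite subalgebra precisely when \(a\) is rational, so
\[\mathrm{R}[0,1]=[0,1]\cap\mathbb{Q},\]
the inclusion-largest locally finite subalgebra of \([0,1]\).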
Lemmas 10.4 and 10.5 allow us to regard \(\operatorname{R}\) as a functor
\[\operatorname{R}\colon\operatorname{\mathsf{MV}}\longrightarrow\operatorname{ \mathsf{LF}}.\]
**Corollary 10.6**.: _The functor \(\operatorname{R}\colon\operatorname{\mathsf{MV}}\longrightarrow\operatorname{ \mathsf{LF}}\) is right adjoint to the full inclusion \(\operatorname{\mathsf{LF}}\longrightarrow\operatorname{\mathsf{MV}}\)._
Proof.: This is an immediate consequence of the fact that \(\operatorname{RA}\) is the largest locally finite subalgebra of the MV-algebra \(A\), as proved in Lemma 10.5.
**Remark 10.7**.: It is proved in [30, Theorem 8.10] that \(\operatorname{\mathsf{LF}}\) has all set-indexed products. This follows at once from Corollary 10.6: indeed, for any set-indexed family \(\{A_{i}\}_{i\in I}\) of locally finite MV-algebras the product of \(\{A_{i}\}_{i\in I}\) in \(\operatorname{\mathsf{LF}}\) is the coreflection \(\operatorname{R}\left(\prod_{i\in I}A_{i}\right)\) of the product \(\prod_{i\in I}A_{i}\) in \(\operatorname{\mathsf{MV}}\).
We have been unable to prove that \(\operatorname{R}^{\operatorname{op}}\colon\operatorname{\mathsf{MV}}^{ \operatorname{op}}\rightarrow\operatorname{\mathsf{LF}}^{\operatorname{op}}\) preserves finite products. However, writing \(\mathcal{C}\) for \(\operatorname{\mathsf{MV}}_{\operatorname{fp}}^{\operatorname{op}}\), we can show that the functor \(\operatorname{R}^{\operatorname{op}}\) restricts to a left adjoint \(\pi_{0}\colon\mathcal{C}\rightarrow\operatorname{Dec}\mathcal{C}\) to the inclusion \(\operatorname{Dec}\mathcal{C}\rightarrow\mathcal{C}\) and, moreover, it preserves finite products. As mentioned, the proof will appear elsewhere.
## Appendix A Separable unital lattice-ordered Abelian groups
For background on lattice-groups we refer to [3]. We recall that a _lattice-ordered group_, or _\(\ell\)-group_ for short, is a group that is also a lattice5 such that the group
operation distributes over binary meets and joins. We only consider Abelian \(\ell\)-groups, and thus adopt additive notation. The underlying group of an Abelian \(\ell\)-group is torsion-free, and its underlying lattice is distributive. Write \(\ell\mathsf{A}\) for the category of Abelian \(\ell\)-groups and of lattice-group homomorphisms. An element \(1\in G\) in an Abelian \(\ell\)-group is a (_strong order_) _unit_ if for each \(g\in G\) there is a natural number \(n\) such that \(n1\geqslant g\). An Abelian \(\ell\)-group \(G\) equipped with a distinguished unit \(1\) is called _unital_, and denoted \((G,1)\). Write \(\ell\mathsf{A}_{1}\) for the category of unital Abelian \(\ell\)-groups and of unit-preserving lattice-group homomorphisms.
There is a functor \(\Gamma\colon\ell\mathsf{A}_{1}\to\mathsf{MV}\) that acts on objects by sending \((G,1)\) to its unit interval \([0,1]\coloneqq\{x\in G\mid 0\leqslant x\leqslant 1\}\), and on morphisms by restriction; here, \([0,1]\) is regarded as an MV-algebra under the operations \(x\oplus y\coloneqq(x+y)\wedge 1\), \(\neg x\coloneqq 1-x\), and \(0\). This functor has an adjoint \(\Xi\colon\mathsf{MV}\to\ell\mathsf{A}_{1}\), and Mundici proved in [28] that \(\Gamma\) and \(\Xi\) constitute an equivalence of categories.
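Standard instances of \(\Gamma\) (immediate from the definition): \(\Gamma(\mathbb{Z},1)\) is the two-element Boolean algebra \(\{0,1\}\); \(\Gamma(\mathbb{Q},1)=[0,1]\cap\mathbb{Q}\); and \(\Gamma(\mathbb{R},1)\) is the standard MV-algebra \([0,1]\).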
The initial object in \(\ell\mathsf{A}_{1}\) is \((\mathbb{Z},1)\), and the terminal object is the trivial unital \(\ell\)-group \((\{0=1\},0)\). In analogy with the relationship between non-unital and unital rings, the category \(\ell\mathsf{A}\) has a zero object and is not coextensive, while the category \(\ell\mathsf{A}_{1}\) is. Separable unital Abelian \(\ell\)-groups are defined as for any coextensive category, cf. the beginning of Section 10.
An object \(G\) of \(\ell\mathsf{A}\) is _Archimedean_ if whenever \(nx\leqslant y\) holds in \(G\) for each positive integer \(n\), then \(x\leqslant 0\); and an object \((G,1)\) of \(\ell\mathsf{A}_{1}\) is called Archimedean if \(G\) is. The following characterisations hold: \((G,1)\) is Archimedean precisely when \(\Gamma(G,1)\) is semisimple; and \((G,1)\) is totally ordered and Archimedean precisely when \(\Gamma(G,1)\) is simple. Hölder's Theorem for the category \(\ell\mathsf{A}_{1}\) may be stated as follows: _Any \((G,1)\) that is Archimedean and totally ordered has exactly one morphism to \((\mathbb{R},1)\), and that morphism is monic_ (equivalently, its underlying function is injective).
Let us say that an object \((G,1)\) of \(\ell\mathsf{A}_{1}\) is _rational_ if it is isomorphic to an ordered subgroup of the additive group \(\mathbb{Q}\) containing \(1\), where the order of \(G\) is inherited from the natural order of the rationals. Theorem 6.9 may be then formulated for the category \(\ell\mathsf{A}_{1}\) as follows.
**Theorem A.1**.: _For any unital Abelian \(\ell\)-group \((G,1)\) the following are equivalent._
1. \((G,1)\) _is rational._
2. \((G,1)\) _is non-trivial, and the unique map_ \((\mathbb{Z},1)\to(G,1)\) _is epic._
3. _The unique map_ \((\mathbb{Z},1)\to(G,1)\) _is monic and epic._
4. \((G,1)\) _is totally ordered and Archimedean, and the unique map_ \((\mathbb{Z},1)\to(G,1)\) _is epic._
An object \((G,1)\) of \(\ell\mathsf{A}_{1}\) is _Specker_ if its unit-interval MV-algebra \(\Gamma(G,1)\) is a Boolean algebra. Write \(\mathsf{Sp}_{1}\) for the full subcategory of \(\ell\mathsf{A}_{1}\) on the Specker objects. The inclusion functor \(\mathsf{Sp}_{1}\to\ell\mathsf{A}_{1}\) has a right adjoint \(\operatorname{P}\colon\ell\mathsf{A}_{1}\to\mathsf{Sp}_{1}\), the _Pierce functor_ for \(\ell\mathsf{A}_{1}\), and \(\operatorname{P}\) preserves arbitrary coproducts (Theorem 9.9). Our main result, Theorem 10.1, would be proved for the category \(\ell\mathsf{A}_{1}\) using this Pierce functor; it can be phrased as follows.
**Theorem A.2**.: _Separable unital Abelian \(\ell\)-groups coincide with finite products of rational unital Abelian \(\ell\)-groups._
**Remark A.3**.: Products in the category \(\ell\mathsf{A}\) are Cartesian products, because \(\ell\mathsf{A}\) is a variety of algebras. On the other hand, while \(\ell\mathsf{A}_{1}\) is equivalent to a variety by Mundici's cited theorem, its underlying-set functor is not right adjoint. Indeed, products in \(\ell\mathsf{A}_{1}\) are not, in general, Cartesian products. However, finite products in \(\ell\mathsf{A}_{1}\)_are_ Cartesian--the product of \((G,1)\) and \((H,1)\) is \((G\times H,(1,1))\) with the Cartesian projections.
An Abelian \(\ell\)-group is called a _simplicial group_ if it is isomorphic in \(\ell\mathsf{A}\) to a free Abelian group of finite rank \(\mathbb{Z}^{r}\) equipped with the coordinatewise order. A unit in
such a simplicial group is then any element \(1\in\mathbb{Z}^{r}\) whose each coordinate is strictly positive; the pair \((\mathbb{Z}^{r},1)\) is called a _unital simplicial group_. These lattice-groups play a key role in the representation theory of dimension groups, see e.g. [18].
An object \((G,1)\) in \(\ell\mathsf{A}_{1}\) is a unital simplicial group exactly when its unit-interval MV-algebra \(\Gamma(G,1)\) is finite. An object \((G,1)\) is _locally simplicial_ if each sublattice subgroup generated by finitely many elements along with \(1\) is a unital simplicial group. An object \((G,1)\) in \(\ell\mathsf{A}_{1}\) is locally simplicial exactly when its unit-interval MV-algebra \(\Gamma(G,1)\) is locally finite. Then: _An object \((G,1)\) of \(\ell\mathsf{A}_{1}\) is separable just when it is locally simplicial, and \(\mathrm{P}(G,1)\) has finite \((\mathbb{Z}\)-module) rank6_(Corollary 10.3).
Footnote 6: In the literature on lattice-groups, the condition that \(\mathrm{P}(G,1)\) has finite rank is expressed in the following traditional manner: the unit of \(G\) has finitely many components.
Write \(\mathsf{LS}_{1}\) for the full subcategory of \(\ell\mathsf{A}_{1}\) on the locally simplicial objects. _The inclusion functor_\(\mathsf{LS}_{1}\to\ell\mathsf{A}_{1}\)_has a right adjoint_\(\mathrm{R}\colon\ell\mathsf{A}_{1}\to\mathsf{LS}_{1}\) (Corollary 10.6); that is, every \((G,1)\) has an inclusion-largest locally simplicial unital sublattice subgroup. To prove this in the category \(\ell\mathsf{A}_{1}\) one would introduce the notion of element of 'finite-order rank' of a unital Abelian \(\ell\)-group. It is this notion that motivates the terminology we adopted in the context of MV-algebras in Section 10; by way of conclusion of this appendix, we offer a short discussion.
Let \((G,1)\) be a unital Abelian \(\ell\)-group, let \(g\in G\), and let \(H\) be the sublattice subgroup of \(G\) generated by \(g\) and by \(1\). If \((H,1)\) is a unital simplicial group \((\mathbb{Z}^{r},1)\)--equivalently, if the MV-algebra \(\Gamma(H,1)\) is finite--then we call \(g\) an element of _finite order-rank_\(r\). This notion of rank crucially depends on the interplay between the lattice and the group structure, and is not reducible to the linear notion of rank. To explain why, let us preliminarily observe that a simplicial group \(\mathbb{Z}^{r}\) enjoys the finiteness property that its positive cone \((\mathbb{Z}^{r})^{+}\)--that is, the monoid of non-negative elements of \(\mathbb{Z}^{r}\)--is finitely generated as a monoid. Next, let us point out that the underlying group of the Abelian \(\ell\)-group \(H\) generated by \(g\) and \(1\) in \(G\) is necessarily free: indeed, any finitely generated object of \(\ell\mathsf{A}\) has free underlying group, as was proved in [17]. The \(\mathbb{Z}\)-module rank of \(H\) is at most countably infinite, because \(H\) is countable. But even if we assume the rank of \(H\) is finite, the unit-interval \(\Gamma(H,1)\) may be infinite, and in that case the lattice order of \(\mathbb{Z}^{r}\cong H\) cannot be simplicial--and indeed, one can prove that the monoid \(H^{+}\) cannot be finitely generated. Hence, the condition that the sublattice subgroup \(H\) of \(G\) generated by \(g\) and \(1\) is simplicial is strictly stronger than the condition that \(H\) has finite \(\mathbb{Z}\)-module rank. To illustrate, consider the subgroup \(H\) of \(\mathbb{R}\) generated by an irrational number \(\rho\in\mathbb{R}\) together with \(1\); then \(H\cong\mathbb{Z}^{2}\) as groups, the total order inherited by \(\mathbb{Z}^{2}\) from \(\mathbb{R}\) is palpably not simplicial, the positive cone \(H^{+}\) can be shown not to be finitely generated by an easy direct argument, and \(\Gamma(H,1)\) is an infinite simple MV-algebra.
|
2310.00666 | Quintessence scalar field model in Weyl-type $f(Q,T)$ Gravity with
$w_D-w'_D$ analysis | In the present study, we explore the dynamical characteristics of the
quintessence cosmological model in Weyl-type $f(Q,T)$ gravity. Here, $T$
represents the trace of the matter energy-momentum tensor, and $Q$ symbolizes
the nonmetricity tensor. We propose a solution to the field equation using the
specific parametrization in the form $H(z) = H_{0} (1+z)^{1+\alpha+\beta}
e^{\left(\frac{- \beta z}{1+z}\right)}$, which depicts the necessary transition
of cosmos from decelerating era to the current accelerating scenario. The
values of model parameters are estimated as $H_0 = 71.17\pm 0.25 $, $\alpha =
-0.663\pm0.030$, and $\beta = 1.488\pm0.087$ using the MCMC analysis and
limiting the model with a joint dataset of Pantheon, BAO, and OHD. We discuss
the cosmic behavior of many features of the derived model like EoS parameters,
energy density, and cosmic pressure. Further, we have also explored the
cosmological behavior of the quintessence model in Weyl $f(Q,T)$gravity. We
have described the cosmic behavior of the model by $\omega_D-\omega_D'$
analysis. The diagnosis of the model is also performed using state finders and
jerk parameters. In the end, we have discussed the energy conditions for the
proposed model. Our analysis shows that the suggested model is well consistent
with the recent findings. | Vinod Kumar Bhardwaj, Priyanka Garg | 2023-10-01T13:26:17Z | http://arxiv.org/abs/2310.00666v2 | ###### Abstract
In the present study, we explore the dynamical characteristics of the quintessence cosmological model in Weyl-type \(f(Q,T)\) gravity. Here, \(T\) represents the trace of the matter energy-momentum tensor, and \(Q\) symbolizes the nonmetricity tensor. We propose a solution to the field equation using the specific parametrization in the form \(H(z)=H_{0}(1+z)^{1+\alpha+\beta}e^{\left(\frac{-\beta z}{1+z}\right)}\), which depicts the necessary transition of the cosmos from a decelerating era to the current accelerating scenario. The values of model parameters are estimated as \(H_{0}=71.17\pm 0.25\), \(\alpha=-0.663\pm 0.030\), and \(\beta=1.488\pm 0.087\) using the MCMC analysis and limiting the model with a joint dataset of Pantheon, BAO, and OHD. We discuss the cosmic behavior of many features of the derived model like EoS parameters, energy density, and cosmic pressure. Further, we have also explored the cosmological behavior of the quintessence model in Weyl \(f(Q,T)\) gravity. We have described the cosmic behavior of the model by \(\omega_{D}-\omega^{\prime}_{D}\) analysis. The diagnosis of the model is also performed using state finders and jerk parameters. In the end, we have discussed the energy conditions for the proposed model. Our analysis shows that the suggested model is well consistent with the recent findings.
**Quintessence scalar field model in Weyl-type \(f(Q,T)\) Gravity with \(w_{D}-w^{\prime}_{D}\) analysis**
Vinod Kumar Bhardwaj\({}^{1}\), Priyanka Garg\({}^{2}\)
Department of Mathematics, GLA University, Mathura-281 406,
Uttar Pradesh, India
\({}^{1}\)E-mail:[email protected]
\({}^{2}\)E-mail:[email protected]
**Keywords** : Weyl-type \(f(Q,T)\) gravity, Observational constraints, State finders, Quintessence DE model, \(\omega_{D}-\omega^{\prime}_{D}\) analysis.
PACS: 98.80.-k
## 1 Introduction
In the early 20th century, astronomers such as Edwin Hubble proposed that the universe is expanding, after observing that galaxies are moving apart from one another. In the late 1990s, however, supernova observations added an unexpected twist: the universe is not merely expanding, its expansion is accelerating. Numerous cosmological observations have since confirmed this finding [1, 2, 3, 4, 5, 6, 7]. To explain this accelerated expansion, scientists proposed the existence of a mysterious form of energy called dark energy (DE). DE is postulated to have negative pressure and is associated with the vacuum energy of space itself [8, 9]. This negative pressure produces the repulsive gravitational effect that drives the universe's accelerated expansion [10, 11]. Current estimates suggest that dark energy constitutes about 68% of the total energy content of the universe, making it the dominant contribution to the cosmic energy budget, alongside about 27% of dark matter,
and the rest in the form of normal matter (atoms).
To explain the cosmic acceleration, a term known as the cosmological constant has been introduced into Einstein's equations; it causes space to repel itself and thereby drives the observed accelerated expansion. However, this approach faces two major challenges: the fine-tuning and cosmic-coincidence problems. To address these challenges, alternative theories of gravitation have been explored. Modified theories of gravity alter the gravitational part of Einstein's equations in various ways, and such modifications can potentially explain the accelerated expansion by changing the fundamental laws of gravity rather than invoking a new form of energy. Several theories have been developed through modifications of the Lagrangian and curvature of Einstein's equations in general relativity [12, 13, 14, 15, 16, 17, 18, 19, 20]. Among these modified theories, \(f(R)\) gravity is one of the most natural alternatives to GR, although some \(f(R)\) models face challenges in passing certain observational tests [21, 22, 23, 24, 25, 26, 27]. The \(f(R,T)\) theory is a further extension of \(f(R)\) gravity that can explain the accelerated expansion of the universe at late times [28, 29, 30, 31, 32].
"\(f(Q,T)\) gravity" is an extension of \(f(Q)\) gravity within the field of theoretical physics and cosmology that extends and modifies Einstein's General Theory of Relativity (GR). In General Relativity, the gravitational interaction between masses is described by the curvature of space-time caused by the distribution of matter and energy. In \(f(Q,T)\) theory the energy-momentum tensor describes the distribution of matter and energy in space-time, and the quadratic scalar is a mathematical term derived from the geometry of the space-time manifold. By allowing for a wider range of functions and interactions between these two terms, \(f(Q,T)\) gravity aims to provide a more comprehensive and flexible description of gravitational phenomena. In some \(f(Q,T)\) gravity models, modifications to the gravitational field equations can lead to predictions that mimic the effects of dark matter and dark energy, thus providing alternative explanations for the observed cosmic acceleration and galactic rotation curves. Researchers and cosmologists are actively exploring various \(f(Q,T)\) gravity models to better understand their implications for cosmology, astrophysics, and fundamental physics [34, 35, 36]. In the study of the expansion of the current universe, Bhattacharjee et al. [37], and Arora et al. [38] have also focused on the framework of \(f(Q,T)\) gravity. This approach allows researchers to explore modifications to the standard General Relativity equations and their implications for the evolution of the cosmos. Zia et. Al. [39] discusses the general form of the transient behavior of \(f(Q,T)\) gravity. This likely includes a study of how this modified gravity theory behaves over time, especially in comparison to standard General Relativity. The reference [40] appears to focus on explaining the parameters of a linear case model within the context of \(f(Q,T)\) gravity. This kind of study helps establish the relationships between the theory's parameters and its predictions.
Weyl-type \(f(Q,T)\) gravity is a fascinating extension of gravitational theory that introduces a novel approach to understanding the fundamental forces governing the universe. In this framework, the traditional Einstein-Hilbert action is modified by incorporating a function of the Weyl tensor (\(Q\)) and the trace of the energy-momentum tensor (\(T\)). This modification allows for a richer description of gravity and space-time geometry. In gravitational physics, the Weyl tensor represents the traceless part of the Riemann curvature tensor. It characterizes the gravitational tidal forces and describes the intrinsic geometry of space-time. The energy-momentum tensor encodes information about the distribution of matter and energy in space-time. Weyl-type \(f(Q,T)\) gravity is a rapidly evolving field of research. Scientists Yixin et al. [41], Yang et
al. [42], Gadbail et al. [43] are actively investigating various models, cosmological implications, and observational tests to validate or refine this framework. In the context of Weyl-type \(f(Q,T)\) gravity, reference [44] discusses the impact of viscosity on cosmological evolution, i.e., how the presence of viscosity affects the dynamics of the universe within this modified gravity theory. Gadbail et al. [45] have used a parametrization of the deceleration parameter to explore the cosmological dynamics of the universe in Weyl-type \(f(Q,T)\) gravity.
In the present study, we explore the cosmological dynamics of the universe in Weyl-type \(f(Q,T)\) gravity by assuming the parametrization \(H(z)=H_{0}(1+z)^{1+\alpha+\beta}exp\left(\frac{-\beta z}{1+z}\right)\). The paper is organized as follows. In Section 2, we present the field equations of Weyl-type \(f(Q,T)\) gravity. Using a parametrization of the deceleration parameter, a solution of the field equations is proposed in Section 3. In Section 4, we discuss the observational constraints obtained with Hubble data points in the range \(0\leq z\leq 2.36\). Some features of the proposed model are explained in Section 5. The viability of the derived model is examined through the \(w_{D}-w^{\prime}_{D}\) analysis in Section 6. The quintessence field is discussed in Section 7. State-finders and the sound speed are examined in Sections 8 and 9. Classical linear and nonlinear energy conditions are analysed in Section 10. Finally, the conclusions are summarized in Section 11.
## 2 Field equation of Weyl type \(f(Q,T)\) gravity
In 1918, Weyl proposed a new geometry in which both the orientation and the magnitude of a vector change under parallel transport. The action of Weyl-type \(f(Q,T)\) gravity is formulated as [41]
\[S=\int\sqrt{-g}\left[-\frac{1}{4}W_{\mu\upsilon}W^{\mu\nu}-\frac{1}{2}M^{2}w_ {\mu}w^{\mu}+\kappa^{2}f(Q,T)+\left(R+6\nabla_{a}w^{\alpha}-6w_{\alpha}w^{ \alpha}\right)\lambda+\mathcal{L}_{m}\right]d^{4}x, \tag{1}\]
with \(16\pi G\kappa^{2}=1\). The nonmetricity \(Q\), an essential ingredient of the theory, is defined by
\[Q\equiv-g^{\mu\upsilon}\left(L^{\alpha}_{\beta\mu}L^{\beta}_{\nu\alpha}-L^{ \alpha}_{\beta\alpha}L^{\beta}_{\mu\nu}\right) \tag{2}\]
here \(L^{\lambda}_{\mu\nu}\) is defined as
\[L^{\lambda}_{\mu\nu}=\frac{-g^{\lambda\gamma}}{2}\left(Q_{\mu\gamma\nu}+Q_{ \nu\gamma\mu}-Q_{\gamma\mu\nu}\right). \tag{3}\]
In Riemannian geometry the covariant derivative of the metric vanishes; in Weyl geometry, by contrast, the nonmetricity tensor is defined as
\[-\widetilde{\Gamma}^{\rho}_{\alpha\mu}g_{\rho\nu}-\widetilde{\Gamma}^{\rho}_ {\alpha\nu}g_{\rho\mu}+\partial_{\alpha}g_{\mu\nu}=\tilde{\nabla}_{\alpha}g_ {\mu\nu}\equiv Q_{\alpha\mu\nu}. \tag{4}\]
From Eqs. (3) and (4), we find the following relation
\[Q=-6\omega^{2}. \tag{5}\]
The generalized equation by variation on Eq.(1) is
\[-\omega_{\mu}\left(M^{2}+12\kappa^{2}f_{Q}+12\lambda\right)+\nabla^{\nu}W_{ \mu\nu}=6\lambda\nabla_{\mu}. \tag{6}\]
The effective mass of the vector field is
\[M^{2}_{eff}=M^{2}+12\kappa^{2}f_{Q}+12\lambda. \tag{7}\]
Variation of the action (1) with respect to the metric tensor yields
\[\left(\frac{T_{\mu\nu}+S_{\mu\nu}}{2}\right)-\kappa^{2}f_{T}\left(T_{\mu\nu}+ \Theta_{\mu\nu}\right)=-\frac{\kappa^{2}}{2}g_{\mu\nu}f-6k^{2}f_{Q}\omega_{\mu }\omega_{\nu}+\left(-6\omega_{\mu}\omega_{\nu}+R_{\mu\nu}+3g_{\mu\nu}\nabla_{ \rho}\omega^{\rho}\right)\lambda\]
\[+g_{\mu\nu}\Box\lambda-6\omega_{(\mu}\nabla_{v)}\lambda+3g_{\mu\nu}\omega^{ \rho}\nabla_{\rho}\lambda-\nabla_{\nu}\lambda\nabla_{\mu}. \tag{8}\]
here, \(f_{Q}\) and \(f_{T}\) are the partial derivatives of \(f(Q,T)\) with respect to \(Q\) and \(T\), respectively. \(T_{\mu\nu}\) and \(\Theta_{\mu\nu}\) are defined as
\[T_{\mu\nu}\equiv-\frac{2}{\sqrt{-g}}\frac{\delta\left(\sqrt{-g}L_{m}\right)}{ \delta g^{\mu\nu}}. \tag{9}\]
\[\Theta_{\mu\nu}=g^{\alpha\beta}\frac{\delta T_{\alpha\beta}}{\delta g_{\mu \nu}}=g_{\mu\nu}L_{m}-2T_{\mu\nu}-2g^{\alpha\beta}\frac{\delta^{2}L_{m}}{ \delta g^{\mu\nu}\delta g^{\alpha\beta}}. \tag{10}\]
Here, the rescaled energy-momentum tensor \(S_{\mu\nu}\) is given by
\[S_{\mu\nu}=-\frac{g_{\mu\nu}}{4}\omega_{\rho\sigma}W^{\rho\sigma}+W_{\mu\rho} W_{v}^{\rho}-\frac{M^{2}}{2}g_{\mu\nu}\omega_{\rho}\omega^{\rho}+M^{2}\omega_{ \mu}\omega_{\nu}, \tag{11}\]
and
\[W_{\mu\nu}=\nabla_{\nu}w_{\mu}-\nabla_{\mu}\omega_{\nu}. \tag{12}\]
In Weyl-type \(f(Q,T)\) gravity, the divergence of the energy-momentum tensor takes the form
\[\nabla^{\mu}T_{\mu\nu}=\frac{\kappa^{2}}{1+2\kappa^{2}f_{T}}\left[-f_{T} \nabla_{\nu}T-2T_{\mu\nu}\nabla^{\mu}f_{T}+2\nabla_{\nu}\left(f_{T}{\cal L}_{ m}\right)\right]. \tag{13}\]
As a result, the matter energy-momentum tensor is not conserved: it does not satisfy the usual covariant conservation law. This non-conservation is indicated by the non-zero right-hand side of Eq. (13). Physically, it implies the existence of an additional force acting on massive test particles, which affects their motion and renders it non-geodesic. Geodesic motion refers to the path that a free particle would naturally follow in the absence of any external forces [46].
## 3 Cosmological model with deceleration parameter
For the purpose of modeling, we consider a spatially flat FLRW metric.
\[ds^{2}=\delta_{ij}dx^{i}dx^{j}a^{2}(t)-dt^{2}, \tag{14}\]
here \(a(t)\) represents the scale factor. Because of spatial symmetry, the vector field \(w_{\mu}\) can be assumed to take the form \(w_{\mu}=[\psi(t),0,0,0]\). Using this we get,
\[Q=-6\omega^{2}=6\psi^{2}(t),and\ \ \omega^{2}=\omega_{\mu}\omega^{\mu}=-\psi^{2}(t)\]
The energy momentum tensor for the perfect fluid is given by:
\[T_{\mu\nu}=(\rho+p)u_{\mu}u_{\nu}+pg_{\mu\nu}. \tag{15}\]
here \(\rho\) and \(p\) are the energy density and pressure, respectively, and \(u^{\mu}\) is the four-velocity vector satisfying \(u_{\mu}u^{\mu}=-1\). So we have
\[\Theta_{\nu}^{\mu}=\delta_{\nu}^{\mu}p-2T_{\nu}^{\mu}={\rm diag}(2\rho+p,-p,-p,-p),and\ \ T_{\nu}^{\mu}={\rm diag}(-\rho,p,p,p).\]
In the cosmological setting, the generalized Proca equation and the flat-space constraint read:
\[\dot{\psi}=\psi^{2}-3H\psi+\dot{H}+2H^{2}, \tag{16}\]
\[\dot{\lambda}=\left(-\frac{1}{6}M^{2}-2\kappa^{2}f_{Q}-2\lambda\right)\psi=- \frac{1}{6}M_{eff}^{2}\psi, \tag{17}\]
\[\partial_{i}\lambda=0. \tag{18}\]
Using Eqs. (8) and (15), the generalized Friedmann equations are obtained as
\[\kappa^{2}f_{T}(\rho+p)+\frac{1}{2}\rho=\frac{\kappa^{2}}{2}f-\psi^{2}\left(6 \kappa^{2}f_{Q}+\frac{1}{4}M^{2}\right)-\left(\psi^{2}-H^{2}\right)3\lambda-( \psi-H)3\dot{\lambda}. \tag{19}\]
\[-\frac{1}{2}p=\left(2\dot{H}+3H^{2}+3\psi^{2}\right)\lambda+\frac{\kappa^{2}f} {2}+\frac{M^{2}}{4}\psi^{2}+\ddot{\lambda}+\dot{\lambda}(2H+3\psi). \tag{20}\]
Eliminating the derivatives of \(\lambda\) from Eqs. (17), (18), (19), and (20), we get the following two expressions.
\[\frac{1}{2}\left(1+2\kappa^{2}f_{T}\right)\rho+\kappa^{2}f_{T}p=\frac{\kappa^ {2}}{2}f+\frac{m^{2}\psi^{2}}{4}+3\lambda\left(H^{2}+\psi^{2}\right)-\frac{1} {2}m_{eff}^{2}H\psi, \tag{21}\]
\[\frac{1}{2}(p+\rho)\left(1+2\kappa^{2}f_{T}\right)=\frac{1}{6}\left(\dot{\psi }-\psi H+\psi^{2}\right)m_{eff}^{2}+2\kappa^{2}f_{Q}\psi-2\dot{H}\lambda. \tag{22}\]
where \(f_{Q}\) & \(f_{T}\) are the partial derivatives with respect to \(Q\) and \(T\), and a dot (.) indicates a time derivative. In the present study, we assume \(f(Q,T)=\delta Q+\frac{\gamma}{6\kappa^{2}}T\); here, \(\delta\) and \(\gamma\) are the parameters. \(M^{2}=\frac{m^{2}}{\kappa^{2}}\) represents the Weyl field's mass and \(\kappa^{2}\) denotes the strength of coupling between matter and the Weyl geometry. For this case, we assume \(M=0.95\) [41]. It is important to note that for \(\gamma=0\) and \(\delta=-1\), Weyl-type \(f(Q,T)\) reduces to \(-Q\), a successful reduction to GR. On the other hand, for \(T=0\), it turns into \(f(Q)=\delta Q\), which is equivalent to GR and is consistent with cosmological assessments and observational findings. Additionally, by utilizing the relation \(\tilde{\nabla}_{\lambda}g_{\mu\nu}=-w_{\lambda}g_{\mu\nu}\) we obtain \(\psi(t)=-6H(t)\).
Now, the following expressions of the pressure and energy density can be determined using Eqs. (21) and (22).
\[p=-\left(36\left(\frac{18}{\gamma+3}(\delta+1)+\frac{3M^{2}}{2(\gamma+3)} \right)+\frac{18}{2\gamma+3}\right)H^{2}-\frac{(18\gamma+36)\dot{H}}{(2\gamma+ 3)(\gamma+3)}, \tag{23}\]
and
\[\rho=\left(\frac{-(99\gamma+216)(24\delta+25)}{(4\gamma+8)(\gamma+3)}+\frac{2 9\gamma+72}{(4\gamma+6)(\gamma+2)}\right)H^{2}-\frac{9\gamma\dot{H}}{(4\gamma+ 6)(\gamma+3)}, \tag{24}\]
We obtain EoS parameter \(\omega_{eff}=\frac{p}{\rho}\) from equations (23) and (24),
\[\omega_{eff}=\frac{-\left(36\left(\frac{18}{\gamma+3}(\delta+1)+\frac{3M^{2}} {(2\gamma+3)}\right)+\frac{18}{2\gamma+3}\right)H^{2}-\frac{18(\gamma+2)\dot{H }}{(2\gamma+3)(\gamma+3)}}{\left(\frac{-9(11\gamma+24)(24\delta+25)}{(4\gamma +8)(\gamma+3)}+\frac{29\gamma+72}{2(2\gamma+3)(\gamma+2)}\right)H^{2}-\frac{9 \gamma\dot{H}}{2(2\gamma+3)(\gamma+3)}}. \tag{25}\]
### Parametric form of Deceleration Parameter
The deceleration parameter, \(q\), describes the rate of change of the expansion of the universe at a given point in its history, and it can be used to classify the evolution of the universe into different phases. \(q>0\) indicates that the expansion of the universe is decelerating, i.e., the rate of expansion is slowing down over time; this is typical for a universe dominated by matter and radiation. \(q=0\) signifies that the expansion is neither accelerating nor decelerating, and may correspond to a transitional phase between deceleration and acceleration. \(q<0\) indicates that the expansion of the universe is accelerating, with the rate of expansion increasing over time; this can be attributed to the presence of dark energy, a hypothetical form of energy with negative pressure that counteracts the gravitational attraction of matter. The parametric form of the deceleration parameter (DP) is considered as [47]
\[q=\beta z(1+z)^{-1}+\alpha \tag{26}\]
here, \(\alpha\) and \(\beta\) are constants. At \(z=0\), the deceleration parameter takes its present value \(q_{0}=\alpha\). Furthermore, for \(\alpha=1/2\) and \(\beta=0\), the deceleration parameter is constant and equal to \(\frac{1}{2}\), which corresponds to a matter-dominated universe. In terms of redshift \(z\), the Hubble parameter can be expressed via
\[H=-(z+1)^{-1}\frac{dz}{dt} \tag{27}\]
From Eqs. (26) and (27), we can explain the Hubble parameter as
\[H(z)=H_{0}(z+1)^{\alpha+\beta+1}exp\left(\frac{-\beta z}{1+z}\right) \tag{28}\]
Here, \(H_{0}\) denotes the current value of the Hubble parameter.
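For completeness, we record the elementary integration behind Eq. (28) (a routine computation from Eqs. (26) and (27); no new assumptions are involved). Since \(q=-1-\dot{H}/H^{2}\) and \(\dot{z}=-(1+z)H\),
\[1+q(z)=\frac{(1+z)}{H}\frac{dH}{dz},\qquad\text{so}\qquad\int\frac{dH}{H}=\int\left[\frac{1+\alpha+\beta}{1+z}-\frac{\beta}{(1+z)^{2}}\right]dz,\]
which, with \(H(0)=H_{0}\), integrates to Eq. (28). The same relations give
\[\dot{H}(z)=-H^{2}(z)\left[(1+\alpha+\beta)-\frac{\beta}{1+z}\right],\]
which is what is needed to evaluate Eqs. (23)-(25) as functions of redshift.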
## 4 Observational data analysis
In this section, we apply the Markov Chain Monte Carlo (MCMC) technique to find the best-fit values of the model parameters. Here, \(\zeta_{th}\) signifies the theoretically predicted value of an observable, while \(\zeta_{ob}\) stands for its observed value. The \(\chi^{2}\) estimation function is examined as follows:
\[\chi_{\zeta}^{2}\left(P\right)=\sum_{i=1}^{N}\frac{\left(\zeta_{th,i}(P)-\zeta_{ob,i}\right)^{2}}{\sigma_{\zeta,i}^{2}}. \tag{29}\]
In this context, \(P\) signifies the parameters of the model, while \(\sigma_{\zeta}\) denotes the standard error associated with the measurement of a physical quantity. Here, the parameter vector is denoted by \(P=(H_{0},\alpha,\beta)\). By minimizing the estimation function \(\chi^{2}\), the most probable values of the parameters can be determined. For this purpose, we utilized the observational Hubble data (OHD) consisting of 57 points within the range \(0.07\leq z\leq 2.36\) [48, 49], the Pantheon dataset of 1048 Type Ia Supernovae (SN Ia) within the redshift range \(0.01\leq z\leq 2.26\) [50], and an observational dataset related to baryon acoustic oscillations (BAO) [51, 52, 53, 54, 55]. The two-dimensional confidence contours at \(1\sigma\) (\(68\%\)) and \(2\sigma\) (\(95\%\)) confidence for the given model are shown in Figure 1. The best-estimated values of the parameters for the derived model are tabulated in Table 1.
For the combined observed data set, the estimator \(\chi^{2}_{total}\) can be expressed as
\[\chi^{2}_{total}=\chi^{2}_{OHD}+\chi^{2}_{Pantheon}+\chi^{2}_{BAO} \tag{30}\]
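Before turning to the results, the numerical pipeline can be sketched in a few lines of Python using the open-source 'emcee' sampler mentioned in Section 11. This is a minimal, self-contained illustration rather than the analysis code actually used: the three data points below are hypothetical placeholders for the compilations described above, and only an OHD-type \(\chi^{2}\) of the form (29) is implemented (Pantheon and BAO likelihoods would be added analogously to build Eq. (30)).

```python
# Minimal sketch of the chi^2 / MCMC pipeline, assuming the H(z) of Eq. (28).
# The data arrays are HYPOTHETICAL placeholders, not the real compilations.
import numpy as np
import emcee

# Hypothetical H(z) measurements: (z, H_obs [km/s/Mpc], sigma_H)
z_obs = np.array([0.07, 0.48, 1.30])
H_obs = np.array([69.0, 97.0, 168.0])
sig_H = np.array([19.6, 62.0, 17.0])

def H_model(z, H0, alpha, beta):
    """Eq. (28): H(z) = H0 (1+z)^(1+alpha+beta) exp(-beta z / (1+z))."""
    return H0 * (1.0 + z) ** (1.0 + alpha + beta) * np.exp(-beta * z / (1.0 + z))

def log_prob(theta):
    """Flat box priors on (H0, alpha, beta) plus the chi^2 of Eq. (29)."""
    H0, alpha, beta = theta
    if not (50.0 < H0 < 90.0 and -2.0 < alpha < 1.0 and -1.0 < beta < 4.0):
        return -np.inf
    chi2 = np.sum((H_model(z_obs, H0, alpha, beta) - H_obs) ** 2 / sig_H ** 2)
    return -0.5 * chi2

ndim, nwalkers = 3, 32
p0 = np.array([71.0, -0.66, 1.49]) + 1e-3 * np.random.randn(nwalkers, ndim)
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob)
sampler.run_mcmc(p0, 5000, progress=False)
flat = sampler.get_chain(discard=1000, thin=10, flat=True)
print("median (H0, alpha, beta):", np.median(flat, axis=0))
```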
For the different combinations of the BAO, OHD, and Pantheon observations, the constraints obtained on \(H_{0}\) and the other model parameters are in good agreement with recent findings [56, 57, 58, 59, 60, 61, 62].
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline Parameters & \(H_{0}\) & \(\alpha\) & \(\beta\) & \(z_{t}\) & \(q_{0}\) \\ \hline BAO+OHD57+Pantheon & \(71.17\pm 0.25\) & \(-0.663\pm 0.030\) & \(1.488\pm 0.087\) & \(0.804^{+0.175}_{-0.132}\) & \(-0.663\pm 0.030\) \\ \hline BAO+OHD57 & \(70.0^{+1.4}_{-1.2}\) & \(-0.638\pm 0.076\) & \(1.33\pm 0.16\) & \(0.922^{+0.644}_{-0.316}\) & \(-0.638\pm 0.076\) \\ \hline Pantheon & \(71.14^{+0.32}_{-0.28}\) & \(-0.676^{+0.060}_{-0.071}\) & \(1.77^{+0.37}_{-0.31}\) & \(0.618^{+0.429}_{-0.214}\) & \(-0.676^{+0.060}_{-0.071}\) \\ \hline OHD57 & \(68.60\pm 1.6\) & \(-0.571\pm 0.092\) & \(1.34^{+0.19}_{-0.17}\) & \(0.752^{+0.574}_{-0.290}\) & \(-0.571\pm 0.092\) \\ \hline \end{tabular}
\end{table}
Table 1: The values obtained of parameters for different observational dataset
Figure 1: Confidence levels from the combination of OHD+Pantheon+BAO for model parameters.
## 5 Features of the Model
As shown in Figure 2(a), the energy density of the Weyl-type \(f(Q,T)\) model is positive for all observational datasets. The behavior of the cosmic pressure for the various observational datasets is demonstrated in Figure 2(b): the pressure is negative throughout the whole evolution of the universe. This negative behavior accounts for the present accelerated expansion of the universe. The figure also shows that the energy density diminishes as the universe develops and the volume of space increases.
The equation of state parameter is a crucial parameter in cosmology and astrophysics that describes the relationship between the pressure and the energy density of a substance, such as matter or dark energy, in the universe. It is used to characterize the nature of the substance and its effect on the expansion of the universe. The value of the equation of state parameter can vary for different substances: For non-relativistic matter, the pressure is negligible compared to the energy density, so \(\omega\) is close to 0. For relativistic particles, such as photons and neutrinos, the pressure is significant compared to the energy density, leading to \(\omega=\frac{1}{3}\). This is due to
Figure 3: Plot of EoS parameter vs redshift \(z\).
Figure 2: (a) Energy density curve against \(z\), (b) Plot of the cosmic pressure.
the relativistic equation of state. The equation of state parameter for a cosmological constant is \(\omega=-1\), which means that its pressure is negative and equal in magnitude to its energy density. This negative pressure is responsible for the accelerated expansion of the universe. Some theories propose that dark energy might not be a cosmological constant but instead a dynamic field evolving over time. In these cases, the EoS parameter can vary and differ from -1: it could be greater than -1 (quintessence-like behavior) or less than -1 (phantom-like behavior, with even more rapid expansion). From Figure 3, we observe that the derived model initially lies in the quintessence era and advances to a Chaplygin-gas scenario at late times [56, 61].
## 6 \(w_{D}-w^{\prime}_{D}\) Analysis
Caldwell and Linder [63] conducted a study to investigate the changing dynamics of quintessence models of dark energy in the phase plane spanned by \(w\) and its derivative \(w^{\prime}\), where \(w^{\prime}\) is the derivative of \(w\) with respect to the logarithm of the scale factor \(a\), i.e. \(w^{\prime}=\frac{dw}{d\ln(a)}\). They demonstrated that these models occupy two distinct regions in the phase plane, known as the 'thawing' (\(w_{D}<0,w^{\prime}_{D}>0\)) and 'freezing' (\(w_{D}<0,w^{\prime}_{D}<0\)) regions, with quite different behavior in the \(w_{D}-w^{\prime}_{D}\) plane [64, 65, 66].
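Since the trajectories are plotted against redshift, it is convenient to record the standard conversion (with \(a_{0}=1\), so that \(1+z=1/a\)):
\[w^{\prime}_{D}=\frac{dw_{D}}{d\ln a}=-(1+z)\frac{dw_{D}}{dz}.\]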
We have plotted \(w^{\prime}_{D}\) against \(w_{D}\) for all the observational values in Figure 4 to construct the \(w_{D}-w^{\prime}_{D}\) plane. The curves in the plot exhibit both freezing and thawing regions for
Figure 4: Plots of \(w_{D}-w^{\prime}_{D}\) for Weyl-type \(f(Q,T)\) gravity vs \(z\).
different observational datasets. We observe that our model's trajectories primarily lie in the freezing region, as supported by the observational data, indicating a more accelerated expansion of the universe in this region.
## 7 Quintessence field in Weyl-type of \(f(Q,T)\) gravity
Quintessence is a theoretical concept in cosmology that refers to a hypothetical scalar field responsible for dark energy. It is considered one of the leading explanations for the accelerated expansion of the universe. The term "quintessence" comes from ancient cosmology, where it described the fifth element completing the classical four elements of earth, water, air, and fire. In modern cosmology, quintessence is associated with a scalar field that has a positive energy density and negative pressure, causing it to have repulsive gravitational effects. This negative pressure drives the expansion of the universe at an accelerating rate, counteracting the attractive gravitational force of matter and radiation; the presence of quintessence would thus help explain the observed phenomenon of DE. The dynamics of the quintessence field depend on its potential energy function: different potentials result in different behaviors of the field over cosmic time. The scalar potential is a basic notion in physics which assigns to each point in space a value describing the energy associated with a scalar field; its properties depend on the particular physical system under examination. Since negative energy can lead to unphysical solutions, a positive scalar potential is often associated with stable configurations, while negative scalar potentials arise in specific frameworks such as certain dark-energy or inflationary models. The behavior of a scalar potential is ultimately determined by the underlying physics and the values of the relevant parameters.
The action for the quintessence field is given by [67]
\[S=\int\sqrt{-g}d^{4}x\left[-\frac{1}{2}g^{ij}\partial_{i}\phi\partial_{j}\phi -V(\phi)\right], \tag{31}\]
where \(g\) is the determinant of the metric \(g^{ij}\) in Eq. (31), and \(V(\phi)\) is the potential for the quintessence field \(\phi\). Varying the action with respect to the metric and \(\phi\), the energy density and pressure of the quintessence scalar field are obtained as:
\[\rho_{\phi}=\frac{\dot{\phi}^{2}}{2}+V(\phi),p_{\phi}=\frac{\dot{\phi}^{2}}{2} -V(\phi). \tag{32}\]
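Inverting Eq. (32) gives the standard reconstruction underlying Figure 5 (a sketch of the procedure; here we write \(w_{\phi}=p_{\phi}/\rho_{\phi}\), and identifying \(\rho_{\phi}\), \(p_{\phi}\) with the effective density and pressure of Eqs. (23)-(24) is an assumption about the mapping, not a statement from the text):
\[\dot{\phi}^{2}=(1+w_{\phi})\rho_{\phi},\qquad V(\phi)=\frac{(1-w_{\phi})\rho_{\phi}}{2},\qquad\frac{d\phi}{dz}=-\frac{\dot{\phi}}{(1+z)H}.\]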
Figures 5(a) and 5(b) illustrate the changes in the potential \(V(\phi)\) and in the scalar field \(\phi\), respectively, within the quintessence model as the redshift \(z\) varies. The plots were generated using the estimated values of the parameters. For these suitable choices of parameters, the field gets trapped in a local minimum because the kinetic energy during a scaling regime is small. The field then enters a regime of damped oscillations leading to an accelerating universe [61, 67, 68].
## 8 statefinders
The statefinder is a cosmological diagnostic tool used in astrophysics and cosmology to distinguish between different dark energy (DE) models. It has been computed for various existing models of dark energy and allows one to discriminate between different forms of dark energy in the cosmological \((r,s)\) plane. This plane contains distinct, well-known regions: the pair (\(s>0\) and \(r<1\)) corresponds to the quintessence DE era, \((r,s)=(1,1)\) represents the CDM limit, \((r,s)=(1,0)\) signifies the \(\Lambda\)CDM limit, and (\(s<0\) and \(r>1\)) indicates the Chaplygin-gas region. Using this diagnostic tool, one can assess how closely a dark energy model resembles \(\Lambda\)CDM dynamics [69, 70]. Alam et al. [71] have defined the statefinders (\(r\), \(s\)) as follows
\[r=\frac{\dddot{a}}{aH^{3}},s=\frac{r-1}{3\left(q-\frac{1}{2}\right)} \tag{33}\]
where \(a\) is the scale factor of the universe, \(H\) is the Hubble parameter, and overdots represent derivatives with respect to cosmic time. The statefinders thus depend only on the scale factor and its time derivatives. They can also be written as
\[r=2q^{2}+q-\frac{\dot{q}}{H}\quad,s=\frac{2}{3}(q+1)-\frac{\dot{q}}{3H\left(q -\frac{1}{2}\right)} \tag{34}\]
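For the parametrization (26), the time derivative in Eq. (34) can be traded for a redshift derivative via \(\dot{q}=-(1+z)H\,dq/dz\), giving the closed forms (a direct computation from Eqs. (26) and (34)):
\[\frac{\dot{q}}{H}=-\frac{\beta}{1+z},\qquad r(z)=2q^{2}(z)+q(z)+\frac{\beta}{1+z},\qquad s(z)=\frac{r(z)-1}{3\left(q(z)-\frac{1}{2}\right)}.\]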
Figure 5: (a) Plot of scalar potential vs \(z\) (b) Graph of scalar field \(\phi\) vs \(z\) for all four sets of best fitted values.
For various best-fitted values of free parameters for the different observational datasets, the trajectories depicting the evolutionary behavior of the model in \(r-s\) plane are shown in Figure 6. The figure shows that the model lies in the quintessence region (\(r<1,s>0\)). By using these Statefinder parameters, researchers can probe the dynamics of the universe and differentiate between different dark energy models more effectively [61, 67, 68].
## 9 Speed of sound
The sound speed must necessarily be less than the speed of light (\(c\)). Since we work in gravitational units with unit speed of light, the squared sound speed should lie within the range \(0\leq v_{s}^{2}\leq 1\) throughout cosmic time [73, 74, 75]. The formula for the square of the sound speed is:
\[v_{s}^{2}=\frac{dp}{d\rho} \tag{35}\]
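In practice, Eq. (35) is evaluated along the background evolution, so by the chain rule in redshift
\[v_{s}^{2}=\frac{dp/dz}{d\rho/dz},\]
with \(p(z)\), \(\rho(z)\) from Eqs. (23)-(24) and \(\dot{H}(z)\) as recorded in Section 3.1.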
Figure 6: Plots of \(r\) versus \(s\) for Weyl-type f(Q,T) theory.
To ensure the stability of the given theoretical model, \(v_{s}^{2}\) should lie within this range over the redshift interval considered. By examining Figure 7, we can see that the sound speed consistently lies within this specified range throughout the evolution of the universe.
## 10 Classical linear and nonlinear Energy Conditions
In the context of general relativity, energy conditions are used to explore the properties and behaviors of space-time, and they play a role in various theorems and conjectures related to the nature of gravity and the possibility of constructing "exotic" configurations of matter that might lead to violations of certain physical principles. There are several energy conditions, and they fall into two main categories: linear energy conditions and non-linear energy conditions. In classical physics, energy conditions are linear, meaning that they have straightforward requirements on the stress-energy tensor that ensure the conservation of energy and other fundamental physical properties. However, in certain cases, nonlinear energy conditions are considered to explore more exotic scenarios that could potentially violate some of the standard assumptions. These conditions are used in the study of general relativity and are closely related to the concept of exotic matter and the possibilities of faster-than-light travel and traversable wormholes.
The linear energy conditions (ECs) within the framework of GR are mathematically explained as (1) Weak Energy Condition (WEC): \(\rho+p\geq 0\), \(\rho\geq 0\), (2) Null Energy Condition (NEC): \(\rho+p\geq 0\), (3) Dominant Energy Condition (DEC): \(\rho-p\geq 0\), (4) Strong Energy Condition (SEC): \(\rho+3p\geq 0\)[76, 77].
In addition, the nonlinear energy conditions are: (1) the flux EC: \(\rho^{2}\geq p^{2}\); (2) the determinant EC: \(\rho\Pi p_{i}\geq 0\); (3) the trace-of-square EC: \(\rho^{2}+\sum p_{i}^{2}\geq 0\).
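These inequalities are simple to check numerically along the reconstructed background; the snippet below is a hedged sketch (the function name and the toy arrays are illustrative only, not the authors' pipeline), specialized to an isotropic perfect fluid with \(p_{i}=p\).

```python
# Small utility for checking the energy conditions listed above on sampled
# rho(z), p(z) arrays (which would come from Eqs. (23)-(24) at the best fit).
import numpy as np

def energy_conditions(rho, p):
    """Return a dict of boolean arrays, one entry per condition."""
    rho, p = np.asarray(rho), np.asarray(p)
    return {
        "NEC":   rho + p >= 0,
        "WEC":   (rho >= 0) & (rho + p >= 0),
        "DEC":   rho - p >= 0,
        "SEC":   rho + 3 * p >= 0,
        "flux":  rho**2 >= p**2,           # flux EC
        "det":   rho * p**3 >= 0,          # determinant EC (isotropic p_i = p)
        "trace": rho**2 + 3 * p**2 >= 0,   # trace-of-square EC
    }

# Toy accelerating fluid (rho > 0, p < -rho/3): the SEC check fails, as in Fig. 8.
rho = np.array([1.0, 0.8, 0.5])
p = np.array([-0.9, -0.7, -0.45])
for name, ok in energy_conditions(rho, p).items():
    print(name, ok.all())
```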
Figure 7: Sound speed trajectory vs \(z\).
Figure 8 illustrates the fulfilment of the energy conditions for the proposed model using the joint observational data from BAO, OHD, and Pantheon. The derived model satisfies the WEC, DEC, and NEC, but violates the SEC, as depicted in Figure 8. The violation of the SEC for the proposed model reflects the accelerated expansion of the cosmos and indicates the presence of exotic matter in the cosmos [76, 77, 78]. The term "flux" generally refers to the flow of some physical quantity through a surface: the stress-energy tensor describes the flow of energy, momentum, or stress across a particular area. In the case of the Null Energy Condition, the NEC requires that the flux of the stress-energy tensor's components should be
Figure 8: Plots of classical linear and non-linear Energy conditions.
non-negative for all null (light-like) vectors. This helps to ensure that the energy density along any null geodesic remains non-negative.
The trace of the stress-energy tensor is the sum of its diagonal components, which typically correspond to the energy density and pressures. In the context of the Strong Energy Condition, the trace condition states that the sum of the energy density and three times the pressure should be non-negative. Figures 8(d), 8(e), 8(f) show the non-linear energy conditions: both the flux EC and the trace-of-square EC are satisfied. Non-linear energy conditions are particularly important in investigations of exotic matter and space-time geometries that might allow for phenomena like faster-than-light travel, time travel, or other violations of classical energy conditions. They help to identify regions of space-time where such exotic behavior might occur or be ruled out.
## 11 Concluding remarks
This article focuses on examining a modified theory known as the Weyl \(f(Q,T)\) gravity, where the relationship between the metric and the Weyl vector plays an important role in determining the metric tensor's covariant divergence. Consequently, the geometrical properties of the theory are impacted by both the metric tensor and the Weyl vector. We have explored the universe's dynamics utilizing the parametric form of the deceleration parameter (DP) represented as \(q(z)=\alpha+\frac{\beta z}{1+z}\). To estimate the free parameters \(\beta\), \(\alpha\), and \(H_{0}\), we have utilized the latest experimental datasets of BAO, OHD, and Pantheon, applying the MCMC method. We have employed the open-source Python package 'emcee' for this purpose. Additionally, we have determined a few kinetic properties such as \(w_{D}-w^{\prime}_{D}\), quintessence, state-finders, sound speed, and energy conditions, which provide further insights into the behavior and implications of the model. The highlights of the model are given below, followed by a short numerical check of the \(q(z)\) parametrization.
* We have used three distinct sets of observational data: Baryon Acoustic Oscillation (BAO), Observational Hubble Data (OHD), and data from the Pantheon compilation. Implementing the MCMC statistical approach, we have estimated the free parameters \(\beta\), \(\alpha\), and \(H_{0}\). Taking the joint dataset into account, the confidence contours for the model parameters are plotted in Figure 1. The constrained values of the model parameters are tabulated in Table 1.
* The evolutionary behaviours of the energy density and cosmic pressure are represented in Figures 2(a) and 2(b), respectively. For all observational datasets, the energy density is positive while the cosmic pressure is negative throughout the evolution.
* Figure 3 shows the EoS parameter \(\omega\) for the three models of Weyl type \(f(Q,T)\) gravity, plotted for the observational values. During its evolution, the derived model initially lies in the quintessence era and advances to the Chaplygin gas scenario at late times.
* In Figure 4, we have described the \(w_{D}-w^{\prime}_{D}\) for all the observational values. It is found that the curve lies in both thawing and freezing regions.
* The nature of the scalar potential \(V(\phi)\) and scalar field \(\phi\) for Weyl type \(f(Q,T)\) gravity is depicted in Figure 5. We observe that the potential for the quintessence model is a decreasing function of \(z\), which gives rise to an accelerated expansion. The behavior of the state-finders is shown in Figure 6; the trajectories for the proposed model lie in the quintessence region (\(r<1,s>0\)). For the proposed model, the nature of the sound speed can provide insights into its stability and characteristics. The trajectories of the sound speed for different observational datasets are plotted in Figure 7.
* Figure 8 shows the linear and nonlinear energy conditions of the proposed model for the observational datasets. The violation of SEC for the derived model depicts an accelerated expansion of the cosmos. Figures 8(d), 8(e), 8(f) show the non-linear energy conditions; both the flux EC and the trace-of-square EC are satisfied.
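As noted above, the \(q(z)\) parametrization implies a closed-form Hubble function via \(dH/H=(1+q)\,dz/(1+z)\); the sketch below verifies this numerically. The \(\alpha\), \(\beta\), \(H_{0}\) values are placeholders, not the constrained values from Table 1.

```python
import numpy as np

# Hedged sketch: integrating dH/H = (1 + q(z)) dz / (1 + z) with
# q(z) = alpha + beta*z/(1+z) gives the closed form
#   H(z) = H0 * (1+z)**(1+alpha+beta) * exp(-beta*z/(1+z)).
# alpha, beta, H0 are placeholder values, not the MCMC-constrained ones.
alpha, beta, H0 = -0.8, 0.5, 67.0

def H(z):
    return H0 * (1 + z)**(1 + alpha + beta) * np.exp(-beta * z / (1 + z))

def q_from_H(z, dz=1e-6):
    dHdz = (H(z + dz) - H(z - dz)) / (2 * dz)   # numerical derivative
    return -1 + (1 + z) * dHdz / H(z)           # q = -1 + (1+z) H'/H

z = 1.3
print(q_from_H(z), alpha + beta * z / (1 + z))  # the two should agree
```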
In this manuscript, we have presented a comprehensive study of an accelerated expanding cosmological model in Weyl \(f(Q,T)\) gravity with the aid of a specific parametric approach.
|
2305.15963 | Electron energy spectrum of the spherical GaAs/Al$_x$Ga$_{1-x}$As
quantum dot with several impurities on the surface | The model of a spherical quantum dot with several donor impurities on its
surface is suggested. The electron energy spectra are studied as a function of
the quantum dot radius and the number of impurities. Several cases of the
location of impurities on the quantum dot surface are considered. The plane
wave functions method has been applied to calculate the electron energy
spectrum. The splitting of electron energy levels is analyzed in the cases of
different number of impurities. It is shown that the electron energy splitting
depends on both the number of impurities on the surface and on their location.
The electron binding energy is defined too. | R. Ya. Leshko, I. V. Bilynskyi, O. V. Leshko, V. B. Hols'kyi | 2023-05-25T12:01:23Z | http://arxiv.org/abs/2305.15963v1 | [
###### Abstract
The model of a spherical quantum dot with several donor impurities on its surface is suggested. The electron energy spectra are studied as a function of the quantum dot radius and the number of impurities. Several cases of the location of impurities on the quantum dot surface are considered. The plane wave functions method has been applied to calculate the electron energy spectrum. The splitting of electron energy levels is analyzed in the cases of different number of impurities. It is shown that the electron energy splitting depends on both the number of impurities on the surface and on their location. The electron binding energy is defined too.
energy spectrum, surface impurity, plane wave functions method
Footnote †: Corresponding author: [email protected].
## 1 Introduction
Recent developments in nanotechnologies have made it possible to produce zero-dimensional semiconductor structures such as quantum dots (QDs). These quantum systems were proposed as solar cells and solar concentrators [1, 2, 3], photodetectors [4, 5, 6], single QD transistors [7, 8], lasers [9, 10, 11], and light emitting diodes [12, 13]. Various methods are used to manufacture these devices based on QD systems. Examples are molecular beam epitaxy, metal organic chemical vapor deposition, lithography methods, and colloidal methods. Using the above mentioned and other methods does not ensure that QDs are not contaminated by impurities. Obviously, the presence of an impurity is undesirable. However, if there is an impurity, it can be located in the QD or on the QD surface. On the other hand, there are many devices where doped QDs are used. In this case, impurities are desirable. In both cases, the impurities are mostly on the QD surface. However, there exist extrinsic and intrinsic doping methods which provide the presence of an impurity not only on the QD surface but also in the QD. For example, a review of colloidal doping methods is presented in [14]. High quantum yield Cu doped CdSe quantum dots were studied in [15]. The authors detected two states of Cu oxidation (+1 and +2) for surface-doped quantum dots. The results revealed that the quantum dots doped with low concentrations were dominated by Cu\({}^{2+}\) ions, whereas the dots doped with high concentrations were dominated by Cu\({}^{1+}\) along with a smaller percentage of Cu\({}^{2+}\) ions [15].
Due to the intensive development of nanofabrication and doping methods, there appeared a lot of theoretical works (some of them had preceded the practical realization) where impurity states were calculated. Special attention is paid to hydrogenic impurities. There are many works regarding the hydrogenic impurities in the QD [16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 27, 31, 32]. Spherical [16, 17, 19, 20, 21, 22, 23, 24, 25, 27, 31, 32], cubic [33], parallelepiped-shaped [26], lens-shaped [28], and ellipsoidal [18, 29, 30] QDs with impurities were studied. In those works, donor [16, 17, 18, 19, 20, 21, 22, 23, 25, 26, 27, 32, 28, 30, 32, 29, 33] and acceptor [24, 27, 31] impurities were considered.
Electron and hole spectra, and linear and nonlinear optical properties [18, 19, 22, 28, 30, 32] were analyzed in electric [25, 26] and magnetic [21] fields with impurities located in the center and off-center of the QD. Most of those works are devoted to one impurity located in the QD center or outside. Fewer works are concerned with two impurities, for example [31, 32]. However, in all these works the impurities are considered in different QD locations. It is very important to study several impurities located on the QD surface because many experimental methods make it possible to dope the surface of the prepared QDs and to dope the QD during its growth.
In this work, we consider the cases of 1, 2, 4, and 6 impurities on the QD surface. Therefore, the aims of this work are
* to determine the electron energy in the QD with several impurities on the QD surface;
* to establish the influence of the impurity number on the electron spectra;
* to calculate the electron binding energy in the QD with surface impurities.
## 2 Nanoheterosystem model and calculation method
We consider a semiconductor spherical QD with radius \(r_{0}\) and electron effective mass \(m_{0}\), surrounded by a semiconductor matrix with electron effective mass \(m_{1}\). The dielectric constants of the QD and matrix have very close values. Furthermore, we assume that the lattice constants of the QD and matrix have very close values too. That is why we do not regard the influence of polarization and deformation. On the QD surface there are impurity ions. We assume that the surface concentration of impurities is not larger than \(\sigma=0.0048\) Å\({}^{-2}\). For example, if \(r_{0}=20\) Å, \(\mathrm{S}_{\mathrm{surface}}=\pi r_{0}^{2}\approx 1256\) Å\({}^{2}\), and we get \(1256\) Å\({}^{2}\cdot 0.0048\) Å\({}^{-2}\approx 6\). Therefore, for the QD radius \(r_{0}=20\) Å, the number of impurities on the QD surface can be 6 or fewer (if there are 6 impurities, the minimum QD radius should be 20 Å).
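A one-line arithmetic check of this estimate (values copied from the text):

```python
import numpy as np

# Hedged check of the impurity-count estimate above.
sigma = 0.0048          # maximum surface concentration, 1/Angstrom^2
r0 = 20.0               # QD radius, Angstrom
S_surface = np.pi * r0**2
print(S_surface, S_surface * sigma)   # ~1256 A^2, ~6 impurities
```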
For convenience, we have written the effective mass Hamiltonian of the described system in units of effective Rydberg energy and effective Bohr radius:
\[\mathbf{\hat{H}}=-\nabla\frac{m_{0}}{m\left(r\right)}\nabla+U\left(r\right)+V \left(\vec{r},\vec{D}_{1},\vec{D}_{2},...,\vec{D}_{N}\right), \tag{1}\]
where the effective mass
\[m\left(r\right)=\left\{\begin{array}{ll}m_{0},&r\leqslant r_{0},\\ m_{1},&r>r_{0},\end{array}\right. \tag{2}\]
\[U\left(r\right)=\left\{\begin{array}{ll}0,&r\leqslant r_{0},\\ U_{0},&r>r_{0},\end{array}\right. \tag{3}\]
is a confinement potential, and
\[V\left(\vec{r},\vec{D}_{1},\vec{D}_{2},...,\vec{D}_{N}\right)=\sum_{j=1}^{N} V_{j}\left(\vec{r},\vec{D}_{j}\right)=-\sum_{j=1}^{N}\frac{2}{\left|\vec{r}- \vec{D}_{j}\right|} \tag{4}\]
is Coulomb potential energy of interaction between \(j\)-th ion (in location \(\vec{D}_{j}\) ) and electron, \(N\) is the number of impurities.
To calculate the electron energy spectrum and wave functions, the Schrodinger equation should be solved. In the general case of many impurities, this equation cannot be solved exactly. That is why we use the plane wave method, which is described in detail in [25, 26, 34]. We extend the method to the case of many impurities on the QD surface. The wave function, which is a solution of the Schrodinger equation, can be expressed in the form:
\[\psi\left(\vec{r}\right)=\sum_{n_{x}=-n_{\max}}^{n_{\max}}\sum_{n_{y}=-n_{\max}}^{n _{\max}}\sum_{n_{z}=-n_{\max}}^{n_{\max}}C_{n_{x},n_{y},n_{z}}\psi_{n_{x},n_{y}, n_{z}}^{(0)}\left(x,y,z\right), \tag{2.5}\]
where \(\psi_{n_{x},n_{y},n_{z}}^{(0)}\left(x,y,z\right)\) are plane waves which form a complete closed system of functions in the domain \(L_{x}\times L_{y}\times L_{z}\), \(n_{\max}\rightarrow\infty\) (but in numerical calculation, \(n_{\max}\) will be limited),
\[\psi_{n_{x},n_{y},n_{z}}^{(0)}\left(x,y,z\right)=\frac{1}{\sqrt{L_{x}L_{y}L_{ z}}}\mathrm{e}^{\mathrm{i}\left\{\left(k_{x}+n_{x}K_{x}\right)x+\left(k_{y}+n_{y}K_{y} \right)y+\left(k_{z}+n_{z}K_{z}\right)z\right\}}, \tag{2.6}\]
where we use a Cartesian coordinate system and consider the system in the large cube \(L_{x}=L_{y}=L_{z}\equiv L\), that is why \(K_{x}=K_{y}=K_{z}\equiv 2\pi/L\).
After substitution (2.5) into the Schrodinger equation with Hamiltonian (2.1), the linear homogeneous system of equations is obtained:
\[\sum_{n_{x},n_{y},n_{z}=-n_{\max}}^{n_{\max}}\left[T_{n^{{}^{\prime}}_{x},n^{ {}^{\prime}}_{y},n^{{}^{\prime}}_{z}}+U_{n^{{}^{\prime}}_{x},n^{{}^{\prime}} _{y},n^{{}^{\prime}}_{z}}+V_{n^{{}^{\prime}}_{x},n^{{}^{\prime}}_{y},n^{{}^{ \prime}}_{z}}+V_{n^{{}^{\prime}}_{x},n^{{}^{\prime}}_{y},n^{{}^{\prime}}_{z}}-E \delta_{n^{{}^{\prime}}_{x},n^{{}^{\prime}}_{y},n^{{}^{\prime}}_{z}}\right]C_ {n_{x},n_{y},n_{z}}=0, \tag{2.7}\]
where
\[T_{n^{{}^{\prime}}_{x},n^{{}^{\prime}}_{y},n^{{}^{\prime}}_{z}} =\left\{\left(k_{x}+n^{{}^{\prime}}_{x}K_{x}\right)\left(k_{x}+n_ {x}K_{x}\right)+\left(k_{y}+n^{{}^{\prime}}_{y}K_{y}\right)\left(k_{y}+n_{y}K_ {y}\right)\right.\] \[+\left.\left(k_{z}+n^{{}^{\prime}}_{z}K_{z}\right)\left(k_{z}+n_ {z}K_{z}\right)\right\}\times\left\{\frac{m_{0}}{m_{1}}\delta_{n^{{}^{\prime} }_{x},n^{{}^{\prime}}_{y},n^{{}^{\prime}}_{z}}+\frac{m_{0}}{m_{0,1}}S\right\}, \tag{2.8}\]
is the matrix element of the kinetic energy, \(m_{0,1}=m_{0}m_{1}/(m_{1}-m_{0})\),
\[S=\left\{\begin{array}{ll}4\pi r_{0}^{3}/(3L^{3}),&n_{x}=n^{{}^{\prime}}_{x} \ \ \text{and}\ \ n_{y}=n^{{}^{\prime}}_{y}\ \ \text{and}\ \ \ n_{z}=n^{{}^{\prime}}_{z},\\ 4\pi/(L^{3}\lambda^{3})\cdot\left(\sin\left(\lambda r_{0}\right)-\lambda r_{0} \cos\left(\lambda r_{0}\right)\right),&n_{x}\neq n^{{}^{\prime}}_{x}\ \ \text{or}\ \ n_{y}\neq n^{{}^{\prime}}_{y}\ \ \text{or}\ \ n_{z}\neq n^{{}^{\prime}}_{z},\end{array}\right. \tag{2.9}\]
\[U_{n^{{}^{\prime}}_{x},n^{{}^{\prime}}_{y},n^{{}^{\prime}}_{z}}=U_{0}\delta_{n ^{{}^{\prime}}_{x},n^{{}^{\prime}}_{y},n^{{}^{\prime}}_{z}}-U_{0}S, \tag{2.10}\]
is the matrix element of the confinement potential,
\[V_{n^{{}^{\prime}}_{x},n^{{}^{\prime}}_{y},n^{{}^{\prime}}_{z}}=\sum_{j=1}^{N} \left\langle n^{{}^{\prime}}_{x},n^{{}^{\prime}}_{y},n^{{}^{\prime}}_{z}\right|V _{j}\left(\vec{r},\vec{D}_{j}\right)\left|n_{x},n_{y},n_{z}\right\rangle, \tag{2.11}\]
is the matrix element of the Coulomb potential, where
\[\left\langle n^{\prime}_{x},n^{\prime}_{y},n^{\prime}_{z}\right|V_{j}\left(\vec{r},\vec{D}_{j}\right)\left|n_{x},n_{y},n_{z}\right\rangle=\frac{3}{R_{0}^{3}}\mathrm{e}^{\mathrm{i}\vec{\lambda}\cdot\vec{D}_{j}}\left\{\begin{array}{ll}R_{0}^{2},&n_{x}=n^{\prime}_{x}\ \text{and}\ n_{y}=n^{\prime}_{y}\ \text{and}\ n_{z}=n^{\prime}_{z},\\ \frac{2-2\cos\left(\lambda R_{0}\right)}{\lambda^{2}},&n_{x}\neq n^{\prime}_{x}\ \text{or}\ n_{y}\neq n^{\prime}_{y}\ \text{or}\ n_{z}\neq n^{\prime}_{z},\end{array}\right. \tag{2.12}\]
\[R_{0}=L\left(\frac{3}{4\pi}\right)^{1/3},\ \ \vec{\lambda}=\frac{2\pi}{L}\left[ \left(n_{x}-n^{{}^{\prime}}_{x}\right)\vec{e}_{x}+\left(n_{y}-n^{{}^{\prime}}_{ y}\right)\vec{e}_{y}+\left(n_{z}-n^{{}^{\prime}}_{z}\right)\vec{e}_{z}\right],\]
where \(\vec{e}_{x},\ \vec{e}_{y},\ \vec{e}_{z}\) are unit vectors. In [25, 26, 34] it was shown that convergence of the results is obtained with \(n_{\max}=7\) and \(L=2.5\ a_{b}^{*}+2r_{0}\). Moreover, in [34] it was substantiated that with those parameters the results do not depend on the wave vector \(\left(k_{x},k_{y},k_{z}\right)\) in the range \(\left[0\ldots 2\pi/L\right]\). Our results converge as well, even with many impurities, for the mentioned parameters.
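To make the construction above concrete, here is a minimal sketch that assembles and diagonalizes the plane-wave Hamiltonian for the bare spherical well, i.e., with the Coulomb term (2.11) omitted for brevity. \(n_{\max}\) is reduced from the paper's 7 to 2 to keep the matrix small, so the levels are only rough estimates; the material parameters are those of the GaAs/Al\({}_{x}\)Ga\({}_{1-x}\)As system used in the next section.

```python
import numpy as np
from itertools import product

# Hedged sketch of the plane-wave method for the bare spherical well:
# the Coulomb term (2.11) is omitted for brevity, and n_max is reduced
# from the paper's 7 to 2, so the levels are only rough estimates.
m0, m1, U0_meV, eps = 0.067, 0.1, 297.0, 13.2   # GaAs/AlGaAs parameters
Ry = 13605.7 * m0 / eps**2          # effective Rydberg, meV
aB = 0.529 * eps / m0               # effective Bohr radius, Angstrom
r0 = 70.0 / aB                      # QD radius in units of a_B*
L = 2.5 + 2.0 * r0                  # box size L = 2.5 a_B* + 2 r0
U0 = U0_meV / Ry
K = 2 * np.pi / L
n_max = 2
basis = list(product(range(-n_max, n_max + 1), repeat=3))
nb = len(basis)

def S_elem(dn):                     # overlap over the well region, eq. (2.9)
    lam = K * np.linalg.norm(dn)
    if lam == 0.0:
        return 4 * np.pi * r0**3 / (3 * L**3)
    return 4 * np.pi / (L**3 * lam**3) * (np.sin(lam * r0) - lam * r0 * np.cos(lam * r0))

H = np.zeros((nb, nb))
m01 = m0 * m1 / (m1 - m0)
for i, n1 in enumerate(basis):
    for j, n2 in enumerate(basis):
        S = S_elem(np.subtract(n1, n2))
        delta = 1.0 if i == j else 0.0
        kin = K**2 * np.dot(n1, n2) * (m0 / m1 * delta + m0 / m01 * S)  # eq. (2.8), k = 0
        H[i, j] = kin + U0 * delta - U0 * S                             # plus eq. (2.10)

E = np.linalg.eigvalsh(H)
print("lowest levels (meV):", np.round(E[:4] * Ry, 1))
```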
## 3 Analysis of the obtained results
The calculations were performed using the physical parameters of GaAs/Al\({}_{x}\)Ga\({}_{1-x}\)As semiconductor heterostructure, \(x\) = 0.4, \(m_{0}\) = 0.067\(m_{e}\), \(m_{1}\) = 0.1\(m_{e}\), \(U_{0}\) = 297 meV, \(\varepsilon\) = 13.2, where \(m_{e}\) is the mass of a free electron in vacuum. We consider four cases of the location of impurities on the QD surface (figure 1):
* one impurity \(\vec{D_{1}}=(0,0,r_{0})\);
* two impurities \(\vec{D_{1}}=(0,0,r_{0})\), \(\vec{D_{2}}=(0,0,-r_{0})\);
* four impurities \(\vec{D_{1}}=1/\sqrt{3}(r_{0},r_{0},r_{0})\), \(\vec{D_{2}}=1/\sqrt{3}(-r_{0},-r_{0},r_{0})\), \(\vec{D_{3}}=1/\sqrt{3}(-r_{0},r_{0},-r_{0})\), \(\vec{D_{4}}=1/\sqrt{3}(r_{0},-r_{0},-r_{0})\);
* six impurities \(\vec{D_{1}}=(0,0,r_{0})\), \(\vec{D_{2}}=(0,0,-r_{0})\), \(\vec{D_{3}}=(0,r_{0},0)\), \(\vec{D_{4}}=(0,-r_{0},0)\), \(\vec{D_{5}}=(r_{0},0,0)\), \(\vec{D_{6}}=(-r_{0},0,0)\).
We perform the calculations for the aforementioned cases in order to define the dependence of several electron energy levels on the QD radius. Moreover, we followed the condition (surface concentration of impurities is not larger than \(\sigma\) = 0.0048 A\({}^{-2}\)). If there is one impurity ion or two ions on the QD surface, the electron energy spectrum is like the one presented in figure 2.
From figure 2 one can see that the electron energy levels decrease when the QD radius increases. In the case of one impurity, we can also use the method of solving the Schrodinger equation presented in [22]. In the case of two diametrically located ions on the QD surface, we also used the method of solving the Schrodinger equation presented in [32]. All the methods yield the same results for one and two impurity ions, respectively. The divergence is no more than 5%. This comparison of the obtained results with the results obtained by other methods [22, 32] makes it possible to label the levels in figure 2:
Figure 1: (Colour online) Impurities on the QD surface.
Figure 2: Electron energy spectrum (\(s\)- and \(p\)-states) of the QD with one impurity ion (A) and two diametrically located ions (B) on the QD surface.
1 - first \(s\)-state; 2,3 - first \(p\)-states (magnetic quantum number \(m_{l}=0,\pm 1\)). The splitting of \(p\)-states can be explained by the violation of spherical symmetry (when impurities are not in the QD center). This splitting is present for all QD radii. For small QD radii, the splitting is very small but still present. As in our previous works [22, 32], the presence of two diametrically located ions on the QD surface causes a smaller electron energy than in the case of one ion on the QD surface. These two cases A) and B) demonstrate that the plane wave method can be successfully used for the C) and D) cases of impurities located on the QD surface.
The results of the calculations are presented in figure 3 (C and D cases) and in table 1. In figure 3 we can see the first \(s\)-level (curves 1) and the first \(p\)-level (curves 2). When there are 4 or 6 impurities on the QD surface, we do not observe a splitting of \(p\)-levels in figure 3 (see also table 1). In the cases of one (case A) or two (case B) diametrically located impurities, \(p\)-levels split. Therefore, we can conclude that in the C) and D) cases the symmetry of the location of the impurities does not cause a \(p\)-level splitting.
Let us see what happens with \(d\)-like levels. In the A) and B) cases, we also got a splitting of \(d\)-like levels [24, 32] by \(|m_{l}|=0,1,2\) (we got 3 levels). Those results are explained by the violation of spherical symmetry and by the existence of cylindrical symmetry. For cylindrical symmetry, a "good quantum number" is \(|m_{l}|\). Therefore, the number of split levels depends on the possible number of values of \(|m_{l}|\). In the C) case, there is tetrahedral symmetry, and crystal field theory and symmetry analysis [35] "say" that \(d\)-levels should be split into two levels: a) a twofold degenerate level; b) a threefold degenerate level. In the D) case, there is octahedral symmetry and, according to crystal field theory and symmetry analysis [35], the \(d\)-level should be split into two levels: a) a threefold degenerate level; b) a twofold degenerate level. One can see that in the C) and D) cases the structure of the \(d\)-level splitting is
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline \multicolumn{3}{|c|}{4 ions of impurity} & \multicolumn{2}{|c|}{6 ions of impurity} \\ \hline \(E\), meV & level & \(E\), meV & level \\ \hline
10.7 & s (\(l\) = 0, \(m_{l}\) = 0) & \(-20.3\) & s (\(l\) = 0, \(m_{l}\) = 0) \\ \hline
85.4 & p (\(l\) = 1, \(m_{l}\) = \(-1\)) & 54.8 & p (\(l\) = 1, \(m_{l}\) = \(-1\)) \\
85.4 & p (\(l\) = 1, \(m_{l}\) = 0) & 54.8 & p (\(l\) = 1, \(m_{l}\) = 0) \\
85.4 & p (\(l\) = 1, \(m_{l}\) = 1) & 54.8 & p (\(l\) = 1, \(m_{l}\) = 1) \\ \hline
173.9 & d (\(l\) = 2, \(m_{l}\) = \(-1\)) & 138.5 & d (\(l\) = 2, \(m_{l}\) = \(-2\)) \\
173.9 & d (\(l\) = 2, \(m_{l}\) = 0) & 138.5 & d (\(l\) = 2, \(m_{l}\) = 2) \\
173.9 & d (\(l\) = 2, \(m_{l}\) = 1) & 150.7 & d (\(l\) = 2, \(m_{l}\) = \(-1\)) \\ \hline
180.3 & d (\(l\) = 2, \(m_{l}\) = \(-2\)) & 150.7 & d (\(l\) = 2, \(m_{l}\) = 0) \\
180.3 & d (\(l\) = 2, \(m_{l}\) = 2) & 150.7 & d (\(l\) = 2, \(m_{l}\) = 1) \\ \hline
216.6 & s (\(l\) = 0, \(m_{l}\)=0) & 187.4 & s (\(l\) = 0, \(m_{l}\)=0) \\ \hline \end{tabular}
\end{table}
Table 1: Electron energy levels in the QD with 4 and 6 impurities on the surface. QD radius is \(r_{0}=70\) Å.
Figure 3: (Colour online) Electron energy spectrum (\(s\)- and \(p\)-states) of the QD with 4 impurities ions (C) and 6 ions (D) on the QD surface.
the same, but the order of the splitting levels is different. Moreover, in both octahedral symmetry and tetrahedral symmetry field, the \(p\)-level does not split (which can be seen in figure 3). The results are presented in table 1.
In this paper, we also calculate the ground state electron binding energies for the cases of 1, 2, 4, and 6 impurities on the QD surface. The calculation results are presented in figure 4. One can see that an increase in the number of impurities on the QD surface causes an increase of the binding energy due to the presence of additional potential energy [the increase of \(N\) in (2.11)].
## 4 Conclusion
In this work we have used the plane wave functions method for the calculation of the electron energy levels in the spherical QD with several impurities on its surface. The obtained results show that for 1 and 2 diametrically located impurity ions there is a cylindrical symmetry, and we get the splitting of energy levels by \(|m_{l}|\). In the case of 4 impurities located at the vertices of a regular tetrahedron on the QD surface and in the case of 6 impurities located at the vertices of a regular octahedron, \(s\)- and \(p\)-levels do not split. But \(d\)-levels split into: one threefold degenerate level and one twofold degenerate level (for 4 impurities); one twofold degenerate level and one threefold degenerate level (for 6 impurities). The number of split \(d\)-levels is the same, but the order is opposite. Moreover, we have shown that for a larger number of surface impurities, the binding energy increases.
The proposed methodology can be used and extended to non-spherical closed and open [36] QDs with donor and acceptor impurities on their surface. These calculations will be performed in our next works.
|
2310.14613 | Optimal Modulation Current for Gain-Switching Lasers | This paper formally shows that an exponentially rising current is optimal in
terms of resistive ohmic loss for driving a semiconductor laser into the
gain-switching mode. A metric to quantify the quality of laser operation that
measures the similarity of a generated optical pulse to the delta function is
proposed. Several circuit implementations to approximate exponentially rising
current are developed, including using a driver circuit with BJT output stage,
a network of RLC circuits, and a saturating inductor. An experimental
comparison between a state-of-the-art sinewave resonant driver circuit and a
directly driven laser is performed that favors the latest variant of the
driver. | Alex Borisevich | 2023-10-23T06:43:14Z | http://arxiv.org/abs/2310.14613v2 | # Optimal Modulation Current for Gain-Switching Lasers
## Abstract
This paper formally shows that an exponentially rising current is optimal in terms of resistive ohmic loss for driving a semiconductor laser into the gain-switching mode. A metric to quantify the quality of laser operation that measures the similarity of a generated optical pulse to the delta function is proposed. Several circuit implementations to approximate exponentially rising current are developed, including using a driver circuit with BJT output stage, a network of RLC circuits, and a saturating inductor. An experimental comparison between a state-of-the-art sinewave resonant driver circuit and a directly driven laser is performed that favors the latest variant of the driver.
## 1 Introduction
Gain-switching is a technique used in lasers to generate short-duration optical pulses. In a gain-switched laser, the gain medium (the material that amplifies the light) is modulated or switched on and off rapidly, leading to the emission of pulsed laser light. This is in contrast to continuous-wave (CW) lasers that emit a steady beam of light.
The basic principle behind gain-switching involves rapidly changing the population inversion in the gain medium. Population inversion is a condition where more atoms or molecules in the gain medium are in an excited state than in the ground state, which is necessary for laser amplification to occur. By quickly "switching" the gain medium from a low-population-inversion state to a high-population-inversion state, a burst of photons is emitted as the excited particles rapidly decay back to the ground state, resulting in a short-duration pulse of laser light.
Gain-switched lasers are valued for their ability to generate optical pulses with durations ranging from picoseconds to nanoseconds, typically an order of magnitude shorter than applied electrical pulses. These short-duration pulses have applications in telecommunications, laser material processing, spectroscopy, and medical imaging.
It is well known that the electrical-to-optical efficiency of a laser in the gain-switching mode is extremely low, on the order of a few percent. In most applications this is acceptable, but in some applications, like wearable or portable devices, the efficiency of the laser is critical for battery life and for the stability of laser parameters, which drift with temperature. The laser inefficiency problem is further amplified at high repetition rates due to the linear scaling of the power dissipation with frequency.
Laser driving circuits for high speed lasers could be loosely divided into two groups:
1. With a direct electrical coupling of the switching elements to the laser diode, for example [1], [2], [3].
2. With a capacitive coupling to the laser where a capacitor is used to store energy which is pumped into the laser, for example [4], [5], [6], [7].
There have been a number of attempts [1], [2] to increase the efficiency of laser operation by customizing the modulation current profile. The presented approaches are based on staggering the modulation current profile into two phases: slow and fast dynamics. The slow part is used to precharge the laser to build up the lasing carrier density threshold, while the fast part, which is required to be a very narrow current spike, injects carriers into the precharged laser.
To our knowledge, the problem of energy efficiency of gain-switching lasers is not systematically addressed in scientific publications and patents, except for a series of papers [13], [14], [15], [16]. In particular, in [13] it was observed from numerical simulations that a hyperbolic tangent input provides the maximum power along with the minimum FWHM.
In this paper we demonstrate that an exponential modulation current is optimal in terms of minimizing the electrical losses of gain-switching lasers, along with some concepts for implementing this modulation current with electrical circuits. We also introduce a metric to evaluate the quality of the optical pulses and compare a state-of-the-art resonant driver circuit with a circuit that generates a more optimal modulation current waveform.
## 2 Optimal Precharge Current Profile
In this section we formally prove that the optimal modulation current to start lasing in the gain-switching mode has the form \(I(t)=A(T,N_{th})\cdot\exp(t/\tau_{N})\), where \(A(T,N_{th})\) is a constant that is a function of the pulse duration \(T\) and the lasing threshold carrier density \(N_{th}\). The exponential growth rate constant \(\tau_{N}\) is a physical characteristic of the laser.
### State Space Model
The state-space model [1] of the gain switching laser is
\[\begin{split}\dot{N}=\frac{I}{eV}-\frac{N}{\tau_{N}}-g(N,S)\\ \dot{S}=\Gamma\cdot g(N,S)-\frac{S}{\tau_{P}}+\frac{\Gamma\beta N }{\tau_{N}}\end{split} \tag{1}\]
where \(N\) and \(S\) are the carrier density and photon density, \(I\) is the injected current, all other constants are the laser physical parameters, namely: \(\tau_{N}\) is total spontaneous emission carrier lifetime, \(\tau_{P}\) is average photon lifetime inside the cavity, \(\Gamma\) is mode confinement factor, \(\beta\) is fraction of the spontaneous emission, \(e\) is elementary charge, \(V\) is active region volume.
The \(g(N,S)\) is a nonlinear term, which models gain of the medium
\[g(N,S)=g_{0}\frac{(N-N_{t})\cdot S}{1+\epsilon S} \tag{2}\]
where \(g_{0}\) is gain slope constant, \(N_{t}\) is carrier density at transparency, \(\epsilon\) is gain compression factor.
The gain switching mode of laser diode operation can be described as follows (Figure 1). A current pulse \(I(t)\) is applied to a laser. Without being significantly consumed by stimulated emission, injected electrons rapidly build the carrier density \(N\) up to threshold density \(N_{th}\). After that, the population inversion can be achieved, and the laser begins to emit the light pulse, which is modeled
by the second equation for \(S\), and especially the first term \(\Gamma\cdot g(N,S)\). When the stimulated emission begins to consume the carriers significantly, which is modeled by the \(-g(N,S)\) term in the first equation, the population inversion reaches its maximum value. With the generation of laser pulses, the carrier density \(N\) drops to the lasing threshold, at which point the emission \(S\) reaches its peak value. At this time, the current should be terminated quickly to suppress secondary optical oscillations.
The lasing threshold carrier density is defined by a condition, when the medium gain is higher than cavity losses:
\[N_{th}=N_{t}+\frac{1}{\tau_{P}\Gamma g_{0}} \tag{3}\]
The minimum current to achieve the laser effect in steady state is called the threshold current \(I_{th}\):
\[I_{th}=\frac{eV}{\tau_{N}}N_{th} \tag{4}\]
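For orientation, here is a minimal sketch evaluating (3) and (4); the parameter values are illustrative order-of-magnitude assumptions, not data for any specific diode.

```python
# Hedged sketch: evaluating N_th (3) and I_th (4) for illustrative,
# order-of-magnitude laser parameters (assumed, not fitted to a device).
e = 1.602e-19                       # elementary charge, C
V = 5e-11                           # active region volume, cm^3
tau_N, tau_P = 1e-9, 1e-12          # carrier and photon lifetimes, s
Gamma, g0, Nt = 0.3, 2.5e-6, 1e18   # confinement, gain slope (cm^3/s), transparency (cm^-3)

N_th = Nt + 1.0 / (tau_P * Gamma * g0)
I_th = e * V * N_th / tau_N
print(f"N_th = {N_th:.2e} cm^-3, I_th = {I_th*1e3:.1f} mA")
```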
### Deriving the Optimal Trajectory
The dynamics of the carrier density before reaching the lasing threshold can be approximated linearly by letting \(g(N,S)=0\) which gives:
\[\dot{N}\approx\frac{I}{eV}-\frac{N}{\tau_{N}} \tag{5}\]
An approximate optimal control problem can be formulated as follows:
- find a current trajectory \(I^{*}(t)\) which reaches the threshold carrier density \(N_{th}\) by an arbitrary time \(T\),
- minimize dissipated in the laser diode energy by minimizing the following performance function:
\[J=\int_{0}^{T}I^{2}(t)dt\rightarrow\min \tag{6}\]
And the linear approximation of the carrier density \(N\) dynamics is just a first-order linear dynamical system:
Figure 1: Gain switching principle illustration (from [1])
\[\dot{x}=-ax+u/b \tag{7}\]
where \(x=N\) is state variable, \(u=I\) is control input, \(a,b>0\) are coefficients:
\[\begin{array}{c}a=\frac{1}{\tau_{N}}\\ b=eV\end{array} \tag{8}\]
The performance function becomes:
\[J=\int_{0}^{T}u^{2}(t)dt\rightarrow\min \tag{9}\]
Substitution of \(u=b\dot{x}+abx\) to \(J\) gives:
\[J=\int_{0}^{T}(b\dot{x}+abx)^{2}dt\rightarrow\min \tag{10}\]
the corresponding cost functional (integrand) is:
\[V=(b\dot{x}+abx)^{2}=a^{2}b^{2}x^{2}+2ab^{2}\dot{x}x+b^{2}(\dot{x})^{2} \tag{11}\]
In order to find the optimal trajectory for \(x\), the Euler-Lagrange equation needs to be satisfied [8]:
\[\frac{\partial V}{\partial x}-\frac{d}{dt}\frac{\partial V}{\partial\dot{x}}=0 \tag{12}\]
over the optimal trajectory \(x^{*}(t)\).
Substituting (11) into (12) and simplifying, the corresponding Euler-Lagrange equation becomes a second-order linear ordinary differential equation:
\[\ddot{x}-a^{2}\cdot x=0 \tag{13}\]
The solution of this equation is
\[x(t)=C_{1}e^{at}+C_{2}e^{-at} \tag{14}\]
where the constants \(C_{1}\) and \(C_{2}\) are evaluated using the given boundary conditions: \(x(0)=0\), \(x(T)=N_{th}\).
\[x(t)=x(T)\frac{\sinh(at)}{\sinh(aT)} \tag{15}\]
The optimal control input (current) trajectory can be found as:
\[u=b\cdot(\dot{x}+ax) \tag{16}\]
which gives:
\[u(t)=\frac{b\cdot x(T)}{\sinh(aT)}\cdot(a\cosh(at)+a\sinh(at)) \tag{17}\]
and after obvious simplifications:
\[u(t)=\frac{a\cdot b\cdot x(T)}{\sinh(a\cdot T)}e^{at} \tag{18}\]
or in original quantities:
\[I(t)=\frac{eVN_{th}}{\tau_{N}\sinh(T/\tau_{N})}\exp(t/\tau_{N}) \tag{19}\]
Examples of optimal current profiles (19) for various pulse widths \(T\) are demonstrated in Figure 2 for a model of 0.5 W semiconductor laser diode.
### Pulse Duration
An arbitrary (but fixed) duration \(T\) of the control pulse is used in the optimal control problem setting (6). It is interesting to find out what duration \(T\) is optimal for the energy efficiency.
The performance function \(J\) can be evaluated by substituting optimal control \(u(t)\) given by equation (19):
\[J(T)=\int_{0}^{T}u^{2}(t)dt=\frac{a^{2}b^{2}x(T)^{2}}{\sinh^{2}(aT)}\int_{0}^{T }e^{2at}dt=ab^{2}x(T)^{2}\frac{e^{aT}}{\sinh(aT)}=\frac{e^{2}V^{2}N_{th}^{2} \cdot\exp(T/\tau_{N})}{\tau_{N}\sinh(T/\tau_{N})} \tag{20}\]
The obtained function \(J(T)\) can be evaluated for different \(T\), and the results are given in Figure 3.
Figure 2: Examples of optimal current profiles (19) for various pulse durations \(T\)
From the \(J(T)\) dependence it follows that longer pulses are more optimal from the point of view of minimizing electrical losses. It can be formally proven that there is a lower limit of the energy loss, obtained for an infinitely long pulse:
\[J_{\min}=\lim_{T\rightarrow\infty}J(T)=\frac{e^{2}V^{2}N_{th}^{2}}{\tau_{N}} \cdot\lim_{T\rightarrow\infty}\frac{\exp(T/\tau_{N})}{\sinh(T/\tau_{N})}=2 \frac{e^{2}V^{2}N_{th}^{2}}{\tau_{N}} \tag{21}\]
However, the optical power is proportional to the instantaneous current at the moment of lasing generation. Thus, there is a trade-off between the optimality of conduction loss and optical power: shorter pulses give higher optical output but are less efficient.
This can be investigated by varying \(T\) and simulating the full nonlinear model (1), calculating efficiency as a quantity proportional to
\[\eta\sim\frac{\int_{0}^{\infty}S(t)dt}{\int_{0}^{T}I^{2}(t)dt} \tag{22}\]
The results of evaluating the optimal trajectories using the full laser model (1) are presented in Figures 4 and 5 below.
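A stripped-down version of such a full-model run (a single value of \(T\)) might look like the sketch below; it continues the illustrative parameter set from the previous sketch, scales the target carrier density roughly 20% above \(N_{th}\) so the pulse clearly develops, and idealizes the turn-off.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hedged sketch: rate equations (1)-(2) driven by the optimal current (19).
# All parameters are illustrative assumptions, not fitted to any diode.
e, V = 1.602e-19, 5e-11                  # C, cm^3
tau_N, tau_P = 1e-9, 1e-12               # s
Gamma, g0, Nt = 0.3, 2.5e-6, 1e18        # -, cm^3/s, cm^-3
beta, eps = 1e-4, 1e-17                  # -, cm^3
N_th = Nt + 1.0 / (tau_P * Gamma * g0)
T, N_target = 2e-9, 1.2 * N_th           # ~20% overdrive so lasing develops

def I_drive(t):                          # eq. (19), idealized turn-off at T
    if t > T:
        return 0.0
    return e * V * N_target / (tau_N * np.sinh(T / tau_N)) * np.exp(t / tau_N)

def rhs(t, y):
    N, S = y
    g = g0 * (N - Nt) * S / (1 + eps * S)
    return [I_drive(t) / (e * V) - N / tau_N - g,
            Gamma * g - S / tau_P + Gamma * beta * N / tau_N]

sol = solve_ivp(rhs, (0, T + 1e-9), [0.0, 1.0], method="LSODA", max_step=2e-12)
S = sol.y[1]
print(f"peak S = {S.max():.2e} cm^-3 at t = {sol.t[S.argmax()]*1e9:.2f} ns")
```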
Figure 3: Energy loss performance (6) as a function of pulse duration \(T\)
Figure 4: Optimal current trajectories (19) and optical pulses obtained by integrating (1) for various pulse durations \(T\)
It is noticeable from Figure 4 that the peak current of the optimal control becomes constant starting from some moment of time \(T\). This fact can be formally established by considering the limit
\[\lim_{T\rightarrow\infty}I(T)=\frac{eVN_{th}}{\tau_{N}}\lim_{T \rightarrow\infty}\frac{\exp(T/\tau_{N})}{\sinh(T/\tau_{N})}=2\frac{eVN_{th}}{ \tau_{N}}=2I_{th} \tag{23}\]
Thus, by increasing \(T\) the peak current approaches a constant value equal to twice the threshold generation current (4).
There are two conclusions from the analysis of energy losses as a function of pulse duration \(T\):
1. The optimal peak current \(I(T)\), starting from sufficiently long pulses, approaches \(2I_{th}\).
2. The efficiency ratio \(\eta\) defined in (22) increases with increasing pulse duration and also becomes constant, starting from sufficiently long current pulses.
In addition, the physical feasibility of the optimal current waveform implies a limitation on the rate of change of current due to the presence of parasitic inductances in the circuit:
\[\left|\frac{d}{dt}I(t)\right|\leq(dI/dt)_{max} \tag{24}\]
To meet the slew rate limitation requirement (24) note that the time derivative of the exponential function only increases with time \(\dot{I}(T)\geq\dot{I}(t)\), so it is enough to check only \(\dot{I}(T)\)
\[\dot{I}(T)=\frac{eVN_{th}}{\tau_{N}^{2}\sinh(T/\tau_{N})}\exp(T/ \tau_{N})\leq(dI/dt)_{max} \tag{25}\]
Solving the inequality above, the duration \(T\) should be longer than
Figure 5: Efficiency measure \(\eta\) as a function of pulse duration \(T\) calculated by integrating (1)
\[T\geq\frac{\tau_{N}}{2}\ln\left(\frac{B}{B-2}\right) \tag{26}\]
where
\[B=\frac{\tau_{N}^{2}}{eVN_{th}}\cdot(dI/dt)_{max} \tag{27}\]
Based on the numerical and analytical fact that the longer the current pulse, the more optimal it is in terms of loss energy \(J\), it becomes important to minimize parasitic inductances in the circuit to turn off the current quickly after time \(T\) to avoid optical afterpulsing.
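A quick numerical check of this bound, solving (25) with equality for an assumed slew-rate limit (parameters again continue the illustrative set):

```python
import numpy as np

# Hedged check of the slew-rate bound: at the minimal duration T the
# derivative of the optimal current should meet (dI/dt)_max exactly.
# Parameter values continue the illustrative set used above.
e, V, tau_N = 1.602e-19, 5e-11, 1e-9
N_th = 2.33e18          # cm^-3, from the threshold sketch above
dIdt_max = 1e8          # A/s, an assumed circuit limit

B = tau_N**2 / (e * V * N_th) * dIdt_max
T_min = 0.5 * tau_N * np.log(B / (B - 2))       # eq. (26), valid for B > 2
dIdt_T = e * V * N_th / (tau_N**2 * np.sinh(T_min / tau_N)) * np.exp(T_min / tau_N)
print(B, T_min * 1e9, dIdt_T)                   # dIdt_T should equal dIdt_max
```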
## 3 An Optical Pulse Shape Metric
In order to evaluate the shape of the produced optical pulses, a metric can be proposed which is the amplitude (peak) of the signal normalized by the pulse energy. In other words, for a given signal \(f(t)\) of finite duration and finite energy, the metric is defined as the ratio of the maximum value (the \(\infty\)-norm) to the integral (the 1-norm) of the signal:
\[\rho[f]=\frac{\|f\|_{\infty}}{\|f\|_{1}}=\frac{\max_{t\in[0,T]}f(t)}{\int_{0}^ {T}f(t)dt} \tag{28}\]
The definition of the metric is inspired by a crest factor or peak-to-RMS ratio [9], which is used for signal waveforms characterization in electrical engineering
For a uniformly sampled discrete signal \(y_{k}\), the integral is obviously just a sum of the signal samples over its duration, so the discrete time version of the \(\rho\) is
\[\rho[y]=\frac{\max_{k}y_{k}}{\Delta T\sum_{k}y_{k}} \tag{29}\]
where \(\Delta T\) is sampling interval.
As can be seen, the unit of \(\rho\) is Hz, or 1/s.
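A minimal implementation of (29), with two synthetic pulses over the same window illustrating that the narrower pulse scores higher (both waveforms are made up for the demonstration):

```python
import numpy as np

# Hedged sketch of the discrete metric (29) for a uniformly sampled,
# non-negative waveform y with sampling interval dT (seconds).
def rho(y, dT):
    y = np.asarray(y, dtype=float)
    return y.max() / (dT * y.sum())

t = np.arange(0, 1e-9, 1e-12)                     # 1 ns window, 1 ps steps
rect = np.where(t < 0.5e-9, 1.0, 0.0)             # 0.5 ns rectangular pulse
gauss = np.exp(-0.5 * ((t - 0.5e-9) / 50e-12)**2) # 50 ps-sigma Gaussian
print(rho(rect, 1e-12), rho(gauss, 1e-12))        # narrower pulse scores higher
```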
### Mathematical Properties of \(\rho\)
1. It is obvious that the \(\rho\) metric value calculated for the delta function is infinite:
\[\rho[\delta(t)]=\infty \tag{30}\]
This is pretty apparent from the definition of the delta function \(\delta\):
\[\rho[\delta(t)]=\frac{\max_{t}\delta(t)}{\int_{-\infty}^{+\infty}\delta(t)dt}= \max_{t}\delta(t)=\delta(0)=\infty \tag{31}\]
So the higher the \(\rho\) the closer the signal to a delta function shape.
2. The value of \(\rho\) metric calculated for a rectangular shape pulse \(1(t)\) of unity amplitude and finite duration \(T\) equals to \(1/T\):
\[\rho[1(t)]=\frac{\max_{t}1(t)}{\int_{0}^{T}1(t)dt}=\frac{1}{T} \tag{32}\]
3. For two signals \(f(t)\) and \(g(t)\) of the same energy and finite duration the value of metric \(\rho\) will be higher for the signal of higher amplitude.
Let \(\int_{0}^{T}f(t)=\int_{0}^{T}g(t)=E\), and also \(f_{max}=\max_{t}f(t)\), \(g_{max}=\max_{t}g(t)\). Assume \(f_{max}>g_{max}\), then
\[\rho[f]=\frac{\max_{t}f(t)}{\int_{0}^{T}f(t)}=\frac{f_{max}}{E}>\frac{g_{max}} {E}=\frac{\max_{t}g(t)}{\int_{0}^{T}g(t)}=\rho[g] \tag{33}\]
4. For two signals \(f(t)\) and \(g(t)\) of the same amplitude and finite duration, the value of metric \(\rho\) will be higher for the signal with lower energy (which usually corresponds to a shorter pulse).
Let \(\max_{t}f(t)=\max_{t}g(t)=A\), and also \(\int_{0}^{T}f(t)=E_{f}\), \(\int_{0}^{T}g(t)=E_{g}\). Assume \(E_{f}>E_{g}\), then
\[\rho[f]=\frac{\max_{t}f(t)}{\int_{0}^{T}f(t)}=\frac{A}{E_{f}}<\frac{A}{E_{g}} =\frac{\max_{t}g(t)}{\int_{0}^{T}g(t)}=\rho[g] \tag{34}\]
The above properties indicate that a narrower pulse with a higher peak amplitude will have a higher \(\rho\) metric value.
### Properties of \(\rho\) With Respect to System Responses
Without loss of generality, all signals under consideration are assumed non-negative: \(f(t)\geq 0\).
1. The metric \(\rho\) is invariant for an arbitrary amplitude scaling of the signal:
\[\rho[k\cdot f]=\rho[f] \tag{35}\]
where \(k\) is a scalar constant.
It is apparent from (28) that:
\[\rho[k\cdot f]=\frac{\max_{t}kf(t)}{\int_{0}^{T}kf(t)dt}=\frac{k\cdot\max_{t}f(t)}{k\cdot\int_{0}^{T}f(t)dt}=\frac{\max_{t}f(t)}{\int_{0}^{T}f(t)dt}=\rho[f] \tag{36}\]
2. The value of metric \(\rho\) calculated for the output response of a linear time-invariant system does not exceed the value of metric \(\rho\) calculated for the system input signal.
Let \(h\) be the impulse response of a linear time-invariant system. In other words, the output response of the system to an input signal \(f(t)\) is the convolution \((h*f)(t)\).
Theorem 1.
\[\rho[h*f]\leq\rho[f] \tag{37}\]
Proof.
Proposition 1. \(\|h*f\|_{\infty}\leq\|f\|_{\infty}\cdot\|h\|_{1}\).
This proposition is a particular case of classical Young's convolution inequality: \(\|f*g\|_{r}\leq\|f\|_{p}\cdot\|g\|_{q}\), \(\frac{1}{p}+\frac{1}{q}=\frac{1}{r}+1\) obtained for \(p=r=\infty\) and \(q=1\), and redefining \(g:=h\).
Proposition 2. \(\|h*f\|_{1}=\|f\|_{1}\cdot\|h\|_{1}\)
By definition of convolution,
\[\|h*f\|_{1}=\int\left|\int f(\tau)h(t-\tau)d\tau\right|dt\]
where the time integrals are calculated for their corresponding time intervals.
The absolute value can be propagated under the integral and across the terms of the multiplication, assuming \(f,h\geq 0\):
\[\int\left|\int f(\tau)h(t-\tau)d\tau\right|dt=\int\int|f(\tau)|\cdot|h(t-\tau)|d \tau dt\]
Using Fubini's theorem, the order of double integration can be changed, which formally gives a product of norms:
\[\int\int|f(\tau)|\cdot|h(t-\tau)|d\tau dt=\int|f(\tau)|d\tau\cdot\int|h(t-\tau)|dt=\|f\|_{1}\cdot\|h\|_{1}\]
Then the theorem can be trivially proven using the propositions 1 and 2:
\[\rho[f*h]=\frac{\|f*h\|_{\infty}}{\|f*h\|_{1}}=\frac{\|f*h\|_{\infty}}{\|f\|_{ 1}\cdot\|h\|_{1}}\leq\frac{\|f\|_{\infty}\cdot\|h\|_{1}}{\|f\|_{1}\cdot\|h\|_{ 1}}=\frac{\|f\|_{\infty}}{\|f\|_{1}}=\rho[f] \tag{38}\]
3. If a signal \(f_{a^{*}}\) from a set of signals \(F=\{f_{a}\}\) is optimal in terms of metric \(\rho\), then it remains optimal among all the responses of a linear time-invariant system \(h\) to all the signals \(f_{a}\in F\). I.e. a linear time-invariant system preserves optimality of the signals in terms of \(\rho\).
Theorem 2.
Let's consider a set of signals \(F=\{f_{a}\}\) with a signal \(f_{a^{*}}\) optimal in terms of \(\rho\). When passing all the signals in the \(F\) through a system \(h\), the response \(h*f_{a^{*}}\) to the signal \(f_{a^{*}}\) will be optimal too:
\[\rho[f_{a^{*}}]\geq\rho[f_{a}]\Longrightarrow\rho[h*f_{a^{*}}]\geq\rho[h*f_{a}] \tag{39}\]
The proof is based on the following construction. Every signal in the set \(F=\{f_{a}\}\) other than the optimal \(f_{a^{*}}\) can be considered as a response of a specially constructed LTI system \(h_{a}\) to the optimal signal \(f_{a^{*}}\). In other words, \(h_{a}\) can be found as the deconvolution solution of the equation:
\[f_{a}=h_{a}*f_{a^{*}} \tag{40}\]
Then the statement of the theorem can be formulated as follows:
\[\rho[h*f_{a^{*}}]\geq\rho[h*h_{a}*f_{a^{*}}]=\rho[h*f_{a}] \tag{41}\]
Substituting \(g:=h*f_{a^{*}}\) and using commutativity of the convolution operation, the following can be obtained for the set of \(h_{a}\) systems:
\[\rho[g]\geq\rho[h_{a}*g] \tag{42}\]
which is a true statement according to the previously proved theorem 1.
A practical outcome of the deduced properties is that the metric \(\rho\) can be experimentally measured using limited-bandwidth equipment and by passing the laser light through a bulk medium. If the parameters of the laser are found to be optimal using non-ideal equipment, then the same parameters are optimal when using ideal equipment. And the amplitude of the laser optical pulse can be arbitrarily scaled by the measurement equipment and method (assuming sufficient SNR, of course).
## 4 Hardware Circuit Implementations
The following is a review of possible practical implementations of high-speed current generators with an output current waveform that approximates the optimal one (19).
### Implementation 1: Push-Pull BJT Driver
The emitter current in the active mode of a BJT transistor is modeled by an approximation to the Ebers-Moll model:
\[I_{E}=I_{ES}\left(e^{\frac{V_{BE}}{V_{T}}}-1\right) \tag{43}\]
where \(V_{T}=kT/q\) is the thermal voltage (approximately 26 mV at 300 K), \(I_{E}\) is the emitter current, \(V_{BE}\) is the base-emitter voltage, \(I_{ES}\) is the reverse saturation current of the base-emitter diode, which is a device parameter.
Using this fact, it is straightforward to develop an exponential current generation circuit by applying a linearly increasing base-emitter voltage \(V_{BE}\).
The circuit presented in Figure 7 consists of two complementary BJT transistors S1 and S2. During the turn-on phase, the current flows through the laser diode D and the turned-on transistor S1. During the turn-off phase, the laser diode D is shorted through the transistor S2. The Uctrl is a voltage source of rectangular pulses ranging from 0 V to the power supply voltage U.
Figure 6: A concept of the optimal laser driver as an exponential current source
The circuit operates as follows: when the voltage at point A becomes equal to the power supply voltage U, the voltage at point B starts rising because of the RC circuit consisting of elements R and C. Approximately, the variation of the voltage at node B is linear (on a short time scale). According to the Ebers-Moll equation, the current through the laser diode starts rising exponentially. In the turn-off phase, which is timed to turn off the laser diode just after the first optical pulse, the voltage at point A is driven to 0 V, the transistor S2 is switched on, and the laser diode current is shorted through it. At the same time, the base of transistor S1 is driven through the diode D1 to a 0 V potential, and thus the transistor S1 is switched off. The diode D1 is selected such that its forward voltage drop is lower than the base-emitter threshold voltage of S1. The resistor R1 is used to balance the delays of turning on the transistor S2 and switching off the transistor S1 to minimize the shoot-through current.
Figure 7: Driver topology with BJT transistor as an exponential current source
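A numerical sketch of the principle: applying a linear \(V_{BE}\) ramp to (43) produces an exponentially growing emitter current. \(I_{ES}\) and the ramp rate below are assumed values, not taken from a specific device.

```python
import numpy as np

# Hedged sketch: a linear V_BE ramp applied to the Ebers-Moll relation (43)
# yields an exponentially rising emitter current. I_ES and the ramp rate
# are assumed illustrative values.
VT = 0.026                # thermal voltage at ~300 K, V
I_ES = 1e-14              # base-emitter saturation current, A (assumed)
t = np.linspace(0, 5e-9, 500)
Vbe = 0.70 + 2e7 * t      # linear ramp: 0.70 V plus 20 mV/ns
Ie = I_ES * (np.exp(Vbe / VT) - 1.0)
# the ratio of successive samples is constant => exponential growth
print(Ie[0], Ie[-1], Ie[100] / Ie[99])
```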
### Implementation 2: Multi-Resonant Network
The idea behind the multi-resonant approach is to superpose multiple sine-wave currents. If multiple resonant circuits are loaded into a common resistive load, the current can be approximated as
\[I(t)=-V_{0}\sum_{i}A_{i}\sin(\omega_{i}t)=-V_{0}\sum_{i}\frac{\sin(t/\sqrt{L_{i}C _{i}})}{\sqrt{L_{i}/C_{i}}} \tag{44}\]
assuming a negligible damping factor and a sufficiently small resistive load, and where \(V_{0}\) is the initial voltage of the capacitors.
By a proper selection of inductors \(L_{i}\) and capacitors \(C_{i}\), the current \(I(t)\) can be made sufficiently close to the optimal one (19).
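A sketch of the superposition (44) with three illustrative, non-optimized LC branches, compared against an exponential reference; the sign convention of (44) is dropped for readability, and the target amplitude is assumed.

```python
import numpy as np

# Hedged sketch: summing three LC branch currents per (44) and comparing
# with an exponential target. Component values are illustrative, not an
# optimized design; the sign convention of (44) is dropped.
V0, tau_N = 10.0, 1e-9
L = np.array([40e-9, 20e-9, 8e-9])       # branch inductances, H
C = np.array([100e-12, 47e-12, 22e-12])  # branch capacitances, F
t = np.linspace(0, 5e-9, 1000)
w = 1 / np.sqrt(L * C)                   # resonant frequencies
I = sum(V0 * np.sin(wi * t) / np.sqrt(Li / Ci)
        for wi, Li, Ci in zip(w, L, C))
target = 0.2 * np.exp(t / tau_N)         # exponential reference, A (assumed)
print(I[::200], target[::200])           # coarse comparison of the shapes
```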
A practical circuit of the laser driver which uses this principle is shown in Figure 9.
Figure 8: Simulated current response of the proposed topology (\(I_{BJT}\)) in comparison to the optimal trajectory \(I_{ref}\) calculated for a 1W 850nm diode model for pulse duration of 5 ns
The circuit consists of push-pull transistor stage S1A and S1B. The output of the push-pull state is connected to a battery of multiple LC circuits L1,C1, L2,C2 and so on, all connected in parallel. The laser diode D is connected reversely, anode to ground. A discrete diode Drev is connected anti-parallel to the laser diode to charge the capacitors in LC circuits.
The circuit operates as follows: S1A is on initially and the capacitors C1, C2, ..., CN are charged through Drev. Once the capacitors are charged, S1A is switched off and S1B is switched on. Since S1B is bidirectionally conductive, the current flows from its drain to source through the laser diode D and the battery of LC circuits.
Figure 9: Practical circuit of the laser driver with multi-resonant LC circuit
Even though the current waveform presented in Figure 10 is not sufficiently close to the exponential one when using a circuit with only three branches, there is an obvious improvement in waveform shape in comparison to just one LC branch (i.e., the conventional capacitively coupled circuit [12]). The turn-off current slew rate is also much faster than with a sinusoidal current. Additionally, as any capacitively coupled circuit, this driver turns off naturally, so no precise timing is required.
### Implementation 3: Push-Pull driver and Parallel Capacitor
A push-pull constant-voltage \(V\) driver gives a linearly increasing current during the precharge state due to board and package inductive parasitics \(L\):
\[I(t)\approx\frac{V}{L}t \tag{45}\]
One simple way to improve the shape of the current is to use a capacitor across the load.
Let's consider an equivalent circuit presented in Figure 11. The laser diode is modeled as a resistor R. The push-pull driver is a voltage source V. The board and package inductive parasitics are lumped into inductance L.
Figure 10: Current waveforms of the laser driver with three LC branches. The total output current \(I_{out}\) through the laser diode in comparison to the reference optimal trajectory \(I_{ref}\). The currents through individual LC branches are shown as well \(I_{1},I_{2},I_{3}\), \(I_{out}=I_{1}+I_{2}+I_{3}\)
By calculating the transient response of the circuit, the current through the resistor R in the under-damped case (\(L<4R^{2}C\)) is obtained as follows:
\[I(t)=\frac{V}{R}\left(1-e^{-t/\tau}\cos(at/\tau)-\frac{e^{-t/\tau}}{a}\sin(at/ \tau)\right) \tag{46}\]
where
\[\tau=2RC,\ a=\sqrt{\frac{4R^{2}C}{L}-1} \tag{47}\]
A comparison of the current waveforms with and without capacitor added is given in Figure 12.
Figure 11: Driver topology with capacitor connected across the laser diode
Although the current profile obtained in this topology is not exactly exponential, it is a considerable improvement achieved by a minimal circuit change.
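The improvement can be reproduced by evaluating (46)-(47) directly. Note that with the exact values from Figure 12, \(4R^{2}C=L\) (critically damped), so \(C\) is nudged to 200 pF in this sketch to keep the expression strictly under-damped.

```python
import numpy as np

# Hedged sketch: evaluating the under-damped response (46)-(47). C is
# nudged from the figure's 150 pF to 200 pF so that 4*R^2*C > L holds
# strictly and the under-damped formula applies.
R, C, L, V = 5.0, 200e-12, 15e-9, 5.0
tau = 2 * R * C
a = np.sqrt(4 * R**2 * C / L - 1)
t = np.linspace(0, 5e-9, 500)
I_rlc = V / R * (1 - np.exp(-t / tau) * np.cos(a * t / tau)
                 - np.exp(-t / tau) / a * np.sin(a * t / tau))
I_rl = V / L * t        # bare inductive ramp (45), early-time comparison only
print(I_rlc[-1], I_rl[-1])
```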
Taking the approach practically, it is worth noting that in order to achieve a fast turn-off current slew rate, the pull-down transistor S1B should be connected to the diode through a much lower inductance than the pull-up transistor S1A, as shown in Figure 13.
Figure 12: The current response of the proposed topology (\(I_{RLC}\)) and of the circuit with stray inductance \(L\) only \(I_{RL}\) for load \(R=5\) ohm, \(C=150\) pF and \(L=15\) nH and \(V=5\) V in comparison to optimal trajectory \(I_{ref}\) calculated for a 1W 850nm diode for pulse duration of 5 ns
In order to achieve the higher inductance in the pull-up path, this inductance can be artificially increased by a longer PCB trace.
### Implementation 4: Push-Pull Driver and Saturating Inductor
When the magnetic flux in a discrete inductor approaches its saturation limit, the current starts rising very fast. This effect can be used to generate an approximately exponentially increasing current.
More precisely, the inductance variation of a saturating inductance can be approximated as [10]:
\[L(I)=L_{sat}+\frac{L_{0}-L_{sat}}{2}\left(1-\frac{2}{\pi}\arctan(\sigma\cdot( I-I_{1}))\right) \tag{48}\]
where \(L_{sat}\) is the inductance in the fully saturated regime, \(L_{0}\) is the inductance far from saturation, \(I_{1}\) is defined by \(L(I_{1})=(L_{0}+L_{sat})/2\), and \(\sigma\) is a model parameter.
Figure 13: Practical circuit of the laser driver of proposed topology
The practical circuit implementing this idea is shown in Figure 15. The laser driver circuit consists of a half bridge made by transistors S1A and S1B. The output of the bridge is connected to a laser diode D which has intrinsic parasitic inductance \(L_{diode}\). The nonlinear saturating inductor in between the driver and the diode provides current waveform shaping.
Figure 14: Typical inductance vs current curve of an inductor (from [10])
Figure 15: Practical laser driver circuit shaping using a saturating inductor
The output of the bridge produces rectangular voltage pulses of amplitude \(V\). Let's assume a negligible and constant voltage drop across the laser diode, as well as ideal switches in the bridge. After switching S1A on, the saturating inductance becomes connected to a constant voltage source \(V\), and the current \(I\) is governed by the following equation:
\[\frac{dI}{dt}=\frac{V}{L(I)+L_{diode}} \tag{49}\]
The simulated current output of the circuit \(I_{out}\) in comparison to the reference optimal current \(I_{ref}\) are shown in Figure 16. The pulse duration \(T=10\) ns, the saturating inductor has nominal inductance of \(L_{0}=35\) nH, and saturated inductance of \(L_{sat}=5\) nH, the parasitic diode inductance is \(L_{diode}=5\) nH. The optimal trajectory is calculated for a fitted model of the LDX-3820-860, the 860nm wavelength, 200 \(\mu\)m bar size, 8 W continuous power laser diode in TO-9 package and 750 mA threshold current.
It is worth noting that the saturation current should be quite low in order to generate a current profile suitable for laser diodes. There is a contradiction between the required inductance and the resulting saturation current: very small inductors have very high saturation currents, because
\[I_{sat}\sim\frac{N\cdot B_{sat}\cdot S}{L} \tag{50}\]
where \(N\) is the number of turns, \(S\) is the inductor cross-section area and \(B_{sat}\) is saturating flux density (a material property), L is inductance.
Figure 16: Simulation of the laser diode driver circuit with saturating inductor
This leaves the method with limited applicability (especially for low-power laser diodes), albeit theoretically possible.
## 5 Experimental comparison of the sine-wave-shaped and triangular-shaped current laser drivers
In this section we compare two practical laser driver circuits used in industry. One generates a sinewave-shaped pulse current through the diode, and the other uses a direct connection to a push-pull stage driver. Both circuits were run under the same conditions and operated to achieve the same average power and pulse quality metric \(\rho\).
The laser diode Sharp GH0637AA2G with nominal power 700 mW and 638 nm wavelength is used to benchmark both circuits. This particular laser diode has an isolated package that is connected to neither the anode nor the cathode, which allows benchmarking both circuits.
Figure 17: Resonant laser driver circuit used for the efficiency comparison
The sinewave current driver is based on the well-known resonant capacitive discharge laser driver design [12]. The circuit (Figure 17) operates as follows. The GaN transistor S starts in the off-state and the capacitor C is charged through R and Drev. After the capacitor C is fully charged, the gate driver turns the transistor S on by external command. Since S is bidirectionally conducting, the capacitor C discharges through the laser D and the parasitic inductance L. The C and L form a resonant network, hence the current through the diode D rings sinusoidally. The first half-wave of the current ignites the laser diode D in gain-switching mode. During the subsequent ringing, the energy stored in the capacitor C dissipates quickly and the next period is usually not enough to ignite the diode, but it contributes to the inefficiency of the circuit. The gate driver turns off the transistor S some time after the first half-wave of the current.
Figure 18: Resonant laser driver circuit prototype PCB (the laser diode is on the opposite side of the circuit)
As can be seen, the power consumed by this circuit comes from two sources: the gate driver power supply and the main power supply U of the resonant circuit. So the efficiency can be calculated as:
\[\eta=\frac{P_{optical}}{P_{driver}+P_{main}} \tag{51}\]
where \(P_{optical}\) is output optical power, \(P_{driver}\) is electrical power consumed by the gate driver, \(P_{main}\) is electrical power consumed by the main resonant circuit from the voltage source U.
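As a consistency check, plugging the measured powers from Table 1 below into (51) reproduces the reported efficiency figures:

```python
# Hedged check of (51) against the numbers reported in Table 1 below
# (all powers in mW).
for name, Pdrv, Pmain, Popt in [("resonant", 19.6, 35.0, 0.55),
                                ("direct",   16.85, 0.0, 0.55)]:
    print(name, f"{100 * Popt / (Pdrv + Pmain):.1f} %")
```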
In this particular circuit implementation, the LGM1020 gate driver by TI is used to drive the EPC2036 GaN transistor.
The push-pull circuit (Figure 19) is based on the same LGM1020 gate driver, which contains a high-speed push-pull output stage. The laser is simply connected directly to the push-pull output of the driver. The current pulse width and amplitude are regulated both by adjusting the pulse width of the control signal and the power supply voltage of the gate driver within the acceptable ranges of the LGM1020.
Figure 19: Direct push-pull driver circuit used for the efficiency comparison
The comparison of the two circuits is summarized in Table 1. Both circuits were running at a 2 MHz repetition rate and tuned to produce 0.55 mW of average power. In both cases the power supply voltage of the gate driver was set to 5.4 V, the maximum voltage the driver supports. The voltage of the resonant circuit was set to 10 V.
The optical pulse is acquired by a Tektronix DPO73304SX high-speed scope together with the DX12CF ultrafast photodiode sensor by Thorlabs. The acquired and averaged waveforms are presented in Figures 21 and 22. The limits of integration used to calculate \(\rho\) are shown by the green vertical lines, and the half-maximum level is shown as a red horizontal line.
\begin{table}
\begin{tabular}{c|c|c|c|c|c|c} \hline Circuit & \(P_{driver}\), mW & \(P_{main}\), mW & \(P_{optical}\), mW & Efficiency, \% & \(\rho\), 1/ns & FWHM, ps \\ \hline Resonant & 19.6 & 35 & 0.55 & 1.0 & 6.08 & 110 \\ Direct & 16.85 & 0 & 0.55 & 3.3 & 6.473 & 120 \\ \end{tabular}
\end{table}
Table 1: Measured characteristics of the resonant and direct drive circuits.
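The efficiency figures in Table 1 follow directly from Eq. (51); a minimal Python sanity check with the table values hard-coded (purely illustrative, not part of the measurement chain):

```python
# Sanity check of the efficiency column in Table 1 via Eq. (51):
# eta = P_optical / (P_driver + P_main), all powers in milliwatts.

def efficiency(p_optical: float, p_driver: float, p_main: float) -> float:
    return p_optical / (p_driver + p_main)

print(f"resonant: {efficiency(0.55, 19.6, 35.0):.1%}")   # -> 1.0%
print(f"direct:   {efficiency(0.55, 16.85, 0.0):.1%}")   # -> 3.3%
```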
Figure 20: Direct push-pull driver circuit prototype PCB (the laser diode is on the opposite side of the circuit)
Figure 21: Optical pulse waveforms acquired for the resonant circuit
Evidently the efficiency of the push-pull driving circuit is higher than that of the sinewave one, while matching the same average power level and calculated \(\rho\). A few differences contribute to this:
- in both circuits the gate driver consumes almost the same power, but the direct driver circuit has neither the high-voltage part nor a secondary power supply,
- the direct driver circuit has no dissipative parts such as the precharge resistor R,
- the direct driver circuit produces a more optimal current shape than the resonant one.
## 6 Conclusion
This paper contributes to both the theoretical and practical development of high-speed semiconductor laser driver circuits for the gain-switching operating mode:
* It is shown that an exponentially rising current of the form \(I(t)=A\cdot\exp(t/\tau_{N})\) is optimal in terms of resistive ohmic loss for driving the semiconductor laser into the gain-switching mode.
* A metric \(\rho\) is proposed to quantify the quality of laser operation by measuring the similarity of the generated optical pulse to a delta function.
* Several circuit implementations to approximate exponentially rising current are developed.
* An experimental comparison between the state-of-the-art sinewave resonant driver circuit and a directly driven laser is performed, favoring the latter variant of the driver.
Figure 22: Optical pulse waveforms acquired for the direct push-pull circuit
Future research will be directed towards the implementation and evaluation of the proposed circuits using mixed-signal semiconductor chip technology.
## Acknowledgment
The author is grateful to Dr. Han Ban for technical discussions and suggestions during the research and design work, and to Mikhail Panin for assembling hardware prototypes of the laser drivers.
|
2301.12523 | Generalized Subspace Subcodes in the Rank Metric | Rank-metric codes were studied by E. Gabidulin in 1985 after a brief
introduction by Delsarte in 1978 as an equivalent of Reed-Solomon codes, but
based on linearized polynomials. They have found applications in many areas,
including linear network coding and space-time coding.
They are also used in cryptography to reduce the size of the keys compared to
Hamming metric codes at the same level of security. However, some families of
rank-metric codes suffer from structural attacks due to the strong algebraic
structure from which they are defined.
It therefore becomes interesting to find new code families in order to
address these questions in the landscape of rank-metric codes.
\par In this paper, we provide a generalization of Subspace Subcodes in Rank
metric introduced by Gabidulin and Loidreau. We also characterize this family
by giving an algorithm which allows to have its generator and parity-check
matrices based on the associated extended codes. We have also studied the
specific case of Gabidulin codes whose underlying decoding algorithms are
known. Bounds for the cardinalities of these codes, both in the general case
and in the case of Gabidulin codes, are also provided. | Ousmane Ndiaye, Peter Arnaud Kidoudou, Hervé Tale Kalachi | 2023-01-29T19:37:41Z | http://arxiv.org/abs/2301.12523v2 | # Rank Generalized Subspace subcode
###### Abstract
Rank metric codes were studied by E. Gabidulin in 1985 after a brief introduction by Delsarte in 1978 as an alternative to Reed-Solomon codes based on linearized polynomials. They have found applications in many areas, including linear network coding and space-time coding.
They are also used in cryptography to reduce key sizes compared to Hamming metric codes at the same level of security. Despite this advantage, these codes suffer from structural attacks due to the strong algebraic structure from which they are defined, which explains their elimination from the NIST post-quantum cryptography competition. It therefore becomes interesting to find new families in order to address these questions.
In this paper we provide a generalization of subspace subcodes in rank metric, introduced by Gabidulin and Loidreau. We also characterize this family by giving an algorithm which produces its generator and parity-check matrices based on the associated expanded codes. We bound the cardinality of these codes, both in the general case and in the case of Gabidulin codes, and we study the specific case of Gabidulin codes, whose underlying decoding algorithms are known.
**Keywords:** Coding theory, rank metric, Gabidulin code, cryptography, shortened code, punctured code, subfield subcodes.
## 1 Introduction
Rank metric codes were studied by E. Gabidulin in 1985 [7] after a brief introduction by Delsarte in 1978 [5], who gave a description based on finite fields and their properties. By construction these codes are very close to Reed-Solomon codes [18], since the codewords are obtained by evaluating q-polynomials on a support included in an extension of degree \(m\) of \(\mathbb{F}_{q}\). Gabidulin codes are one of the first general constructions of linear codes that are maximum rank distance (MRD). They have found applications in linear network coding (LRPC codes), for example, when the transmitter and receiver are oblivious to the inner workings and topology of the network.
Rank metric codes have also recently been used in the theory of space-time codes, introduced by Lu and Kumar from Gabidulin codes [14].
Cryptography relies on several hard problems, many of which could be solved by a quantum computer (factorization, discrete logarithm, ...). Coding theory is a serious candidate to offer alternatives in the face of this formidable machine, in particular through the syndrome decoding problem in the Hamming or rank metric. Codes are now widely used in cryptography to provide primitives for encryption, signatures, hashing and pseudo-random number generation. The main reason for using rank metric codes is the possibility of reducing key sizes compared to Hamming metric codes at the same level of security. Despite this advantage, these codes suffer from structural attacks due to the strong algebraic structure from which they are defined. It therefore becomes interesting to find new families in order to address these questions.
Rank metric cryptography began with the GPT cryptosystem and its variants [9, 3] based on Gabidulin codes [14], which are rank metric analogues of Reed-Solomon codes and were proposed as an alternative resistant to the Sidelnikov-Shestakov attack while trying to keep the benefits of the latter codes. However, the strong algebraic structure of these codes has been successfully exploited to attack the original GPT cryptosystem and its variants with the Overbeck attack and its variants [17].
Recently, candidates for public-key cryptosystems based on rank metric codes [15] have been proposed but were all eliminated due to security weaknesses. It therefore becomes theoretically interesting to find new families of codes with a non-trivial rank metric, defined over subextensions of low degree or simply over vector subspaces.
The notion of subcodes in which the components of each codeword belong to the same vector subspace, named subspace subcodes, was first introduced in the Hamming metric for RS codes by Hattori, McEliece and Solomon [11]. A few years later, the same notion was introduced in the rank metric by Gabidulin and Loidreau for applications in public-key cryptography [8, 3].
Recently, Berger, Gueye and Klamti proposed a generalization of these subcodes in the Hamming metric [2], whose constituents lie in subspaces that are not necessarily equal. Shortly after, Berger, Gueye, Klamti and Ruatta [1] proposed a cryptosystem based on quasi-cyclic subcodes of SSRS codes.
In this paper we introduce and characterize the family of generalized subspace subcodes of a code in the rank metric by giving an algorithm which produces its generator and parity-check matrices based on the associated expanded codes. We also bound the cardinality of these codes, both in the general case and in the case of Gabidulin codes, as a generalization of the results obtained in [8]. We further study the specific case of Gabidulin codes, whose decoding algorithms are known.
The document is organized as follows. We start in Section 2 with preliminaries on rank metric codes and derived codes, then Section 3 presents results on generalized subspace subcodes in the rank metric. We finish with Section 4, which presents the results on generalized subspace subcodes of Gabidulin codes.
## 2 Preliminaries
### Rank Metric Codes
The _rank weight_\(w_{R}(x)\) of a word \(x=(x_{1},\ldots,x_{n})\) in an extension field \(\mathbb{F}_{q^{m}}^{n}\) is defined by the maximal number of its elements that are linearly independent over the base field \(\mathbb{F}_{q}\), where \(m\) and \(n\) are positive integers and \(q\) a prime number. The _rank distance_\(d_{R}\) between two words is defined by the rank weight of their difference, i.e. \(d_{R}(x,y)=w_{R}(x-y)\).
A rank metric code is a \(k\)-dimensional subspace of an \(n\)-dimensional vector space over a finite field \(\mathbb{F}_{q^{m}}\), where \(k\) is a positive integer. Given a code \(C\subset\mathbb{F}_{q^{m}}^{n}\), its minimum rank distance is
\[d_{R}(C)=\min_{c_{1}\neq c_{2}\in C}w_{R}(c_{1}-c_{2})=\min_{c\in C\setminus\{0\}}w_{R}(c).\]
Definition 1: (Matrix code [10])
A matrix code \(\mathcal{C}\) of length \(m\times n\) over \(\mathbb{F}_{q}\) is a subspace of the vector space of matrices of size \(m\times n\) with entries in \(\mathbb{F}_{q}\). If \(\mathcal{C}\) is of dimension \(K\), we say that \(\mathcal{C}\) is an \([m\times n,K]_{q}\) matrix code, or simply an \([m\times n,K]\) code if there is no ambiguity.
Definition 2: (matrix code associated to an \(\mathbb{F}_{q^{m}}\)-linear code).
Let C be an \([n,k]\) linear code over \(\mathbb{F}_{q^{m}}\). Each word \(c\) of \(C\) can be associated to an \(m\times n\) matrix over \(\mathbb{F}_{q}\) by representing each coordinate \(c_{i}\) by a column vector \((c_{i_{1}},\ldots,c_{i_{m}})^{\top}\) where \(c_{i}=\sum_{j=1}^{m}c_{ij}\beta_{j}\) with \((\beta_{1},\ldots,\beta_{m})\) being an arbitrary basis of \(\mathbb{F}_{q^{m}}\) viewed as a vector space over \(\mathbb{F}_{q}\) and \(c_{ij}\in\mathbb{F}_{q}\). In other words the \(c_{ij}\)'s are the coordinates of \(c_{i}\) in this basis. The matrix code associated to \(\mathcal{C}\) is of type \([m\times n,km]_{q}\).
The weight of a word \(c\in\mathcal{C}\) is the rank of its associated matrix and does not depend on the choice of the basis. If \(C\) is a \(k\)-dimensional linear code, it is said to be an \([n,k,d=d_{R}(C)]_{q^{m}}\) code. The parameters are related by an equivalent of the Singleton bound for the rank distance (see [7]):
\[Card(C)\leq q^{\min(m(n-d+1),n(m-d+1))}.\]
And a code satisfying the equality \(Card(C)=q^{\min(m(n-d+1),n(m-d+1))}\) is called a Maximum Rank Distance (MRD) code.
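To make the rank weight concrete, here is a minimal Python sketch for \(q=2\): an element of \(\mathbb{F}_{2^{m}}\) is encoded as an integer whose bit \(i\) is its \(i\)-th coordinate in a fixed basis (this encoding is our illustrative assumption, not notation from the paper), and the rank weight is the rank over \(\mathbb{F}_{2}\) of the associated matrix, computed by Gaussian elimination:

```python
def rank_weight(word: list[int], m: int) -> int:
    """Rank weight of a word over GF(2^m): the rank over GF(2) of the m x n
    matrix whose j-th column holds the basis coordinates of word[j]. Elements
    are encoded as integers, bit i being the coefficient of basis vector b_i."""
    pivots = {}            # leading-bit position -> stored pivot column
    rank = 0
    for col in word:
        x = col
        while x:
            hb = x.bit_length() - 1
            if hb in pivots:
                x ^= pivots[hb]   # eliminate the leading bit
            else:
                pivots[hb] = x    # new pivot: this column is independent
                rank += 1
                break
    return rank

# (b1, b2, b1 + b2, 0) has rank weight 2:
print(rank_weight([0b001, 0b010, 0b011, 0b000], m=3))  # 2
```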
### Punctured and Shortened codes
Punctured and shortened codes of a fixed code are important derived codes, both for characterizing certain subcodes and for mounting structural attacks against code-based cryptosystems. We will use them on expanded codes to characterize subspace subcodes in the rank metric. We recall here their definitions and some properties that we need.
**Definition 3**.: _(Punctured code) Let \(\mathcal{C}\subset\mathbb{F}_{q}^{n}\) be a linear code and \(\mathcal{I}\subseteq\{1,..,n\}\) a set of coordinates. The punctured code of \(\mathcal{C}\) on \(\mathcal{I}\), denoted \(P_{\mathcal{I}}(\mathcal{C})\), is defined by:_
\[P_{\mathcal{I}}(\mathcal{C})=\{(c_{i})_{i\in\{1,..,n\}\setminus\mathcal{I}}\ |\ c\in\mathcal{C}\}.\]
_This code is of length \(n-|\mathcal{I}|\)._
**Proposition 1**.: _Let \(\mathcal{C}\subset\mathbb{F}_{q}^{n}\) an \([n,k,d]\)-code, \(\mathcal{I}\subseteq\{1,..,n\}\). Then \(P_{\mathcal{I}}(\mathcal{C})\) is an \([n-|\mathcal{I}|,k^{\prime},d^{\prime}]\)-code such that:_
\[k-|I|\leq k^{\prime}\leq k\ and\ d-|I|\leq d^{\prime}\leq d.\]
**Definition 4**.: _(Shortened code) Let \(\mathcal{C}\subset\mathbb{F}_{q}^{n}\) be a linear code and \(\mathcal{I}\subseteq\{1,..,n\}\) a set of coordinates. The shortened code of \(\mathcal{C}\) on \(\mathcal{I}\), denoted \(S_{\mathcal{I}}(\mathcal{C})\), is defined by:_
\[S_{\mathcal{I}}(\mathcal{C})=P_{\mathcal{I}}(\{c\in\mathcal{C}\ |\ c_{i}=0,\ \forall\ i\in\mathcal{I}\}).\]
_This code is of length \(n-|\mathcal{I}|\)._
**Proposition 2**.: _Let \(\mathcal{C}\subset\mathbb{F}_{q}^{n}\) an \([n,k,d]\)-code, \(\mathcal{I}\subseteq\{1,..,n\}\). Then \(S_{\mathcal{I}}(\mathcal{C})\) is an \([n-|\mathcal{I}|,k^{\prime},d^{\prime}]\)-code such that:_
\[k-|I|\leq k^{\prime}\leq k\ and\ d\leq d^{\prime}.\]
**Theorem 1**.: _(Link between shortened and punctured codes [[12], Theorem 1.5.7]) Let \(\mathcal{C}\subset\mathbb{F}_{q}^{n}\) be an \([n,k,d]\)-code and \(\mathcal{I}\subseteq\{1,..,n\}\). Then_
1. \(S_{\mathcal{I}}(\mathcal{C}^{\perp})=P_{\mathcal{I}}(\mathcal{C})^{\perp}\)__
2. \(S_{\mathcal{I}}(\mathcal{C})^{\perp}=P_{\mathcal{I}}(\mathcal{C}^{\perp})\)__
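Definitions 3-4 and the duality of Theorem 1 can be checked numerically by brute force on toy binary codes. A minimal Python sketch (0-indexed coordinates; exhaustive enumeration is our simplification and is only viable for small parameters):

```python
from itertools import product

def codewords(G):
    """All codewords of the binary code spanned by the rows of G."""
    return {tuple(sum(m * g for m, g in zip(msg, col)) % 2 for col in zip(*G))
            for msg in product([0, 1], repeat=len(G))}

def puncture(C, I):
    """P_I(C): delete the coordinates indexed by I (Definition 3)."""
    n = len(next(iter(C)))
    return {tuple(c[j] for j in range(n) if j not in I) for c in C}

def shorten(C, I):
    """S_I(C): keep codewords vanishing on I, then puncture (Definition 4)."""
    return puncture({c for c in C if all(c[j] == 0 for j in I)}, I)

def dual(C, n):
    """Dual code by exhaustive search over GF(2)^n."""
    return {v for v in product([0, 1], repeat=n)
            if all(sum(a * b for a, b in zip(v, c)) % 2 == 0 for c in C)}

C = codewords([(1, 0, 0, 1, 1), (0, 1, 0, 1, 0), (0, 0, 1, 0, 1)])
I = {4}
assert shorten(dual(C, 5), I) == dual(puncture(C, I), 4)  # Theorem 1, item 1
```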
## 3 Generalized Subspace subcodes in rank metric
In the rank metric, subspace subcodes were introduced in [8] together with a study of their decoding algorithm.
In this section we give a generalization of the subspace subcodes in rank metric introduced in [8] by E. M. Gabidulin and P. Loidreau. First of all, we recall their definition.
**Definition 5** ([8]).: _Let \(\mathcal{C}\) be a matrix code of length \(m\times n\) over \(\mathbb{F}_{q}\) of parameters \([n,k,d=d_{R}(C)]_{q^{m}}\). Let \(V\) be an \(\mathbb{F}_{q}\)-subspace of \(\mathbb{F}_{q^{m}}\) of dimension \(s\leq m\). The subspace subcode of \(C\) over \(V\) is the \(\mathbb{F}_{q}\)-vector space \((C|V)=C\cap V^{n}\)_
\((C|V)\) consists of the codewords whose components lie in the alphabet formed by the subspace V. This code is usually not \(\mathbb{F}_{q^{m}}\)-linear, but it is still \(\mathbb{F}_{q}\)-linear, and linear over some intermediate extension depending on V. By projection, this code also corresponds to a subgroup subcode [13] whose alphabet is \(\mathbb{F}_{q}^{dim(V)}\). In the following we introduce a generalization by choosing a different vector subspace for each coordinate.
**Definition 6**.: _Let \(\mathcal{C}\) be a linear code of length \(n\) over \(\mathbb{F}_{q^{m}}\) of parameters \([n,k,d=d_{R}(C)]_{q^{m}}\). Let \(V_{1},...,V_{n}\) be a set of n \(\mathbb{F}_{q}\)-subspaces of \(\mathbb{F}_{q^{m}}\) of dimensions \(s_{i}\leq m\), respectively. Set \(W=\prod\limits_{i=1}^{n}V_{i}\), consisting of n-tuples with the \(i\)-th coordinate in \(V_{i}\). The Rank Generalized Subspace Subcode of \(C\) relative to \(W\) is the \(\mathbb{F}_{q}\)-vector space \(RGSS_{W}(C)=C\cap W\)_
In [8] all the \(V_{i}\) are equal, whereas here they are not necessarily so.
Let \(B=\{b_{1},b_{2},...,b_{m}\}\) be a basis of \(\mathbb{F}_{q^{m}}\) as an \(\mathbb{F}_{q}\)-vector space. We can construct the map \(\phi_{B}:\mathbb{F}_{q^{m}}\rightarrow\mathbb{F}_{q}^{m}\) defined by: if \(x=\sum\limits_{i=1}^{m}x_{i}b_{i}\), \(x_{i}\in\mathbb{F}_{q}\), then \(\phi_{B}(x)=(x_{1},x_{2},...,x_{m})\). This map is extended to \(\mathbb{F}_{q^{m}}^{n}\) by the isomorphism:
\[Exp_{B,n}\colon\mathbb{F}_{q^{m}}^{n} \rightarrow\mathbb{F}_{q}^{mn}\] \[(c_{1},c_{2},...,c_{n}) \mapsto(\phi_{B}(c_{1}),\phi_{B}(c_{2}),...,\phi_{B}(c_{n})),\]
Applied to all codewords, this gives a new linear code of length \(nm\), called the expanded code.
For the definition of generator matrices of subspace subcodes, we need to generalize the expansion function to several bases simultaneously. Given \(n\) bases \((B_{1},...,B_{n})\) of \(F_{q^{m}}\):
\[Exp_{(B_{i})_{i},n}\colon\mathbb{F}_{q^{m}}^{n} \rightarrow\mathbb{F}_{q}^{mn}\] \[(c_{1},c_{2},...,c_{n}) \mapsto(\phi_{B_{1}}(c_{1}),\phi_{B_{2}}(c_{2}),...,\phi_{B_{n}} (c_{n})),\]
**Definition 7**.: _Let \(C\) be a linear code of length \(n\) and dimension \(k\) over \(F_{q^{m}}\). The expanded code of \(C\) also called (\(q\)-ary image of a code \(C\)) with respect to the \(n\) bases \((B_{1},...,B_{n})\) of \(F_{q^{m}}\) is a linear code over the base field \(\mathbb{F}_{q}\) defined as:_
\[Im_{q,(B_{i})_{i}}(C)=Exp_{(B_{i})_{i},n}(C)\]
**Lemma 1** ([19], Theorem 1).:
1. _Let C having a generator matrix_ \[G=\begin{pmatrix}g_{1}\\ g_{2}\\ \vdots\\ g_{k}\end{pmatrix}\in(\mathbb{F}_{q^{m}}^{n})^{k}.\] _Then the expanded code of C over_ \(F_{q}\) _with respect to a basis_ \(B=\{b_{1},b_{2},...,b_{m}\}\) _of_ \(\mathbb{F}_{q^{m}}\) _has the expanded generator matrix_ \(\hat{G}_{B}\) _such that_ \[\hat{G}_{B}=ExpMat_{B,n}(G)=\begin{pmatrix}Exp_{B,n}(b_{1}g_{i})\\ Exp_{B,n}(b_{2}g_{i})\\ \vdots\\ Exp_{B,n}(b_{m}g_{i})\end{pmatrix}_{1\leq i\leq k}\in(\mathbb{F}_{q}^{mn})^{mk}.\]
2. _Let C having a parity-check matrix_ \[H=\begin{pmatrix}h_{1}\\ h_{2}\\ \vdots\\ h_{n}\end{pmatrix}^{T}\in(\mathbb{F}_{q^{m}}^{n})^{k}.\] _Then the expanded code of C over_ \(F_{q}\) _with respect to a basis_ \(B=\{b_{1},b_{2},...,b_{m}\}\) _of_ \(\mathbb{F}_{q^{m}}\) _has the expanded parity check matrix_ \(\hat{H}_{B}\) _such that_ \[\hat{H}_{B}=ExpMat_{B,n-k}(H^{T})^{T}=\begin{pmatrix}Exp_{B,n-k}(b_{1}h_{j})\\ Exp_{B,n-k}(b_{2}h_{j})\\ \vdots\\ Exp_{B,n-k}(b_{m}h_{j})\end{pmatrix}^{T}_{1\leq j\leq n}\in(\mathbb{F}_{q}^{ mn})^{m(n-k)}.\]
In the general case, we need to process codewords whose n components lie in different subspaces, and therefore to handle different bases. In practice, a single basis is sufficient to expand the messages in \(\mathbb{F}_{q^{m}}^{k}\) to be encoded.
**Proposition 3**.:
1. _Let C having a generator matrix_ \[G=\begin{pmatrix}g_{1}\\ g_{2}\\ \vdots\\ g_{k}\end{pmatrix}\in(\mathbb{F}_{q^{m}}^{n})^{k}.\] _Then the expanded code of C over_ \(F_{q}\) _from the basis_ \(B=\{b_{1},b_{2},...,b_{m}\}\) _with respect to the_ \(n\) _bases_ \((B_{1},...,B_{n})\) _of_ \(F_{q^{m}}\) _has the expanded generator matrix_ \(\hat{G}_{(B_{j})_{j}}^{B}\) _such that_ \[\hat{G}_{(B_{j})_{j}}^{B}=ExpMat_{(B_{j})_{j},n}^{B}(G)=\begin{pmatrix}Exp_{(B_ {j})_{j},n}(b_{1}g_{i})\\ Exp_{(B_{j})_{j},n}(b_{2}g_{i})\\ \vdots\\ Exp_{(B_{j})_{j},n}(b_{m}g_{i})\end{pmatrix}_{1\leq i\leq k}\in(\mathbb{F}_{q}^ {mn})^{mk}.\]
2. _Let C having a parity-check matrix_ \[H=\begin{pmatrix}h_{1}\\ h_{2}\\ \vdots\\ h_{n}\end{pmatrix}^{T}\in(\mathbb{F}_{q^{m}}^{n})^{k}.\] _Then the expanded code of C over_ \(F_{q}\) _from the basis_ \(B=\{b_{1},b_{2},...,b_{m}\}\) _with respect to the_ \(n\) _bases_ \((B_{1},...,B_{n})\) _of_ \(F_{q^{m}}\) _has the expanded parity check_
_matrix_ \(\hat{H}_{(B_{j})_{j}}^{B}\) _such that_
\[\hat{H}_{(B_{j})_{j}}^{B}=ExpMat_{(B_{j})_{j},n-k}(H^{T})^{T}=\begin{pmatrix}Exp_{ (B_{j})_{j},n-k}(b_{1}h_{i})\\ Exp_{(B_{j})_{j},n-k}(b_{2}h_{i})\\ \vdots\\ Exp_{(B_{j})_{j},n-k}(b_{m}h_{i})\end{pmatrix}_{1\leq i\leq n}^{T}\in(\mathbb{F }_{q}^{mn})^{m(n-k)}.\]
**Proposition 4**.: _The q-ary image of a code \(C\) from the basis \(B\) with respect to the \(n\) bases \((B_{1},...,B_{n})\) of \(F_{q^{m}}\), \(Im_{q,(B_{i})_{i}}(C)\) has generator matrix the block matrix \((M_{g_{i,j}}^{T})_{1\leq i\leq k,1\leq j\leq n}\in(\mathbb{F}_{q}^{m\times m})^{ k\times n}\) where the matrix \(M_{g_{i,j}}=\mathcal{M}_{B,B_{j}}(f_{g_{i,j}})\) corresponds to \(f_{g_{i,j}}(x)=g_{i,j}x\)._
**Definition 8**.: _Let, for \(1\leq j\leq n\), \(u_{j}=\{u_{1,j},...,u_{s_{j},j}\}\subset\{(j-1)m+1,(j-1)m+2,...,(j-1)m+m\}\) and \(U=\{u_{1},...,u_{n}\}\). We denote by \(S_{U}\) (respectively \(P_{u}\)) the operation of shortening (respectively puncturing) the q-ary image of C on positions \(I_{U}=\{1,2,...,nm\}\setminus U\), i.e. \(S_{U}(C)=S_{I_{U}}(Im_{q,(B_{i})_{i}}(C))\) and \(P_{U}(C)=P_{I_{U}}(Im_{q,(B_{i})_{i}}(C))\)._
**Proposition 5**.: _Let \(C\) be a linear code of length \(n\) over \(\mathbb{F}_{q^{m}}\), \(B=\{b_{1},b_{2},...,b_{m}\}\) be a basis of \(\mathbb{F}_{q^{m}}\) and \(V=<b_{1},...,b_{s}>\) an s-dimensional \(\mathbb{F}_{q}\)-subspaces of \(\mathbb{F}_{q^{m}}\) and \(U=\{u_{1},...,u_{n}\}\) such that for \(1\leq j\leq n\), \(u_{j}=\{(j-1)m+1,...,(j-1)m+s\}\). The code \(S_{U}(C)\) is \(q\)-ary image of the subspace subcode of \(C\) over \(V\) with respect to the basis \(B\)._
Proof.: Let \(y=(y_{1,1},...,y_{s,1},...,y_{1,j},...,y_{s,j},...,y_{1,n},...,y_{s,n})\in S_{U}(C)\); then there exists \(c\in Im_{q}(C)\) such that \(y=P_{I_{U}}(c)\) with components \(c_{i}=0\) for \(i\notin U\).
As \(V=\phi_{B,n}^{-1}(\{(x_{1},...,x_{s},0,...,0)|x_{1},...,x_{s}\in\mathbb{F}_{q}\})\), we have \(\phi_{B,n}^{-1}(c)\in C\cap V^{n}\), so \(c\in\phi_{B,n}(C\cap V^{n})\) and thus \(y\in P_{I_{U}}(\phi_{B,n}(C\cap V^{n}))\), which means
\(S_{U}(C)=\phi_{B,n}(C\cap V^{n})=Im_{q,B}(C\cap V^{n})\)
**Lemma 2** ([4], lemma 34).: _Let \(C\) be a linear code of length \(n\) over \(\mathbb{F}_{q^{m}}\), \(B=\{b_{1},b_{2},...,b_{m}\}\) be a basis of \(\mathbb{F}_{q^{m}}\). Let \((Q_{j})_{j}\in(GL_{m}(\mathbb{F}_{q}))^{n}\). The following equality holds._
\[Im_{q,(Q_{j}^{-1}B)_{j}}(C)=Exp_{(Q_{j}^{-1}B)_{j},n}(C)=Exp_{B,n}(C)\cdot \begin{pmatrix}Q_{1}&&\\ &\ddots&\\ &&Q_{n}\end{pmatrix},\]
\[\hat{G}_{(B_{j})_{j}}^{B}=\hat{G}_{B}\cdot\begin{pmatrix}Q_{1}&&\\ &\ddots&\\ &&Q_{n}\end{pmatrix},and\]
\[\hat{H}_{(B_{j})_{j}}^{B}=\hat{H}_{B}\cdot\begin{pmatrix}(Q_{1}^{-1})^{T}&&\\ &&\ddots&\\ &&(Q_{n}^{-1})^{T}\end{pmatrix}\]
**Corollary 1**.: _Let \(C\) be a linear code of length \(n\) over \(\mathbb{F}_{q^{m}}\), \(B=\{b_{1},b_{2},...,b_{m}\}\) be a basis of \(\mathbb{F}_{q^{m}}\) and \(V=<d_{1},...,d_{s}>\) an s-dimensional \(\mathbb{F}_{q}\)-subspaces of \(\mathbb{F}_{q^{m}}\). The \(q\)-ary image of the subspace subcode \((C|V)=C\cap V^{n}\) can be expressed as \(S_{U}(C)\) on any completed basis \(D=\{d_{1},d_{2},...,d_{s},...,d_{m}\}\) of \(\mathbb{F}_{q^{m}}\)_
Proof.: According to Proposition 5 applied with the completed basis \(D=\{d_{1},d_{2},...,d_{s},d_{s+1},...,d_{m}\}\), we have \(S_{U}(C)=\phi_{D,n}(C\cap V^{n})\).
As \(QD=B\), with \(Q\) being the change-of-basis matrix from the basis \(B\) to the basis
\(D\), then \(S_{U}(C)=\phi_{Q^{-1}B,n}(C\cap V^{n})=Exp_{B,n}(C\cap V^{n})\cdot\begin{pmatrix} Q&\\ &\ddots\\ &&Q\end{pmatrix}\)
**Corollary 2**.: _Let \(C\) be a linear code of length \(n\) over \(\mathbb{F}_{q^{m}}\), \(B\) be a basis of \(\mathbb{F}_{q^{m}}\) and \(V_{i}=<d_{1,i},...,d_{s_{i},i}>s_{i}\)-dimensional \(\mathbb{F}_{q}\)-subspaces of \(\mathbb{F}_{q^{m}}\). The \(q\)-ary image of \(RGSS_{W}(C)=C\cap W\), where \(W=\prod\limits_{i=1}^{n}V_{i}\), can be expressed as \(S_{U}(C)\) from any basis \(B=\{b_{1},b_{2},...,b_{m}\}\) with respect to the \(n\) completed bases \((B_{1},...,B_{n})\) from those of \(V_{i}\) respectively._
By using the relation between shortened and punctured codes, one can build the generator matrix of a subspace subcode. A generator matrix of \(Exp_{(Q_{j}^{-1}B)_{j},n}(C\cap V^{n})\) can be computed from its parity-check matrix, defined by \(P_{u}(\hat{H}_{(Q_{j}^{-1}B)_{j}})\) according to Theorem 1.
**Proposition 6**.: _Let \(\mathcal{C}\) be a matrix code of length \(m\times n\) over \(\mathbb{F}_{q}\) of parameters \([n,k,d=d_{R}(C)]_{q^{m}}\). Let \(V_{1},...,V_{n}\) be a set of \(n\) \(\mathbb{F}_{q}\)-subspaces of \(\mathbb{F}_{q^{m}}\) of dimensions \(s_{i}\leq m\), respectively. Set \(W=\prod\limits_{i=1}^{n}V_{i}\), consisting of n-tuples with the \(i\)-th coordinate in \(V_{i}\). If \(\sum\limits_{i=1}^{n}s_{i}-m(n-k)>0\) then:_
\[q^{\sum\limits_{i=1}^{n}s_{i}-m(n-k)}\leq|RGSS_{W}(C)|\]
Proof.: Without loss of generality, we can assume that \(s_{1}\leq s_{2}\leq...\leq s_{n}\), so that up to isomorphism \(V_{1}\subset...\subset V_{n}\), even if it means permuting the columns and obtaining an isometrically equivalent code. So there exists a family of bases of \(V_{1},...,V_{n}\) respectively, called \(b_{1},b_{2},....b_{n}\), such that \(b_{1}\subset...\subset b_{n}\) and \(b_{i}=\{\beta_{1},\beta_{2},...,\beta_{s_{i}}\}\). Let \(c=(c_{1},...,c_{n})\in W\); then
\(c_{i}=u_{1i}\beta_{1}+...+u_{s_{i}i}\beta_{s_{i}}+0\beta_{s_{i}+1}+...+0\beta_{s_{n}}\), which translates as:
\(c=b_{n}U=(\beta_{1},\beta_{2},...,\beta_{s_{n}})U\) with \(U=(u_{i,t})_{i=1,t=1}^{s_{n},n}\in\mathbb{F}_{q}^{s_{n}\times n}\). Let \(H=(h_{j,t})_{j=1,t=1}^{n-k,n}\in\mathbb{F}_{q^{m}}^{(n-k)\times n}\) be a parity-check matrix of \(C\).
\[c\in RGSS_{W}(C)\Leftrightarrow\begin{cases}c=(\beta_{1},\beta_{2},...,\beta_ {s_{n}})U\\ (\beta_{1},\beta_{2},...,\beta_{s_{n}})UH^{T}=0\end{cases}.\]
Let \((\gamma_{1},...,\gamma_{m})\) be a basis of \(\mathbb{F}_{q^{m}}\) over \(\mathbb{F}_{q}\).
\(\forall\)\(i,j,t\) we have \(\beta_{i}h_{j,t}=\sum\limits_{k=1}^{m}\delta_{i,t}^{(j,k)}\gamma_{k}\) where \(\delta_{i,t}^{(j,k)}\in\mathbb{F}_{q}\). Then we have
\(\forall j=1,...,n-k,\ \forall k=1,...,m,\quad\sum\limits_{t=1}^{n}\sum\limits_{i=1}^{s_{t}}\delta_{i,t}^{(j,k)}u_{i,t}=0\)
It is a linear system in \(\sum\limits_{i=1}^{n}s_{i}\) unknowns with \(m(n-k)\) equations, so the solution space has dimension at least \(\sum\limits_{i=1}^{n}s_{i}-m(n-k)\). Therefore
\(q^{\sum\limits_{i=1}^{n}s_{i}-m(n-k)}\leq|RGSS_{W}(C)|\)
This leads to the result obtained in [8] as a special case, since \(s_{1}=...=s_{n}\).
**Corollary 3**.: _[_8_]_
_Let \(\mathcal{C}\) be matrix code of length \(m\times n\) over \(\mathbb{F}_{q}\) of parameters \([n,k,d=d_{R}(C)]_{q^{m}}\). Let \(V\) be a \(\mathbb{F}_{q}\)-subspace of \(\mathbb{F}_{q^{m}}\) of dimension \(s\leq m\). If \(ns-m(n-k)>0\) then:_
\(q^{ns-m(n-k)}\leq|C\cap V^{n}|\)__
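As a quick illustration of this bound (the parameter values are ours, not from [8]): for \(q=2\), \(m=6\), \(n=6\), \(k=4\) and \(s=5\), one gets
\[ns-m(n-k)=30-12=18>0,\]
so the corollary guarantees \(|C\cap V^{n}|\geq 2^{18}\).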
## 4 Generalized Subspace subcodes of Gabidulin codes
Gabidulin codes [7], a special case of matrix codes, are defined by linearized polynomials, first studied by O. Ore in [16]. These polynomials are of the form \(P(z)=\sum_{i}p_{i}z^{q^{i}}\), where \(p_{i}\in\mathbb{F}_{q^{m}}\) and \(p_{i}\neq 0\) for finitely many \(i\). By convention, we write \([i]:=q^{i}\), so \(P(z)\) becomes \(P(z)=\sum_{i}p_{i}z^{[i]}\). If \(P\neq 0\), its q-degree is \(deg_{q}(P)=max(\{i:p_{i}\neq 0\})\). Considering addition and multiplication by a scalar, we notice that q-polynomials are \(\mathbb{F}_{q}\)-linear maps from \(\mathbb{F}_{q^{m}}\) to \(\mathbb{F}_{q^{m}}\). The addition and composition of q-polynomials give this set a non-commutative ring structure, denoted \(\mathcal{P}_{q^{m}}\).
**Definition 9**.: _Let \(g=(g_{1},g_{2},...,g_{n})\in\mathbb{F}_{q^{m}}^{n}\) be an ordered set of \(n\leq m\) elements of \(\mathbb{F}_{q^{m}}\) which are linearly independent over \(\mathbb{F}_{q}\), and let \(k<n\). The corresponding \([n,k]\) Gabidulin code of support \(g\), dimension \(k\) and length \(n\) is defined by_
\(Gab_{k}(g)=\{(P(g_{1}),...,P(g_{n}))\in\mathbb{F}_{q^{m}}^{n}:P\in\mathcal{P}_{q^{m}},\ deg_{q}(P)<k\}\)
A Gabidulin code \(Gab_{k}(g)\) is an \([n,k]\) code with generator matrix \(G=(g_{j}^{[i-1]})\in\mathbb{F}_{q^{m}}^{k\times n}\), \(1\leq i\leq k,1\leq j\leq n\), where the generating elements \(g_{j}\in\mathbb{F}_{q^{m}}\) have to be linearly independent over \(\mathbb{F}_{q}\). Gabidulin codes attain the Singleton bound with rank distance \(d_{R}=n-k+1\); thus, they can correct up to \(\tau_{max}=\lfloor\frac{n-k}{2}\rfloor\) errors. The Gabidulin code \(Gab_{k}(g)\) of support \(g\) and dimension \(k\) is
generated by the matrix:
\[G=\begin{pmatrix}g_{1}^{[0]}&\cdots&g_{n}^{[0]}\\ g_{1}^{[1]}&\cdots&g_{n}^{[1]}\\ \vdots&\cdots&\vdots\\ g_{1}^{[k-1]}&\cdots&g_{n}^{[k-1]}\end{pmatrix}\]
This definition is close to that of Reed-Solomon codes: the set of distinct evaluation points is replaced by a set of linearly independent elements, and the classical power \(g_{j}^{i}\) is replaced by the "Frobenius power" \(g_{j}^{[i]}\).
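As an illustration of Definition 9, the following minimal Python sketch builds the generator matrix \(G=(g_{j}^{[i-1]})\) for a toy Gabidulin code over \(\mathbb{F}_{2^{4}}\); the integer encoding of field elements and the choice of modulus \(x^{4}+x+1\) are our assumptions for the example:

```python
def gf_mul(a: int, b: int, m: int, poly: int) -> int:
    """Carry-less multiplication in GF(2^m) modulo the irreducible `poly`
    (encoded with bit m set, e.g. 0b10011 for x^4 + x + 1)."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a >> m:          # degree reached m: reduce once
            a ^= poly
        b >>= 1
    return r

def frob(a: int, i: int, m: int, poly: int) -> int:
    """Frobenius power a^[i] = a^(2^i), computed by i squarings."""
    for _ in range(i):
        a = gf_mul(a, a, m, poly)
    return a

def gabidulin_generator(g: list[int], k: int, m: int, poly: int):
    """Generator matrix G[i][j] = g_j^[i], 0 <= i < k (Definition 9)."""
    return [[frob(gj, i, m, poly) for gj in g] for i in range(k)]

# Toy code in GF(2^4): the support (1, x, x^2) is linearly independent over GF(2).
G = gabidulin_generator([0b0001, 0b0010, 0b0100], k=2, m=4, poly=0b10011)
print(G)  # [[1, 2, 4], [1, 4, 3]] since x^4 = x + 1 here
```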
It is well known that a parity-check matrix of an \([n,k,d]\) Gabidulin code is of the form:
\[H=\begin{pmatrix}h_{1}^{[0]}&\cdots&h_{n}^{[0]}\\ h_{1}^{[1]}&\cdots&h_{n}^{[1]}\\ \vdots&\cdots&\vdots\\ h_{1}^{[d-2]}&\cdots&h_{n}^{[d-2]}\end{pmatrix}\]
### Generalized Subspace subcodes of Gabidulin codes
**Proposition 7**.: _Let \(Gab_{k}(g)\) be a Gabidulin code of length \(n\) over \(\mathbb{F}_{q^{m}}\) of parameters \([n,k,d]_{q^{m}}\). Let \(V_{1},...,V_{n}\) be a set of \(n\) \(\mathbb{F}_{q}\)-subspaces of \(\mathbb{F}_{q^{m}}\) of dimensions \(s_{i}\leq m\), respectively. Set \(W=\prod\limits_{i=1}^{n}V_{i}\), consisting of n-tuples with the \(i\)-th coordinate in \(V_{i}\). If \(\sum\limits_{i=1}^{n}s_{i}-m(n-k)>0\) then:_
\[q^{\sum\limits_{i=1}^{n}s_{i}-m(n-k)}\leq|RGSS_{W}(Gab_{k}(g))|\leq q^{m(\max\limits_{1\leq i\leq n}(s_{i})-d+1)}\]
Proof.: According to the previous proposition we have
\[c\in RGSS_{W}(C)\Leftrightarrow\begin{cases}c=(\beta_{1},\beta_{2},\cdots, \beta_{s_{n}})U\\ (\beta_{1},\beta_{2},\cdots,\beta_{s_{n}})UH^{T}=0\end{cases}\]
where \( H=\begin{pmatrix}h_{1}^{[0]}&\cdots&h_{n}^{[0]}\\ h_{1}^{[1]}&\cdots&h_{n}^{[1]}\\ \vdots&\cdots&\vdots\\ h_{1}^{[d-2]}&\cdots&h_{n}^{[d-2]}\end{pmatrix}\).
By setting \(L=UH^{T}\), we have \( L_{i,j}=\sum\limits_{l=1}^{n}u_{i,l}h_{l}^{[j-1]}=(\sum\limits_{l=1}^{n}u_{i, l}h_{l})^{[j-1]}=L_{i,1}^{[j-1]}.\)
This means, for \(1\leq j\leq d-1\), \(0=b_{n}UH^{T}=(\beta_{1},\beta_{2},\cdots,\beta_{s_{n}})L\), where \(w_{i}=L_{i,1}\). Hence
\[0=\beta_{1}w_{1}^{[j-1]}+\beta_{2}w_{2}^{[j-1]}+\cdots+\beta_{s_{n}}w_{s_{n}}^{[j-1]}=(\beta_{1}^{[m-j+1]}w_{1}+\beta_{2}^{[m-j+1]}w_{2}+\cdots+\beta_{s_{n}}^{[m-j+1]}w_{s_{n}})^{[j-1]},\]
so that
\[0=\beta_{1}^{[m-j+1]}w_{1}+\beta_{2}^{[m-j+1]}w_{2}+\cdots+\beta_{s_{n}}^{[m-j+1]}w_{s_{n}}=(w_{1},\cdots,w_{s_{n}})\begin{pmatrix}\beta_{1}^{[m-j+1]}\\ \beta_{2}^{[m-j+1]}\\ \vdots\\ \beta_{s_{n}}^{[m-j+1]}\end{pmatrix},\]
which leads to the result
\[(w_{1},...,w_{s_{n}})\begin{pmatrix}\beta_{1}^{[m]}&\beta_{1}^{[m-1]}&\cdots& \beta_{1}^{[m-d+2]}\\ \beta_{2}^{[m]}&\beta_{2}^{[m-1]}&\cdots&\beta_{2}^{[m-d+2]}\\ \vdots&\vdots&\cdots&\vdots\\ \beta_{s_{n}}^{[m]}&\beta_{s_{n}}^{[m-1]}&\cdots&\beta_{s_{n}}^{[m-d+2]}\end{pmatrix} =0\]
Let \(\mathcal{B}_{W}\) be the Gabidulin code having parity-check matrix \(T=(\beta_{j}^{[m-i+1]})_{i=1,j=1}^{d-1,s_{n}}\) then \(c=b_{n}U\in RGSS_{W}(Gab_{k}(g))\Rightarrow(w_{1},\cdots,w_{s_{n}})\in \mathcal{B}_{W}\Rightarrow|RGSS_{W}(Gab_{k}(g))|\leq|\mathcal{B}_{W}|=q^{m(s_ {n}-d+1)}=q^{m(\max_{i}(s_{i})-d+1)}\).
Combining this with Proposition 6 gives
\[q^{\sum\limits_{i=1}^{n}s_{i}-m(n-k)}\leq|RGSS_{W}(Gab_{k}(g))|\leq q^{m(\max \limits_{1\leq i\leq n}(s_{i})-d+1)}\]
**Corollary 4**.: _[_8_]_
_Let \(Gab_{k}(g)\) be a Gabidulin code of length \(n\) over \(\mathbb{F}_{q^{m}}\) of parameters \([n,k,d]_{q^{m}}\). Let \(V\) be an \(\mathbb{F}_{q}\)-subspaces of \(\mathbb{F}_{q^{m}}\) of dimension \(s\leq m\). If \(ns-m(n-k)>0\) then:_
\[q^{ns-m(n-k)}\leq|Gab_{k}(g)_{|V}|\leq q^{m(s-d+1)}\]
**Proposition 8**.: _The q-ary image of a Gabidulin code \(Gab_{k}(g)\) of length \(n\) over \(\mathbb{F}_{q^{m}}\) of parameters \([n,k,d]_{q^{m}}\) from the basis \(B\) with respect to the \(n\) bases \((B_{1},...,B_{n})\) of \(\mathbb{F}_{q^{m}}\), \(Im_{q,(B_{i})_{i}}(Gab_{k}(g))\) has generator matrix the block matrix \(((M_{g_{j}}^{q^{i-1}})^{T})_{1\leq i\leq k,1\leq j\leq n}\in(\mathbb{F}_{q}^{m \times m})^{k\times n}\) where the matrix \(M_{g_{j}}=\mathcal{M}_{B,B_{j}}(f_{g_{j}})\) corresponds to \(f_{g_{j}}(x)=g_{j}x\)._
## 5 Parent codes of Gabidulin RGSS codes
For decoding purposes, it is possible to use a code linked to the subcode by an injective morphism which preserves the rank. In the following we give the construction of these codes, called parent codes, introduced in [6].
**Definition 10**.: _(Parent Code) The linear code over \(\mathbb{F}_{q^{m}}\) defined above \(\mathcal{B}_{W}\), with parity-check matrix \(T=(\beta_{j}^{[m-i+1]})_{i=1,j=1}^{d-1,s_{n}}\) is called the parent code of \(RGSS_{W}(Gab_{k}(g))\). It is denoted by \(P_{RGSS_{W}(Gab_{k}(g))}\)._
**Proposition 9**.: _The parent code of \(RGSS_{W}(Gab_{k}(g))\) coincides with the parent code of \(Gab_{k}(g)_{|V_{\max(s_{i})}}\) up to isomorphism._
**Proposition 10**.: _Let \(Gab_{k}(g)\) be a Gabidulin code of length \(n\) over \(\mathbb{F}_{q^{m}}\) of parameters \([n,k,d]_{q^{m}}\) with a parity-check matrix admitting \(h=(h_{1},h_{2},...,h_{n})\) as first row. Let \(V_{1},...,V_{n}\) be a set of n \(\mathbb{F}_{q}\)-subspaces of \(\mathbb{F}_{q^{m}}\) of dimensions \(s_{i}\leq m\), respectively, and let \(b_{i}=\{\beta_{1},\beta_{2},...,\beta_{s_{i}}\}\) be a basis of \(V_{i}\) for \(1\leq i\leq n\), such that the bases form an inclusion chain, with \(b\) the maximal basis up to isomorphism. Then the map \(f_{b}:W\rightarrow(\mathbb{F}_{q^{m}})^{\max(s_{i})}\), \(c=bU\mapsto f_{b}(c)=hU^{T}\), is a rank-preserving injective \(\mathbb{F}_{q}\)-linear map and \(f_{b}(RGSS_{W}(Gab_{k}(g)))\subset P_{RGSS_{W}(Gab_{k}(g))}\)_
Proof.: Since \(h=(h_{1},h_{2},...,h_{n})\) are linearly independent over \(\mathbb{F}_{q}\), we have \(ker(f_{b})=\{0\}\), so \(f_{b}\) is injective, and \(Rk(bU)=Rk(U)=Rk(hU^{T})\) for any \(U\in\mathbb{F}_{q}^{\max(s_{i})\times n}\). By construction, we have \(f_{b}(RGSS_{W}(Gab_{k}(g)))\subset P_{RGSS_{W}(Gab_{k}(g))}\).
We can therefore note that the minimum rank distance of \(RGSS_{W}(Gab_{k}(g))\) is greater than or equal to that of \(P_{RGSS_{W}(Gab_{k}(g))}\). |
2301.04341 | A Meta Path-based Approach for Rumor Detection on Social Media | The prominent role of social media in people's daily lives has made them more
inclined to receive news through social networks than traditional sources. This
shift in public behavior has opened doors for some to diffuse fake news on
social media; and subsequently cause negative economic, political, and social
consequences as well as distrust among the public.
There are many proposed methods to solve the rumor detection problem, most of
which do not take full advantage of the heterogeneous nature of news
propagation networks. With this intention, we considered a previously proposed
architecture as our baseline and performed the idea of structural feature
extraction from the heterogeneous rumor propagation over its architecture using
the concept of meta path-based embeddings. We named our model Meta Path-based
Global Local Attention Network (MGLAN). Extensive experimental analysis on
three state-of-the-art datasets has demonstrated that MGLAN outperforms other
models by capturing node-level discrimination to different node types. | Bita Azarijoo, Mostafa Salehi, Shaghayegh Najari | 2023-01-11T07:31:47Z | http://arxiv.org/abs/2301.04341v2 | # A Meta Path-based Approach for Rumor Detection on Social Media
###### Abstract
The prominent role of social media in people's daily lives has made them more inclined to receive news through social networks than traditional sources. This shift in public behavior has opened doors for some to diffuse fake news on social media; and subsequently cause negative economic, political, and social consequences as well as distrust among the public.
There are many proposed methods to solve the rumor detection problem, most of which do not take full advantage of the heterogeneous nature of news propagation networks. With this intention, we considered a previously proposed architecture as our baseline and performed the idea of structural feature extraction from the heterogeneous rumor propagation over its architecture using the concept of meta path-based embeddings. We named our model Meta Path-based Global Local Attention Network (MGLAN). Extensive experimental analysis on three state-of-the-art datasets has demonstrated that MGLAN outperforms other models by capturing node-level discrimination to different node types.
Rumor Detection, Heterogeneous Network, Meta Path, Deep Learning, Social Network
## I Introduction
Nowadays, interaction with social networks has become an inseparable part of people's lives due to their ease of use and the fast dissemination of information on a global scale. In this regard, in 2012 only 45% of people used social media to access news, whereas this number jumped to 65% in 2016 [1]. The 2020 Covid-19 pandemic also caused a 30% growth in daily Twitter usage [2]. Unfortunately, this rapid increase in the use of social media has provided an opportunity for various users to spread fake news with serious individual, economic, and political repercussions [3]. For example, in 2013 the Associated Press (AP) account on Twitter was hacked, and a piece of news was published claiming that an explosion had occurred in the White House and Barack Obama was injured. Although the publishing account discredited this rumor within seconds, it spread through Twitter and caused stock values to fall by 130 billion dollars [1]. Consequently, studying fake news and preventing its dissemination as soon as possible remains an active and open research area.
To avoid the detrimental effects of fake news, there are websites like Snopes1, GossipCop2, and Politifact3, but they cannot detect fake news automatically in its early stages of propagation and rely on manual user intervention for fact-checking. As a consequence, the detection of fake news can be a time-consuming process. Thus, to solve this issue under a real scenario, various machine learning-based methods have been proposed, many of which depend on analyzing text to extract the language styles of fake news; but text in social media is short, so we face a data sparsity problem. Other methods like CSI [4] need many user responses, which is time-consuming; CSI also models the propagation path of retweets as a sequence. Other models like GLAN [5] model both local and global relations of news and users but fail to capture the intrinsic structural differences among node types when manually extracted node (user, tweet, etc.) features are unavailable. Our research question is whether, when the metadata of users and tweets is unavailable, we can still extract structural features that account for the difference in node types. The contributions of our work are:
Footnote 1: [https://www.snopes.com](https://www.snopes.com)
Footnote 2: [https://www.gossipcop.com](https://www.gossipcop.com)
Footnote 3: [https://www.politifact.com](https://www.politifact.com)
* We select GLAN as our baseline model and aim to capture meaningful structural embeddings using the concepts of meta paths in heterogeneous news propagation networks.
* Our experiments on three real-world datasets demonstrate improvements over previous works in terms of accuracy and F1 score.
For the rest of the paper: in Section II we overview related work on the rumor detection problem; in Section III we define preliminaries and formulate our problem; in Section IV we describe all components of the baseline paper as well as our idea for improving upon it; in Section V we evaluate our method on three real-world datasets; finally, in Section VI we propose future work applicable to our approach.
## II Related Work
The previous models can be divided into three categories according to their approach:
* **Content-based**: Extracting rich information from texts to learn the specific writing styles that rumors inherently have is essential for their detection [6]. Text feature extraction can be done in supervised manners, e.g., TF-IDF and n-grams, or in unsupervised forms, e.g., embeddings from Word2Vec [7], LSTM [8], GRU [9], Transformers [10], and BERT [11]. These strategies cannot be relied upon alone, as text in social media usually has a short length; thus, they fail to capture the syntactic and semantic information needed to detect whether a piece of news is fake.
* **User-based**: Manually scraped user features such as gender, age, nationality, and number of followers or followees are beneficial for rumor detection [12]. [13] first used them to determine the credibility of information on Twitter. However, obtaining them is challenging, as some social networks, like Twitter, enforce restrictive policies on accessing users' profiles or making them publicly available. Moreover, in terms of network representation, nodes are interconnected through edge interactions, and their feature vectors are not independent and identically distributed [14].
* **Structure-based**: Leveraging the inherent structure of rumor propagation in social networks is another way to help rumor detection. In [15], the propagation path of news was modeled as a multivariate time series. Propagation can also be captured in a graph setting, considering the structural and semantic features of the news interaction network. In this setting, _Graph Neural Networks (GNNs)_ have shown a significant role in node-level, link-level, and graph-level prediction tasks. [16] considered a top-down and bottom-up approach to capture both the propagation and the dispersion of tweets using _Graph Convolutional Networks (GCNs)_ [17]. [18] proposed a deep hybrid model based on the propagation and stance networks of fake news and used _node2vec_ [19] for capturing structural propagation features on the FakeNewsNet dataset [20]. GLAN [5] offered a hybrid model using _Graph Attention Networks (GATs)_ [21] to capture node-level representations of users and tweets.
Based on the above, GLAN offers a stable model that captures all three aspects but fails to assign initial features to nodes based on the difference in their types and their interconnectivity. When no initial features of users and tweets are accessible, its implementation generates node features from a normal distribution. This choice would be justified if nodes and relations were independent, so that initial feature generation from a normal distribution sufficed. In reality, however, heterogeneous networks are scale-free in nature, and their nodes and relations are not independent. This fact motivated us to modify part of GLAN's architecture and use MetaPath2Vec [22] to extract features for tweets and users in the propagation graph while discriminating among their node types. This modification improves the performance of rumor detection when applied to three state-of-the-art datasets.
## III Preliminaries and Problem Formulation
In this section, we provide some basic definitions and then formulate the rumor detection problem of this paper.
**Definition 1**: Heterogeneous Network [23]_. A heterogeneous network is defined as a graph \(G=(V,\mathcal{E})\) with a node type mapping function \(\varphi:V\rightarrow\mathcal{A}\) and an edge type mapping function \(\psi:\mathcal{E}\rightarrow\mathcal{R}\), where \(\mathcal{A}\) and \(\mathcal{R}\) denote the sets of node types and edge types, respectively, such that \(\|\mathcal{A}\|+\|\mathcal{R}\|>2\)._

Fig. 1: The overall proposed architecture. The main difference between MGLAN and GLAN is that GLAN assigns node features from a normal distribution when it does not have access to manually extracted features of users and tweets, whereas MGLAN uses the output of MetaPath2Vec in each epoch as learned features of these node types.
**Definition 2**: Network Schema [24]_. A network schema is a meta template for a heterogeneous network \(G=(V,\mathcal{E})\), with a node type mapping function \(\varphi:V\rightarrow\mathcal{A}\) and an edge type mapping function \(\psi:\mathcal{E}\rightarrow\mathcal{R}\) defined over object types \(\mathcal{A}\) denoted as \(T_{G}=(\mathcal{A},\mathcal{R})\).
**Definition 3**: Meta Path [24]_. A meta path \(P\) is a path on a heterogeneous network with network schema \(T_{G}=(\mathcal{A},\mathcal{R})\) of the form \(A_{1}\xrightarrow{R_{1}}A_{2}\xrightarrow{R_{2}}...\xrightarrow{R_{l}}A_{l+1}\), where \(R=R_{1}\circ R_{2}\circ...\circ R_{l}\) is a compound relation from \(A_{1}\) to \(A_{l+1}\).
The formulation of our problem is quite similar to GLAN's. Let \(\mathcal{M}=\{m_{1},m_{2},...,m_{|\mathcal{M}|}\}\) be the set of source news, where each source news item has a total of \(n\) retweets and replies denoted as \(\mathcal{R}=\{r_{1},r_{2},...,r_{n}\}\). We define the neighbors of a source news item as \(\mathcal{N}(m_{i})=\{r_{1},r_{2},...,r_{|\mathcal{N}(m_{i})|}\}\). There are separate groups of global and local neighbors: we define the replies of a source news item as its local neighbors and its retweets as its global neighbors. The reason for this terminology is that the replies of a tweet are independent of the replies of other tweets, so they are categorized as local neighbors, whereas retweets diffuse through the whole network. We also define the social media users as \(\mathcal{U}=\{u_{1},u_{2},...,u_{|\mathcal{U}|}\}\). Our objective is to learn a model \(p(c=1|m_{i},\mathcal{N}(m_{i}),\mathcal{U};\theta)\) that takes a tweet and its neighbors as input, where \(c\) is the output specifying the class to which the source news belongs and \(\theta\) denotes the model parameters.
## IV Proposed Method
We aim to show that extracting meta path-based structural features from the heterogeneous network is conducive to rumor detection when only the propagation network is available, without any initial knowledge of node-level features.
In GLAN, initial node features for the different node types were assigned from a normal distribution when node-level features of tweets and users (e.g., age, gender, number of likes) were unavailable. This is not the optimal way of feature extraction, because it fails to take advantage of the rich information that the heterogeneous nature of the news propagation network provides. Recently, heterogeneous graph representation learning models like MetaPath2Vec [22] have shown promising success in extracting features using the concept of meta paths in heterogeneous networks. With this idea in mind, we decided to add a key module called _Meta Path-based Feature Extraction_ for better global feature extraction. Adding this module helps detect rumors more accurately on several evaluation metrics than GLAN. Fig. 1 illustrates the whole architecture of MGLAN.
In the following, we describe our added component as well as GLAN modules in order to maintain integration throughout the paper.
### _Text Representation_
Just like GLAN, we use word-level embeddings for word representation. \(x_{j}\in\mathbb{R}^{d}\) is the \(d\)-dimensional embedding of the \(j\)-th word in a text. We assume each text has fixed length \(L\), represented as \(x_{1:L}=[x_{1};x_{2};...;x_{L}]\). Texts with more than \(L\) words are truncated from the end until their length reaches \(L\), and texts with fewer than \(L\) words are zero-padded at the beginning until their length becomes \(L\). Then, each text represented as \(x_{1:L}\) is fed into three parallel CNNs [25], each with a \(d/3\)-dimensional output, to obtain a semantic representation of the text. The receptive field sizes of the three CNNs differ, with values \(h\in\{3,4,5\}\). The \(d/3\)-dimensional outputs of the CNNs are concatenated to form the final \(d\)-dimensional representation. This process is performed separately on the source news text and on the replies to each source news item, as demonstrated in Fig. 1.
Fig. 2: Architecture of MetaPath2Vec used as feature extraction module. The schema of heterogeneous graph is from [5] and [22].
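A minimal PyTorch sketch of this multi-window CNN encoder follows; the layer names, the use of max-pooling over time, and the toy sizes are our assumptions, not taken from the released GLAN/MGLAN code:

```python
import torch
import torch.nn as nn

class TextEncoder(nn.Module):
    """Multi-window CNN text encoder sketching Section IV-A."""
    def __init__(self, vocab_size: int, d: int = 300, windows=(3, 4, 5)):
        super().__init__()
        assert d % len(windows) == 0
        self.embed = nn.Embedding(vocab_size, d, padding_idx=0)
        # three parallel branches with d/3 output channels each
        self.convs = nn.ModuleList(
            nn.Conv1d(d, d // len(windows), kernel_size=h, padding=h // 2)
            for h in windows)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        x = self.embed(tokens).transpose(1, 2)        # (batch, d, L)
        # max-pool each branch over time and concatenate back to d dims
        feats = [conv(x).max(dim=2).values for conv in self.convs]
        return torch.cat(feats, dim=1)                # (batch, d)

enc = TextEncoder(vocab_size=20000)
print(enc(torch.randint(1, 20000, (8, 50))).shape)    # torch.Size([8, 300])
```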
### _Local Relation Encoding_
The term local relation of news refers to the relations that each news item has with its surrounding neighbors, independently of the local relations of other news. In this section, we take the same approach as GLAN: we use the attention mechanism to capture rich semantic relations between the replies and the source news and combine them into a single vector that encodes important aspects of the source news and its replies. This procedure has two steps:
1. _Self Attention_: We use the _MultiHeadAttention_ module as self-attention, with the same input for the parameters \(Q\), \(K\), and \(V\) [10]. If a piece of news has R replies, each encoded in \(d\)-dimensional space, the output of the self-attention module is one \(d\)-dimensional embedding that aggregates the features of all the encoded replies, denoted \(\widetilde{R}\in\mathbb{R}^{d}\). \[\widetilde{R}=MultiHeadAttention(R,R,R)\] (1)
2. _Cross Attention_: We apply cross attention to infuse source representation with the unified representation of its replies. The input \(K\) of _MultiHeadAttention_ is source news representation, \(Q\) and \(V\) are \(\widetilde{R}\). This way, source representation can attend over its local neighbors to form a new \(d\)-dimensional local text representation denoted as \(\widetilde{m_{j}}\) for news
### _Meta Path-based Feature Extraction_
To capture efficient structural node representations while preserving the inherent discrimination among node types, we modeled the news propagation network as a heterogeneous graph with two node types: User and Tweet. Fig. 3 shows the network schema of news propagation on Twitter. At first, a tweet is published by a user, and other users participate in retweeting it from the publishing user or from other retweeters. To simplify the schema, we decided to treat the post and retweet relations as one relation called _spread_ that contains both. This means a user participates in _spreading_ tweets, and tweets _are spread by_ users in the network. Finally, we define the meta path schema for the rumor detection problem in Fig. 4. We feed the meta path relation schema alongside the network edges into the MetaPath2Vec architecture, as illustrated in Fig. 2. MetaPath2Vec is robust among heterogeneous representation learning methods in discerning structural and semantic correlations among different node types. In order to pay more attention to early diffuser users, the weights of edges that connect users and tweets were assigned as follows:
\[w(u_{i},m_{j})=\frac{1}{max(0,t)+1} \tag{2}\]
\(t\) is time elapsed(in minutes) after a tweet was published [5]. This weighting helps us to modify the random walk process of MetaPath2Vec. In the original paper, a node in the next random walk step was selected by the uniform distribution. In our work, a node is selected by weighted distribution according to the following:
\[p(v^{i+1}|v^{i},P)=\begin{cases}\frac{w(v^{i},v^{i+1})}{\sum_{u\in\mathcal{N}_{t+1}(v^{i})}w(v^{i},u)}&(v^{i+1},v^{i}_{t})\in E,\ \phi(v^{i+1})=t+1\\ 0&(v^{i+1},v^{i}_{t})\in E,\ \phi(v^{i+1})\neq t+1\\ 0&(v^{i+1},v^{i}_{t})\notin E\end{cases} \tag{3}\]
where \(\mathcal{N}_{t+1}(v^{i})\) denotes the neighbors of \(v^{i}\) whose type is the next one prescribed by the meta path \(P\). The output of MetaPath2Vec is a \(d\)-dimensional node-level representation of all nodes, \(\mathcal{X}\in\mathbb{R}^{n\times d}\).
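A minimal Python sketch of the resulting weighted, meta path-guided walk of Eqs. (2)-(3) follows; the adjacency/weight data layout is our assumption:

```python
import random

def edge_weight(t_minutes: float) -> float:
    """Eq. (2): earlier spreaders get larger edge weight."""
    return 1.0 / (max(0.0, t_minutes) + 1.0)

def weighted_metapath_walk(graph, weights, node_type, start, metapath, length):
    """One meta path-guided random walk where the next hop is drawn with
    probability proportional to its edge weight (Eq. (3)) rather than
    uniformly. graph[v] lists the neighbors of v, weights[(v, u)] stores
    w(v, u), and node_type[v] is 'user' or 'tweet' (layout assumed)."""
    walk, v = [start], start
    for step in range(length - 1):
        wanted = metapath[(step + 1) % len(metapath)]  # next type in schema
        cands = [u for u in graph[v] if node_type[u] == wanted]
        if not cands:
            break  # dead end: no neighbor of the required type
        w = [weights[(v, u)] for u in cands]
        v = random.choices(cands, weights=w, k=1)[0]
        walk.append(v)
    return walk

# Tiny toy graph: users u0, u1 spread tweet t0 after 1 and 30 minutes.
graph = {'u0': ['t0'], 'u1': ['t0'], 't0': ['u0', 'u1']}
node_type = {'u0': 'user', 'u1': 'user', 't0': 'tweet'}
weights = {('u0', 't0'): edge_weight(1), ('u1', 't0'): edge_weight(30),
           ('t0', 'u0'): edge_weight(1), ('t0', 'u1'): edge_weight(30)}
print(weighted_metapath_walk(graph, weights, node_type, 'u0',
                             metapath=['user', 'tweet'], length=5))
```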
### _Global Relation Encoding_
Just like GLAN, we use two GATs to capture additional meaningful representations of the different nodes. The output \(\mathcal{X}\) of MetaPath2Vec is the input of the first GAT. In the first one, _MultiHeadAttention_ with k=8 heads is enabled to stabilize the training process; in the second one, _MultiHeadAttention_ is disabled. The output of the global relation encoding module is \(m_{j}^{global}\).
### _Rumor Classification_
In the classification module, we concatenate \(\widetilde{m_{j}}\) and \(m_{j}^{global}\) and pass the result through a fully-connected linear layer. Then, by applying _softmax_ and choosing the maximum probability, we can classify each source news item:
\[p_{i}(c|m_{i},\mathcal{N}_{m_{i}},\mathcal{U};\theta)=\textbf{softmax}(\textbf {W}[\widetilde{m_{j}},m_{j}^{global}]+b) \tag{4}\]
\(\textbf{W}\in\mathbb{R}^{2d\times|c|}\) is weight parameter of linear layer and \(b\) is bias. Cross entropy loss is used to classify each piece of news:
\[J(c^{(i)}|D,\mathcal{U}_{i};\theta)=-\sum_{i}y_{i}\log p_{i}(c|m_{i},\mathcal{ N}_{m_{i}},\mathcal{U};\theta) \tag{5}\]
\(y_{i}\) is the probability of source news belonging to class \(i\).
## V Evaluation
In this section, we perform experiments on three state-of-the-art datasets for rumor detection and show that our proposed architecture outperforms similar state-of-the-art models.
Fig. 4: Meta Path schema in news propagation network.
Fig. 3: News propagation network schema.
### _Datasets_
We analyzed MGLAN on three well-known datasets: Twitter15 [26], Twitter16 [26], and Weibo [27], scraped from the Twitter and Weibo social networks, respectively. Twitter15 and Twitter16 consist of four classes: "NR" (non-rumor), "FR" (false-rumor), "UR" (unverified-rumor), and "TR" (true-rumor). The difference between "FR" and "TR" is that a true label is assigned to a source tweet if it expresses a denial type of stance; otherwise, it is labeled as false [26]. The Weibo dataset has only the binary labels "NR" and "FR". Table I provides a statistical perspective on the datasets.
### _Baseline Models_
In this section, we introduce previously proposed architectures for the task of rumor detection and compare their performance to MGLAN.
* **GLAN**: Our baseline. It encodes both local and global relations of the heterogeneous network.
* **HGATRD**[28]: It models the propagation network as a heterogeneous one with tweet, user, and word as node types. It decomposes the graph into tweet-word and tweet-user subgraphs and performs attention mechanisms for each subgraph.
* **SMAN**[29]: Based on the news that each user participates in spreading, it assigns a credibility score to each user and uses it as weakly supervised information. It then uses MultiHeadAttention to learn to classify each source news.
* **GCAN**[30]: It creates user communication graphs for all tweets, and GCNs compute their graph-level embeddings. In parallel, it uses CNNs to model the sequential retweet path of each tweet. Finally, it concatenates the respective outputs after applying attention mechanisms and passes them to the classifier.
* **PPC**[15]: It is based on modeling the propagation path of tweets as a multivariate time series, then builds a propagation path classifier using both CNNs and RNNs.
### _Parameter Settings_
MGLAN is an extension of GLAN implemented with PyTorch [31] and PyTorch Geometric [32]. For a fair comparison, we did not change any of GLAN's hyperparameters. In MetaPath2Vec, we set the walk length to 100, the context size to 7, the number of walks per node to 5, and the number of negative samples to 3. The output of MetaPath2Vec has 256 dimensions. The input and output dimensions of the first GAT are 256 and 300; for the second GAT, both input and output have 300 dimensions.
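For reference, these hyperparameters map directly onto PyTorch Geometric's MetaPath2Vec; in the sketch below the toy edge_index_dict and the relation names 'spreads'/'spread_by' are placeholders, and note that the stock PyG sampler draws neighbors uniformly, so the weighted walk of Eq. (3) would still require a custom sampler:

```python
import torch
from torch_geometric.nn import MetaPath2Vec

# Toy heterogeneous edges (a real edge_index_dict is built from the dataset).
edge_index_dict = {
    ('user', 'spreads', 'tweet'):   torch.tensor([[0, 1, 2], [0, 0, 1]]),
    ('tweet', 'spread_by', 'user'): torch.tensor([[0, 0, 1], [0, 1, 2]]),
}
metapath = [('user', 'spreads', 'tweet'), ('tweet', 'spread_by', 'user')]

model = MetaPath2Vec(edge_index_dict, embedding_dim=256, metapath=metapath,
                     walk_length=100, context_size=7, walks_per_node=5,
                     num_negative_samples=3, sparse=True)
loader = model.loader(batch_size=16, shuffle=True)
optimizer = torch.optim.SparseAdam(list(model.parameters()), lr=0.01)

for pos_rw, neg_rw in loader:           # one epoch of skip-gram updates
    optimizer.zero_grad()
    loss = model.loss(pos_rw, neg_rw)
    loss.backward()
    optimizer.step()

z_tweets = model('tweet')               # (num_tweets, 256) learned features
```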
### _Results and Analysis_
This section shows that MGLAN is robust enough to detect rumors better than GLAN and other models. Tables II, III, and IV show that MGLAN performs better in almost all classes compared to other models in terms of accuracy and F1 score, meaning that extracting heterogeneity-aware representations plays an essential role in rumor detection.
However, on Twitter15 in the UR class and Twitter16 in the FR class, MGLAN failed to outperform SMAN, which computes a credibility score for users from the existing dataset and uses it as weakly supervised information. MGLAN is not reliant on hand-crafted features and can therefore be applied in inductive settings where new accounts are created and participate in rumor dissemination. We also saw the smallest improvement in accuracy and F1 score on the Weibo dataset. This relates to an inherent characteristic of the dataset: source texts and their corresponding replies are distinguishable enough that meta path-based heterogeneous feature extraction performs only slightly better in this case.
In conclusion, incorporating MetaPath2Vec and capturing heterogeneous structural features of news propagation networks shows improvement over previous models. We discuss this from another perspective in Section V-F.
Fig. 5: Time limit vs Accuracy in Twitter15, Twitter16 and Weibo.
### _Early Rumor Detection_
To prove that MGLAN can detect rumors in their early phases, we restrict the time elapsed after a tweet is published and examine how MGLAN performs against other models, as shown in Figs. 5(a), 5(b), and 5(c). The horizontal axis shows the elapsed time since a tweet was published, and the vertical axis represents the accuracy of the models. It is apparent in all three figures that right after a tweet is published, MGLAN and GLAN have the same accuracy because the global news propagation network has not yet formed. After a short time, during which the structure of the propagation network forms, MGLAN outperforms GLAN and the other models.
### _Robustness of MGLAN_
To elaborate on the effectiveness of using MetaPath2Vec from a different perspective, we plot the t-SNE [33] projections of the heterogeneous embeddings in MGLAN and GLAN, respectively. As shown in Fig. 6, in MGLAN the projected MetaPath2Vec embeddings cluster the tweets of each class to a reasonable extent in all datasets: if two tweets belong to the same class, MetaPath2Vec assigns them similar \(d\)-dimensional embeddings. In contrast, in GLAN, where node features are drawn from a normal distribution whenever user or tweet metadata is not accessible, the projections of the classes are not distinguishable from one another, especially in Twitter15 and Twitter16.
Fig. 6: Representations of tweet nodes of the heterogeneous news network in MGLAN vs. GLAN. Utilizing MetaPath2Vec could capture meaningful structural features since nodes belonging to the same group almost form a cluster, whereas in GLAN, in cases where a single normal distribution assigned node features, clusters are not formed.
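For reference, a projection like the one in Fig. 6 can be produced with scikit-learn and Matplotlib; the following is a generic sketch, not the plotting code used for the paper:

```python
# Project d-dimensional tweet embeddings (e.g. from MetaPath2Vec) to 2-D with
# t-SNE and color the points by rumor class.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

def plot_tsne(embeddings, labels, title):
    """embeddings: (num_tweets, d) array; labels: class index per tweet."""
    labels = np.asarray(labels)
    xy = TSNE(n_components=2, perplexity=30, init='pca').fit_transform(embeddings)
    for c in np.unique(labels):
        mask = labels == c
        plt.scatter(xy[mask, 0], xy[mask, 1], s=5, label=f'class {c}')
    plt.title(title)
    plt.legend()
    plt.show()
```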
## VI Conclusions and Future Work
In this work, we proposed MGLAN, an improvement of the GLAN architecture that uses meta path-based features in heterogeneous news propagation networks. For the implementation, we used MetaPath2Vec to extract structural features from the heterogeneous graph and fed its output embeddings into GLAN's global relation encoding module. Experiments on three widely used datasets demonstrated the strength of MGLAN. For future work, we can extend the proposed idea in several directions:
* We can scrape a new dataset from Twitter while gathering metadata for different node types and thereby define new relations such as follower-followee and mentions. This would allow us to incorporate a variety of meta paths and determine which one plays the most critical role in extracting richer information from the heterogeneous network.
* MetaPath2Vec can focus on only one meta path at a time. Newer methods like MAGNN [34] can consider different meta paths simultaneously and choose the best one for each heterogeneous graph.
## Acknowledgment
Mostafa Salehi was supported by a grant from IPM, Iran (No. CS1401-4-162).
|
2307.06800 | Renormalization Group Evolution with Scalar Leptoquarks | Leptoquarks are theoretically well-motivated and have received increasing
attention in recent years as they can explain several hints for physics beyond
the Standard Model. In this article, we calculate the renormalisation group
evolution of models with scalar leptoquarks. We compute the anomalous
dimensions for all couplings (gauge, Yukawa, Higgs and leptoquarks
interactions) of the most general Lagrangian at the two-loop level and the
corresponding threshold corrections at one-loop. The most relevant analytic
results are presented in the Appendix, while the notebook containing the full
expressions can be downloaded at https://github.com/SumitBanikGit/SLQ-RG. In
our phenomenological analysis, we consider some exemplary cases with focus on
gauge and Yukawa coupling unification. | Sumit Banik, Andreas Crivellin | 2023-07-13T15:11:55Z | http://arxiv.org/abs/2307.06800v2 | # Renormalization Group Evolution with Scalar Leptoquarks
###### Abstract
Leptoquarks are theoretically well-motivated and have received increasing attention in recent years as they can explain several hints for physics beyond the Standard Model. In this article, we calculate the renormalisation group evolution of models with scalar leptoquarks. We compute the anomalous dimensions for all couplings (gauge, Yukawa, Higgs and leptoquarks interactions) of the most general Lagrangian at the two-loop level and the corresponding threshold corrections at one-loop. The most relevant analytic results are presented in the Appendix, while the notebook containing the full expressions can be downloaded at [https://github.com/SumitBanikGit/SLQ-RG](https://github.com/SumitBanikGit/SLQ-RG). In our phenomenological analysis, we consider some exemplary cases with focus on gauge and Yukawa coupling unification.
## 1 Introduction
The Standard Model (SM) of particle physics describes the known fundamental constituents of matter as well as their interactions. While the Higgs particle [1; 2; 3; 4], the last missing piece of the SM, was discovered in 2012 at the Large Hadron Collider (LHC) at CERN [5; 6; 7], it is clear that the SM cannot be the ultimate theory of Nature: it does not account for the astrophysical observation of Dark Matter nor for the non-vanishing neutrino masses required by neutrino oscillations.
A plethora of possible SM extensions have been proposed in the last decade. In this context, leptoquarks (LQs), i.e. hypothetical new particles which directly couple a quark to a lepton, are very interesting. They were first proposed within Grand Unified Theories (GUTs) [8; 9; 10; 11] and composite models [12; 13; 14; 15], and squarks can act as LQs within the R-parity violating MSSM (see e.g. Ref. [16] for a review). They were first classified in Ref. [17] into ten possible representations under the SM gauge group, of which five are scalars, and five are vector particles.
In recent years, there has been renewed interest in LQs as they give interesting effects in low energy observables in general [18; 19; 20; 21; 22; 23; 24], and could provide solutions to anomalies in semi-leptonic \(B\) decays [25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36; 37; 38; 39; 40; 41; 42; 43; 44; 45; 46; 47; 48; 49; 50; 51; 52; 53; 54; 55; 56; 57; 58; 59; 60; 61; 62; 63; 64; 65; 66; 67; 68; 69; 70; 71; 72; 73; 74; 75; 76; 77; 78; 79; 80; 81; 82; 83; 84; 85; 86; 87; 88; 89; 90; 91; 92; 93; 94; 95; 96; 97; 98; 99; 100; 101; 102; 103; 104; 105; 106; 107; 108; 109; 110; 111; 111; 112; 113; 114; 115; 116; 117; 118; 119; 120; 121; 122; 123] in particular. The effects of LQs at colliders [124; 125; 126; 127; 128; 129; 130; 131; 132; 133; 134; 135; 136; 137; 138; 139; 140; 141; 142; 143; 144; 145; 146; 147; 148], in oblique electroweak parameters as well as Higgs couplings to gauge bosons [149; 150; 151; 152; 153; 154; 155; 156], electric dipole moments [157; 158] and proton decay [159; 160] were studied. LQs
were considered as a portal to dark matter [161; 162; 163; 164; 165; 166; 167; 168] and as the origin of neutrino masses [169; 170; 171; 172; 58; 61; 173; 174; 175; 176; 177]. Furthermore, they have been searched for at the LHC [173; 174; 175; 176; 177; 178; 179; 180; 181; 182; 183; 184; 185; 186; 187; 188; 189; 190; 191; 192] resulting in bounds of the order of \(1-2\,\mathrm{TeV}\) (in the absence or suppression of couplings to first generation quarks [23; 140]).
As LQs are possible (light) remnants of a GUT, it is interesting to assess their impact on gauge and Yukawa coupling unification. In particular, scalar LQs (SLQs) could be light states of a GUT symmetry-breaking sector [50] and it might even be possible that they are the only new particles (in addition to the SM ones) up to the GUT scale. In this case, they must alter the renormalization group evolution (RGE) sufficiently to lead to the required coupling unification [193].
In this article, we will analyze the RGE in models with SLQs. For this, we will calculate the two-loop anomalous dimensions as well as the one-loop threshold corrections (at the LQ scale) and apply these results to the study of gauge and Yukawa coupling unification. The article is structured as follows: In Sec. 2, we present our setup and conventions. In Sec. 3, we derive the \(\beta\)-functions, give their analytic expressions for some simple cases and examine gauge and Yukawa coupling unification for some specific examples. In Sec. 4, we summarize and discuss the results of this paper. In Appendix A, we give the full SLQ Lagrangian, including all five possible SLQs. In Appendices B and C, we give the anomalous dimensions for the gauge (two-loop) and Yukawa couplings (one-loop), respectively. In Appendix D, we collect the one-loop threshold corrections of the SM parameters obtained when matching SLQ models onto the SM. A notebook with our full results, including Higgs and LQ (self-) interactions etc., is available at [194].
## 2 Setup and Conventions
The SM fields transform under the \(SU(3)_{c}\times SU(2)_{L}\times U(1)_{Y}\) gauge group as given in Table 1, where \(j=1,2,3\) is a flavour index. The left-handed fermions \(Q_{j}\) and \(L_{j}\) are doublets under \(SU(2)_{L}\) and decompose into their components as
\[Q_{j}=\begin{pmatrix}u_{j,L}\\ d_{j,L}\end{pmatrix},\hskip 28.452756ptL_{j}=\begin{pmatrix}\nu_{j,L}\\ \ell_{j,L}\end{pmatrix}\,, \tag{1}\]
\begin{table}
\begin{tabular}{|l|l|l|l|l|l|l|} \hline SM fields & \(Q_{j}\) & \(L_{j}\) & \(u_{j}\) & \(d_{j}\) & \(\ell_{j}\) & \(H\) \\ \hline \(U(1)_{Y},SU(2)_{L},SU(3)_{c}\) & \(+\frac{1}{6}\,,\mathbf{2}\,,\mathbf{3}\) & \(-\frac{1}{2}\,,\mathbf{2}\,,\mathbf{1}\) & \(+\frac{2}{3}\,,\mathbf{1}\,,\mathbf{3}\) & \(-\frac{1}{3}\,,\mathbf{1}\,,\mathbf{3}\) & \(-1\,,\mathbf{1}\,,\mathbf{1}\) & \(+\frac{1}{2}\,,\mathbf{2}\,,\mathbf{1}\) \\ \hline Leptoquark & \(\Phi_{1}\) & \(\Phi_{\bar{1}}\) & \(\Phi_{2}\) & \(\Phi_{\bar{2}}\) & \(\Phi_{3}\) & \\ \hline \(U(1)_{Y},SU(2)_{L},SU(3)_{c}\) & \(-\frac{1}{3}\,,\mathbf{1}\,,\mathbf{3}\) & \(-\frac{4}{3}\,,\mathbf{1}\,,\mathbf{3}\) & \(+\frac{7}{6}\,,\mathbf{2}\,,\mathbf{3}\) & \(+\frac{1}{6}\,,\mathbf{2}\,,\mathbf{3}\) & \(-\frac{1}{3}\,,\mathbf{3}\,,\mathbf{3}\) & \\ \hline \end{tabular}
\end{table}
Table 1: Representations of the SM and LQ fields under the \(SU(3)_{c}\times SU(2)_{L}\times U(1)_{Y}\) gauge group.
while the right-handed fields \(u_{j}\), \(d_{j}\) and \(\ell_{j}\) are \(SU(2)_{L}\) singlets which we write as
\[u_{j}=\left(u_{j,R}\right),\qquad\quad d_{j}=\left(d_{j,R}\right),\qquad\quad \ell_{j}=\left(\ell_{j,R}\right)\,. \tag{2}\]
The electric charge \(q\) is given by the Gell-Mann-Nishijima formula, \(q=Y+I_{3}\) where, \(I_{3}\) is the third-component of the weak isospin and \(Y\) is the hypercharge related to the gauge factor \(U(1)_{Y}\).
The SM Lagrangian is then written as
\[\begin{split}\mathcal{L}_{\text{SM}}=&-\frac{1}{4} B^{\mu\nu}B_{\mu\nu}-\frac{1}{4}W^{I,\mu\nu}W_{I,\mu\nu}-\frac{1}{4}G^{\alpha,\mu \nu}G_{\alpha,\mu\nu}\\ &+i\left(\bar{Q}_{f}\not{D}Q_{f}\right)+i\left(\bar{L}_{f}\not{D} L_{f}\right)+i\,\bar{u}_{f}\not{D}u_{f}+i\,\bar{d}_{f}\not{D}d_{f}+i\,\bar{\ell}_{f} \not{D}\ell_{f}\\ &+\left(\left(D^{\mu}H\right)^{\dagger}D_{\mu}H\right)+\mu_{H}^{ 2}\left(H^{\dagger}H\right)-\lambda_{H}\left(H^{\dagger}H\right)^{2}\\ &-\left(Y^{d}_{f_{i}f_{j}}\left(\bar{Q}_{f_{i}}H\right)d_{f_{j}}+ Y^{\ell}_{f_{i}f_{j}}\left(\bar{L}_{f_{i}}H\right)\ell_{f_{j}}+Y^{u}_{f_{i}f_{j}} \left(\bar{Q}_{f_{i}}i\sigma_{2}H^{\dagger}\right)u_{f_{j}}+\text{h.c.}\right) \\ &+\mathcal{L}_{\text{gauge}-\text{fixing}}\,,\end{split} \tag{3}\]
where \(Y^{d}\), \(Y^{\ell}\) and \(Y^{u}\) are the Yukawa couplings, and \(B_{\mu\nu}\), \(W^{I}_{\mu\nu}\) and \(G^{\alpha}_{\mu\nu}\) are the field strength tensors defined as
\[\begin{split}& B_{\mu\nu}=\partial_{\mu}B_{\nu}-\partial_{\nu}B_{\mu},\\ & W^{I}_{\mu\nu}=\partial_{\mu}W^{I}_{\nu}-\partial_{\nu}W^{I}_{\mu}+g_{2}f^{IJK}W^{J}_{\mu}W^{K}_{\nu},\qquad\quad I\in\{1,2,3\}\,,\\ & G^{\alpha}_{\mu\nu}=\partial_{\mu}G^{\alpha}_{\nu}-\partial_{\nu}G^{\alpha}_{\mu}+g_{3}f^{\alpha\beta\gamma}G^{\beta}_{\mu}G^{\gamma}_{\nu}\,,\qquad\qquad\alpha\in\{1,..,8\}\,,\end{split} \tag{4}\]
corresponding to the gauge groups \(U(1)_{Y}\), \(SU(2)_{L}\) and \(SU(3)_{c}\), respectively. The structure constants of \(SU(2)_{L}\) and \(SU(3)_{c}\) are denoted by \(f^{IJK}\) and \(f^{\alpha\beta\gamma}\). For better readability, \(SU(2)_{L}\) indices inside parenthesis \((\cdots)\) are contracted such that they form gauge singlets. The covariant derivative for any field \(\phi\) is defined as
\[D_{\mu}\phi=\partial_{\mu}\phi-i\tilde{g}_{1}YB_{\mu}\phi-ig_{2}\tau_{I}W^{I}_ {\mu}\phi-ig_{3}T_{\alpha}G^{\alpha}_{\mu}\phi\,, \tag{5}\]
where \(Y\) is the \(U(1)_{Y}\) hypercharge of \(\phi\). \(\tilde{g}_{1}\), \(g_{2}\) and \(g_{3}\) are the gauge couplings associated with \(U(1)_{Y}\), \(SU(2)_{L}\) and \(SU(3)_{c}\) gauge groups, respectively. \(\tau_{I}\) and \(T_{\alpha}\) are the generators of \(SU(2)_{L}\) and \(SU(3)_{c}\) depending on the representation of \(\phi\). In the fundamental representation of \(SU(2)_{L}\) and \(SU(3)_{c}\) we have \(\tau_{I}=\frac{\sigma_{I}}{2}\) and \(T_{\alpha}=\frac{\lambda_{\alpha}}{2}\), where \(\sigma_{I}\) are the Pauli matrices
\[\sigma_{1}=\begin{pmatrix}0&1\\ 1&0\end{pmatrix},\ \ \sigma_{2}=\begin{pmatrix}0&-i\\ i&0\end{pmatrix},\ \ \sigma_{3}=\begin{pmatrix}1&0\\ 0&-1\end{pmatrix}\,, \tag{6}\]
and \(\lambda_{\alpha}\) are the Gell-Mann matrices.
We now add to the SM the five scalar SLQs which transform under the SM gauge groups as given in Table 1. For later convenience, we decompose the SLQ Lagrangian as
\[\begin{split}\mathcal{L}_{\text{LQ}}=&\ \mathcal{L}_{1}+\mathcal{L}_{\tilde{1}}+\mathcal{L}_{2}+\mathcal{L}_{\tilde{2}}+\mathcal{L}_{3}+\mathcal{L}_{1\,\tilde{1}}+\mathcal{L}_{1\,2}+\mathcal{L}_{1\,\tilde{2}}+\mathcal{L}_{1\,3}\\ &+\mathcal{L}_{\tilde{1}\,2}+\mathcal{L}_{\tilde{1}\,\tilde{2}}+\mathcal{L}_{\tilde{1}\,3}+\mathcal{L}_{2\,\tilde{2}}+\mathcal{L}_{2\,3}+\mathcal{L}_{\tilde{2}\,3}\\ &+\mathcal{L}_{\tilde{1}\,2\,\tilde{2}}+\mathcal{L}_{1\,\tilde{1}\,2}+\mathcal{L}_{1\,2\,3}+\mathcal{L}_{1\,\tilde{2}\,3}+\mathcal{L}_{\tilde{1}\,2\,3}\\ &+\mathcal{L}_{\tilde{1}\,2\,\tilde{2}\,3}+\mathcal{L}_{1\,\tilde{1}\,2\,\tilde{2}}\,,\end{split} \tag{7}\]
where the terms \(\mathcal{L}_{ij...}\) contain only the interactions involving the LQs \(\{\Phi_{i},\Phi_{j},...\}\). The explicit expressions, as already presented in Ref. [195], for each Lagrangian term are given in Appendix A. Note that for our purpose, it is imperative to have the complete Lagrangian, e.g. terms involving LQ-Higgs interactions, such that the full set of counterterms needed to cancel all generated divergences is available.
## 3 Renormalization Group Evolution and Phenomenological Analysis
We calculate the complete two-loop \(\beta\)-functions and the one-loop threshold corrections of all couplings of our LQ Lagrangian in the \(\overline{\text{MS}}\)-scheme. For the anomalous dimensions, we used the Python package PyR@TE[196].1 The PyR@TE model files and the full analytic expressions for the \(\beta\)-functions, as well as the threshold corrections, are publicly available on GitHub [194]. The one-loop threshold corrections are presented in Appendix D; they were derived using Matchete[198] and cross-checked for some selected cases with matchmakereft[199]. We also checked that including the threshold corrections reduces, as expected, the matching-scale dependence. The QCD contribution to some LQ Yukawa couplings was cross-checked with Ref. [200].
Footnote 1: We cross-checked our results using the _Mathematica_ package RGBeta[197].
We use the following convention throughout this paper for the \(\beta\)-function for any parameter \(A\)
\[\beta\left(A\right)\equiv\mu\frac{dA}{d\mu}\equiv\frac{1}{\left(4\pi\right)^ {2}}\beta^{(1)}(A)+\frac{1}{\left(4\pi\right)^{4}}\beta^{(2)}(A)+...\,. \tag{8}\]
where \(\mu\) is the renormalization scale and \(\beta^{(n)}(A)\) denotes the contribution at \(n\)-loop order and the dots are the higher-order terms. The \(\beta\)-functions form a system of coupled differential equations which we solve numerically, as explained below.
At the one-loop level, we have the following coefficients for the gauge couplings
\[\beta^{(1)}(g_{1}) =\frac{41}{10}g_{1}^{3}+\left(\frac{n_{1}}{15}+\frac{16n_{\tilde {1}}}{15}+\frac{49n_{2}}{30}+\frac{n_{\tilde{2}}}{30}+\frac{n_{3}}{5}\right)g _{1}^{3}\,, \tag{9}\] \[\beta^{(1)}(g_{2}) =-\frac{19}{6}g_{2}^{3}+\left(\frac{n_{2}}{2}+\frac{n_{\tilde{2} }}{2}+2n_{3}\right)g_{2}^{3}\,,\] (10) \[\beta^{(1)}(g_{3}) =-7g_{3}^{3}+\left(\frac{n_{1}}{6}+\frac{n_{\tilde{1}}}{6}+\frac {n_{2}}{3}+\frac{n_{\tilde{2}}}{3}+\frac{n_{3}}{2}\right)g_{3}^{3}\,, \tag{11}\]
where \(n_{1}\), \(n_{\tilde{1}}\), \(n_{2}\), \(n_{\tilde{2}}\) and \(n_{3}\) denote the number of copies, i.e. generations, of \(\Phi_{1}\), \(\Phi_{\tilde{1}}\), \(\Phi_{2}\), \(\Phi_{\tilde{2}}\) and \(\Phi_{3}\), respectively. The first term is the well-known SM term, and we have used the \(SU(5)\) GUT normalization \(g_{1}=\sqrt{\frac{5}{3}}\,\tilde{g}_{1}\) w.r.t. the values given in Table 1. The one-loop expressions for the \(\beta\)-functions of the Yukawa couplings and the corresponding threshold corrections are provided in Appendices C and D, respectively. The two-loop \(\beta\)-function coefficients for the gauge couplings are given in Appendix B.
With these results at hand, we can next study the phenomenology of some selected cases. Here, we focus primarily on the evolution of the gauge and Higgs Yukawa couplings from the electroweak scale to the GUT scale.2 As the starting point for the evolution, we take the following initial values of the SM parameters at the top scale, above the top threshold (i.e. with 6 active flavours) in the \(\overline{\rm MS}\)-scheme [201; 202; 203; 204; 205; 206; 207; 208; 209; 210; 211; 212; 213; 214; 215; 216; 217; 218; 219; 220; 221; 222; 223; 224; 225; 226; 227; 228; 229; 230; 231; 232; 233; 234; 235]
Footnote 2: Note that without a full specification of the GUT theory and its breaking sector, we cannot calculate the one-loop threshold corrections at the GUT scale which would be necessary to obtain a scheme- and scale-independent result. However, as no large logarithms are involved here, and the gauge coupling is already significantly smaller than the strong coupling at the EW scale, we treat these effects as uncertainties of the calculation.
\[g_{1}=0.3585\times\sqrt{\frac{5}{3}},\;\;\;g_{2}=0.648,\;\;\;g_{3} =1.16,\] \[y_{t}=0.935,\;\;\;y_{b}=0.015,\;\;\;y_{\tau}=0.01,\;\;\;\lambda= 0.126\,. \tag{12}\]
We evolve the couplings using the \(\beta\)-functions of the SM up to the LQ scale \(m_{x}\) (with \(x=1,\tilde{1},2,\tilde{2},3\)). Note that here we neglected the tiny Yukawa couplings of the first two generations.

Figure 1: Renormalization group evolution of the gauge couplings \(\left(\alpha_{i}=g_{i}^{2}/4\pi\right)\) in the SM and its extensions with the different LQ representations. In order to better visualize the effect of adding LQs, we considered three generations of each LQ with a mass of 3 TeV. For simplicity, we assumed all Yukawa and Higgs couplings involving LQs to be zero (at the LQ scale).
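To illustrate the numerical procedure, the following is a minimal sketch (not the notebook from [194]) that integrates the one-loop gauge \(\beta\)-functions of Eqs. (9)-(11) with SciPy, starting from the inputs of Eq. (12); the top-scale value of \(173\,\mathrm{GeV}\) is our assumption.

```python
# One-loop running of the GUT-normalized gauge couplings, Eqs. (9)-(11).
import numpy as np
from scipy.integrate import solve_ivp

def b_coeffs(n1=0, n1t=0, n2=0, n2t=0, n3=0):
    """Coefficients b_i with beta^(1)(g_i) = b_i * g_i^3."""
    b1 = 41/10 + n1/15 + 16*n1t/15 + 49*n2/30 + n2t/30 + n3/5
    b2 = -19/6 + n2/2 + n2t/2 + 2*n3
    b3 = -7 + n1/6 + n1t/6 + n2/3 + n2t/3 + n3/2
    return np.array([b1, b2, b3])

def run(g0, mu0, mu1, b):
    """Evolve (g1, g2, g3) from scale mu0 to mu1 (in GeV), with t = log(mu)."""
    rhs = lambda t, g: b * g**3 / (16 * np.pi**2)
    return solve_ivp(rhs, (np.log(mu0), np.log(mu1)), g0, rtol=1e-8).y[:, -1]

# SM running from the top scale to a 3 TeV LQ scale, then with three
# generations of Phi_3 active (cf. Fig. 1) up to 10^16 GeV:
g = np.array([0.3585 * np.sqrt(5/3), 0.648, 1.16])
g = run(g, 173.0, 3e3, b_coeffs())       # SM only
g = run(g, 3e3, 1e16, b_coeffs(n3=3))    # SM + 3 x Phi_3
print("1/alpha_i at 10^16 GeV:", 4 * np.pi / g**2)
```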
After evolving the couplings to the LQ scale, we include the one-loop threshold corrections determined by comparing the theory with SLQs to the one without them (i.e. the SM at the LQ scale). This gives a shift to the SM fermion Yukawa couplings depending mainly on the initial values (at the LQ scale) of the LQ Yukawa couplings.3 We then run all couplings, the SM ones as well as the ones including LQs, using the \(\beta\)-functions of the full model from the LQ scale to the high scale, for which we take as an upper limit the Planck scale (\(\approx 10^{19}\,\mathrm{GeV}\)) where gravitational effects would become important.
Footnote 3: Note that for our SLQ models, the gauge couplings do not receive threshold corrections at one-loop in the \(\overline{\text{MS}}\)-scheme.
As a first step, we illustrate the running of the gauge coupling as a function of the scale \(Q\) in Fig. 1 for the five different LQ representations for a fixed LQ scale of \(3\,\text{TeV}\), which is compatible with direct LHC searches [173; 174; 175; 176; 177; 178; 179; 180; 181; 182; 183; 184; 185; 186; 187; 188; 189; 190; 191; 192]. In order to highlight and enhance the impact of adding the LQs, we considered three generations of the same LQ representation. As the LQ couplings have a minimal effect on the running of gauge couplings, we take their initial value to be zero at the LQ scale.
We also observe that no single LQ representation at the TeV scale can lead to gauge coupling unification. Nonetheless, if we allow for higher LQ masses, it turns out that a single generation of \(\Phi_{3}\) can lead to gauge coupling unification at \(\approx 10^{14}\,\text{GeV}\) for an LQ mass of around \(10^{6}\,\text{TeV}\), as shown in Fig. 2 (left). However, such a low unification scale would conflict with the limits from proton decay [236], at least for a standard \(SU(5)\) GUT. In Fig. 2 (left), we also show the impact of the LQ Yukawa couplings to SM fermions (which are free parameters). One can see that their effect on the gauge coupling evolution is suppressed, since they only enter at the two-loop level. In fact, we scanned over the initial values of all three diagonal elements of \(Y_{3}^{LL}\) between \(-0.6\) and \(0.6\) (while keeping the initial values of all other couplings at zero), leading only to a slight thickening of the curves. However, larger values of the LQ Yukawa couplings induce non-perturbative values for some couplings below the GUT scale, setting an upper limit on the Yukawa couplings if perturbativity is required.

Figure 2: Left: Two-loop renormalization group evolution of the gauge couplings for SM+\(\Phi_{3}\). We observe gauge unification at around \(10^{14}\,\text{GeV}\) when we set the LQ scale to \(m_{3}\approx 10^{6}\,\text{TeV}\). We vary the initial value of the three diagonal components of \(Y_{3}^{LL}\) between \(-0.6\) and \(0.6\) (keeping the initial value of all other couplings at zero) to show that their effect on the running is minimal. Right: Two-loop renormalization group running of the gauge couplings for SM+\(\Phi_{2}+\Phi_{3}\). We observe gauge unification for \(m_{2}\)= \(3\,\text{TeV}\) and \(m_{3}\)= \(4\times 10^{3}\,\text{TeV}\).
We now illustrate the RGE of the bottom and tau Yukawa couplings in Fig. 3. While in the SM bottom-tau Yukawa coupling unification already happens around \(10^{6}\,\mathrm{GeV}\), all LQ representations have only a small impact on the running of the bottom Yukawa coupling. The same is true for the tau Yukawa coupling, except for \(\Phi_{1}\) and \(\Phi_{2}\): for these, the one-loop threshold corrections to the tau Yukawa coupling can be sizable and its running changes strongly (w.r.t. the SM).4 The running of the tau Yukawa coupling in SM+\(\Phi_{1}\) and SM+\(\Phi_{2}\) is shown in Fig. 3. The tau threshold correction strongly depends on our choice of the initial value of the \((3,3)\) components of \(Y_{1}^{RR}\), \(Y_{1}^{LL}\), \(Y_{2}^{RL}\) and \(Y_{2}^{LR}\), which is set to \(0.1\) for the dotted lines; for the solid lines, the sign of \(Y_{1}^{LL}\) and \(Y_{2}^{LR}\) is reversed. The initial value of the remaining LQ couplings was set to zero. In fact, as can be seen in Fig. 3, the trajectories all lie within the quite narrow blue band except for \(\Phi_{1}\) and \(\Phi_{2}\). Here a very strong impact on the evolution of the tau Yukawa coupling is possible due to the chiral enhancement.

Figure 3: Two-loop renormalization group running of bottom and tau Yukawa couplings in SM+\(\Phi_{1}\) and SM+\(\Phi_{2}\) for a single-generation SLQ of mass \(3\,\mathrm{TeV}\). The dotted line denotes the tau-Yukawa running when we set the initial value of the \((3,3)\) components of \(Y_{1}^{RR}\), \(Y_{1}^{LL}\), \(Y_{2}^{RL}\) and \(Y_{2}^{LR}\) to \(0.1\). For solid lines, we reverse the sign of the \((3,3)\) components of \(Y_{1}^{LL}\) and \(Y_{2}^{LR}\). The blue bands show the variation of the running in SM+\(\Phi_{i}\), for \(i=1,\tilde{1},2,\tilde{2},3\) for the bottom Yukawa and \(i=\tilde{1},\tilde{2},3\) for the tau Yukawa, while varying the \((3,3)\) components of the LQ Yukawa couplings.
Footnote 1: The \(\Phi_{2}\) and \(\Phi_{3}\) couplings are given in Eq. (18).
We next consider the case, motivated by \(SU(5)\) GUTs [50], where the SM is extended by \(\Phi_{2}\) and \(\Phi_{3}\). Here, we find that for \(m_{2}\approx 3\,\mathrm{TeV}\) and \(m_{3}\approx 10^{3.5}\,\mathrm{TeV}\), we can achieve gauge coupling unification at \(\approx 10^{13}\,\mathrm{GeV}\), as shown in Fig. 2 (right), as well as Yukawa coupling unification around the same energy scale. However, this scale is again naively in conflict with proton decay. Finally, we consider the case where we extend the SM by all five possible SLQ representations (see Table 1), with a single generation each. In this case, we obtain gauge coupling unification at around \(10^{13}\,\mathrm{GeV}\) for a common LQ scale of \(125\,\mathrm{TeV}\), as shown in Fig. 4 (left). We also show in Fig. 4 (right) that bottom-tau Yukawa coupling unification is possible at nearly the same energy scale for the following initial values of the \((3,3)\) components of the LQ Yukawa couplings at the LQ scale
\[Y_{1}^{RR}{}_{[3,3]}=0.1\,,\,Y_{1}^{LL}{}_{[3,3]}=-0.1\,,\,Y_{2}^{RL}{}_{[3,3]}=-0.04\,,\,Y_{2}^{LR}{}_{[3,3]}=0.1\,,\,Y_{3}^{LL}{}_{[3,3]}=0.45\,, \tag{19}\]
while taking the initial value of other LQ couplings to be zero.
Figure 4: Two-loop renormalization group running of gauge couplings (left), and bottom and tau Yukawa couplings (right) for SM+\(\Phi_{1}+\Phi_{\tilde{1}}+\Phi_{2}+\Phi_{\tilde{2}}+\Phi_{3}\). We observe gauge coupling unification and bottom-tau unification at around the same energy scale \(\approx 10^{13}\,\mathrm{GeV}\) when we set the mass of all LQs to \(\approx 125\,\mathrm{TeV}\) and take the non-zero initial values of LQ couplings given in Eq. (19).
## 4 Conclusions
LQs are well-motivated extensions of the SM. They arise in composite or extra-dimensional setups and, most importantly, are predicted by GUTs. Furthermore, they have been under intensified investigation in recent years as they are potential candidates for explaining several tensions between SM predictions and experimental measurements.
In this article, we computed the two-loop renormalization group evolution, as well as the one-loop threshold corrections, for all parameters of generic SLQ models. This includes the gauge couplings, the SM Yukawa couplings, the LQ Yukawa couplings (to quarks and leptons), and the Higgs and LQ (self-)interactions. The appendices collect the full SLQ Lagrangian, the two-loop \(\beta\)-functions of the gauge couplings, the one-loop threshold corrections and the one-loop \(\beta\)-functions for the SM Yukawa couplings. The full analytic expressions, together with the necessary model files, can be obtained from [194].
In our phenomenological analysis, we considered the case in which one or more SLQs are light remnants of a symmetry-breaking sector of a GUT. In this setup, we focused on the renormalisation group evolution of the gauge and the Yukawa couplings, examining whether unification can be achieved. Several simple scenarios were studied: 1) If one adds one of the 5 possible LQ representations to the SM, only \(\Phi_{3}\) (with a mass around \(10^{6}\,\mathrm{TeV}\)) can lead to gauge coupling unification at \(\approx 10^{14}\,\mathrm{GeV}\). 2) Extending the SM by all 5 possible LQ representations with a common mass scale \(\approx 125\,\mathrm{TeV}\), unification at \(\approx 10^{13}\,\mathrm{GeV}\) is achieved. 3) Adding the GUT-motivated LQs \(\Phi_{2}\) and \(\Phi_{3}\) to the SM, with masses of \(m_{2}\)= \(3\,\mathrm{TeV}\) and \(m_{3}\)= \(4\times 10^{3}\,\mathrm{TeV}\), respectively, gauge coupling unification occurs at \(\approx 10^{13}\,\mathrm{GeV}\). 4) Concerning bottom-tau Yukawa unification, which happens at a far too low scale in the SM, only the LQs \(\Phi_{1}\) and \(\Phi_{2}\) can lead to a sizable modification, due to the possible chiral enhancement. Therefore, by choosing the initial values of the LQ Yukawa couplings appropriately, one can achieve bottom-tau unification at any desired (high) scale.
## Acknowledgements
We would like to thank Lohan Sartore for help in using PyR@TE and Anders E. Thomsen for help in using RGBeta and Matchete. We thank Michael Spira for useful comments on the draft. The work of A.C. is supported by a professorship grant from the Swiss National Science Foundation (No. PP00P21_76884).
## Appendix A Lagrangian terms involving SLQs
This appendix collects the terms of the SLQ Lagrangian in Eq. (7), following the conventions of [195]. We begin with the terms \(\mathcal{L}_{1}\), \(\mathcal{L}_{\bar{1}}\), \(\mathcal{L}_{2}\), \(\mathcal{L}_{\bar{2}}\) and \(\mathcal{L}_{3}\), which involve only a single LQ and include kinetic terms, mass terms, Yukawa interactions with the SM fermions, quartic interactions with the Higgs field and quartic self-interactions.
\[\begin{split}\mathcal{L}_{1}=&\left(D_{\mu}\Phi_{1}\right)^{\dagger}D^{\mu}\Phi_{1}-m_{1}^{2}\,\Phi_{1}^{\dagger}\Phi_{1}-Y_{1}\left(H^{\dagger}H\right)\Phi_{1}^{\dagger}\Phi_{1}\\ &+\left[Y_{1,ij}^{RR}\,\bar{u}_{i}^{c}\ell_{j}\,\Phi_{1}^{\dagger}+Y_{1,ij}^{LL}\left(\bar{Q}_{i}^{c}\,i\sigma_{2}L_{j}\right)\Phi_{1}^{\dagger}+Y_{1,ij}^{Q,RR}\,\bar{u}_{i}^{c}d_{j}\,\Phi_{1}+Y_{1,ij}^{Q,LL}\left(\bar{Q}_{i}^{c}\,i\sigma_{2}Q_{j}\right)\Phi_{1}+\text{h.c.}\right]\end{split}\]
\[{\cal L}_{1\,2}=Y_{12}^{(1)}\left(\Phi_{1,c_{1}}^{\dagger}\Phi_{1,c_{1}}\right) \left(\Phi_{2,c_{2}}^{\dagger}\Phi_{2,c_{2}}\right)+Y_{12}^{\prime(1)}\left(\Phi_ {1,c_{1}}^{\dagger}\Phi_{1,c_{2}}\right)\left(\Phi_{2,c_{2}}^{\dagger}\Phi_{2,c _{1}}\right) \tag{11}\]
\[{\cal L}_{1\,\tilde{2}}= Y_{1\tilde{2}}^{(1)}\left(\Phi_{1,c_{1}}^{\dagger}\Phi_{1,c_{1}} \right)\left(\Phi_{\tilde{2},c_{2}}^{\dagger}\Phi_{\tilde{2},c_{2}}\right)+Y_{ 1\tilde{2}}^{\prime(1)}\left(\Phi_{1,c_{1}}^{\dagger}\Phi_{1,c_{2}}\right) \left(\Phi_{\tilde{2},c_{2}}^{\dagger}\Phi_{\tilde{2},c_{1}}\right)\] \[+\left[A_{1\tilde{2}\tilde{2}}\ \Phi_{1,c_{1}}\left(\Phi_{\tilde{2},c_{2}}^{ \intercal}i\sigma_{2}\Phi_{\tilde{2},c_{3}}\right)\epsilon^{c_{1}c_{2}c_{3}}- A_{1\tilde{2}}\,\Phi_{1}\left(\Phi_{\tilde{2}}^{\dagger}H\right)+{\rm h.c.}\right] \tag{12}\]
\[{\cal L}_{1\,3}= Y_{13}^{(1)}\left(\Phi_{1,c_{1}}^{\dagger}\Phi_{1,c_{1}}\right) \left(\Phi_{3,c_{2}}^{\dagger}\Phi_{3,c_{2}}\right)+Y_{13}^{\prime(1)}\left( \Phi_{1,c_{1}}^{\dagger}\Phi_{1,c_{2}}\right)\left(\Phi_{3,c_{2}}^{\dagger} \Phi_{3,c_{1}}\right)+\left[Y_{13}\big{(}H^{\dagger}\left(\sigma\cdot\Phi_{3} \right)H\big{)}\Phi_{1}^{\dagger}\right.\] \[+\frac{1}{2}Y_{1313}\left(\Phi_{1,c_{1}}^{\dagger}\Phi_{3,c_{1}}^ {I}\Phi_{1,c_{2}}^{\dagger}\Phi_{3,c_{2}}^{I}\right)+Y_{1333}\left(\Phi_{1,c_ {1}}^{\dagger}\Phi_{3,c_{1}}^{I}\Phi_{3,c_{2}}^{J\dagger}\Phi_{3,c_{2}}^{K}i \epsilon^{IJK}\right)+{\rm h.c.}\right] \tag{13}\]
\[{\cal L}_{\tilde{1}\,2}=Y_{\tilde{1}2}^{(1)}\left(\Phi_{\tilde{1},c_{1}}^{ \dagger}\Phi_{\tilde{1},c_{1}}\right)\left(\Phi_{2,c_{2}}^{\dagger}\Phi_{2,c_ {2}}\right)+Y_{\tilde{1}2}^{\prime(1)}\left(\Phi_{\tilde{1},c_{1}}^{\dagger} \Phi_{\tilde{1},c_{2}}\right)\left(\Phi_{2,c_{2}}^{\dagger}\Phi_{2,c_{1}}\right) \tag{14}\]
\[{\cal L}_{\tilde{1}\,3}= Y_{\tilde{1}3}^{(1)}\left(\Phi_{\tilde{1},c_{1}}^{\dagger}\Phi_{ \tilde{1},c_{1}}\right)\left(\Phi_{3,c_{2}}^{\dagger}\Phi_{3,c_{2}}\right)+Y_{ \tilde{1}3}^{\prime(1)}\left(\Phi_{\tilde{1},c_{1}}^{\dagger}\Phi_{\tilde{1}, c_{2}}\right)\left(\Phi_{3,c_{2}}^{\dagger}\Phi_{3,c_{1}}\right)\] \[+\left[Y_{\tilde{1}3}\big{(}H^{\intercal}i\sigma_{2}\left(\sigma \cdot\Phi_{3}\right)^{\dagger}H\big{)}\Phi_{\tilde{1}}+{\rm h.c.}\right] \tag{15}\]
\[{\cal L}_{2\,\tilde{2}}= Y_{2\tilde{2}}^{(1)}\left(\Phi_{2,c_{1}}^{\dagger}\Phi_{2,c_{1}} \right)\left(\Phi_{\tilde{2},c_{2}}^{\dagger}\Phi_{\tilde{2},c_{2}}\right)+Y_{2 \tilde{2}}^{\prime(1)}\left(\Phi_{2,c_{1}}^{\dagger}\Phi_{2,c_{2}}\right)\left( \Phi_{\tilde{2},c_{2}}^{\dagger}\Phi_{\tilde{2},c_{1}}\right)\] \[+Y_{2\tilde{2}}^{(3)}\left(\Phi_{2,c_{1}}^{\dagger}\Phi_{\tilde{2},c_{1}}\right)\left(\Phi_{2,c_{2}}^{\dagger}\Phi_{2,c_{2}}\right)+Y_{2\tilde{2 }}^{\prime(3)}\left(\Phi_{2,c_{1}}^{\dagger}\Phi_{\tilde{2},c_{2}}\right)\left( \Phi_{\tilde{2},c_{2}}^{\dagger}\Phi_{2,c_{1}}\right)\] \[+\left[Y_{2\tilde{2}}\left(\Phi_{2}^{\dagger}H\right)\left(H^{ \intercal}i\sigma_{2}\Phi_{\tilde{2}}\right)+{\rm h.c.}\right] \tag{16}\]
\[{\cal L}_{2\,3}= Y_{23}^{(1)}\left(\Phi_{2,c_{1}}^{\dagger}\Phi_{2,c_{1}}\right) \left(\Phi_{3,c_{2}}^{\dagger}\Phi_{3,c_{2}}\right)+Y_{23}^{\prime(1)}\left( \Phi_{2,c_{1}}^{\dagger}\Phi_{2,c_{2}}\right)\left(\Phi_{3,c_{2}}^{\dagger} \Phi_{3,c_{1}}\right)\] \[+Y_{23}^{(3)}\left(\Phi_{2,c_{1}}^{\dagger}\sigma^{I}\Phi_{2,c_{1 }}\right)\left(\Phi_{3,c_{2}}^{J\dagger}i\epsilon^{IJK}\Phi_{3,c_{2}}^{K}\right)+ Y_{23}^{\prime(3)}\left(\Phi_{2,c_{1}}^{\dagger}\sigma^{I}\Phi_{2,c_{2}}\right) \left(\Phi_{3,c_{2}}^{J\dagger}i\epsilon^{IJK}\Phi_{3,c_{1}}^{K}\right)\] \[+\left[Y_{233}\ \left(H^{\dagger}\sigma^{I}\Phi_{2,c_{1}}\right)\left(\Phi_{3,c_{2 }}^{J}i\epsilon^{IJK}\Phi_{3,c_{3}}^{K}\right)\epsilon^{c_{1}c_{2}c_{3}}+{\rm h.c.}\right] \tag{17}\]
\[\begin{split}\mathcal{L}_{\tilde{2}\,3}=&\ Y_{\tilde{2}3}^{(1)}\left(\Phi_{\tilde{2},c_{1}}^{\dagger}\Phi_{\tilde{2},c_{1}}\right)\left(\Phi_{3,c_{2}}^{\dagger}\Phi_{3,c_{2}}\right)+Y_{\tilde{2}3}^{\prime(1)}\left(\Phi_{\tilde{2},c_{1}}^{\dagger}\Phi_{\tilde{2},c_{2}}\right)\left(\Phi_{3,c_{2}}^{\dagger}\Phi_{3,c_{1}}\right)\\ &+Y_{\tilde{2}3}^{(3)}\left(\Phi_{\tilde{2},c_{1}}^{\dagger}\sigma^{I}\Phi_{\tilde{2},c_{1}}\right)\left(\Phi_{3,c_{2}}^{J\dagger}i\epsilon^{IJK}\Phi_{3,c_{2}}^{K}\right)+Y_{\tilde{2}3}^{\prime(3)}\left(\Phi_{\tilde{2},c_{1}}^{\dagger}\sigma^{I}\Phi_{\tilde{2},c_{2}}\right)\left(\Phi_{3,c_{2}}^{J\dagger}i\epsilon^{IJK}\Phi_{3,c_{1}}^{K}\right)\\ &+\left[Y_{\tilde{2}33}\left(\Phi_{\tilde{2},c_{1}}^{\dagger}i\sigma_{2}\sigma^{I}H\right)\left(\Phi_{3,c_{2}}^{J}i\epsilon^{IJK}\Phi_{3,c_{3}}^{K}\right)\epsilon^{c_{1}c_{2}c_{3}}+A_{\tilde{2}3}\left(\Phi_{\tilde{2}}^{\dagger}\left(\sigma\cdot\Phi_{3}\right)H\right)+\text{h.c.}\right]\end{split} \tag{18}\]
The next set of terms \(\mathcal{L}_{\bar{1}\,2\,\bar{2}}\), \(\mathcal{L}_{1\,\bar{1}\,2}\), \(\mathcal{L}_{1\,2\,3}\), \(\mathcal{L}_{1\,\bar{2}\,3}\) and \(\mathcal{L}_{\bar{1}\,2\,3}\) contain interactions involving precisely three LQs.
\[\mathcal{L}_{\bar{1}\,2\,\bar{2}}=A_{\bar{1}2\bar{2}}\ \Phi_{\bar{1},c_{1}}\left(\Phi_{2,c_{2}}^{ \intercal}i\sigma_{2}\Phi_{\bar{2},c_{3}}\right)\epsilon^{c_{1}c_{2}c_{3}}+ \text{h.c.} \tag{116}\]
\[\mathcal{L}_{1\,\bar{1}\,2}=Y_{1\bar{1}2}\ \Phi_{1,c_{1}}\Phi_{1,c_{2}} \left(\Phi_{2,c_{3}}^{\intercal}i\sigma_{2}H\right)\epsilon^{c_{1}c_{2}c_{3}}+ \text{h.c.} \tag{117}\]
\[\mathcal{L}_{1\,2\,3}= Y_{123}\ \Phi_{1,c_{1}}\left(H^{\dagger}\left(\sigma\cdot\Phi_{3,c_{3}} \right)\Phi_{2,c_{2}}\right)\epsilon^{c_{1}c_{2}c_{3}}+Y_{1223}\Phi_{1,c_{1}}^{ \dagger}\left(\Phi_{2,c_{2}}^{\dagger}\big{(}\sigma\cdot\Phi_{3,c_{2}}\big{)} \Phi_{2,c_{1}}\right)\] \[+Y_{1223}^{\prime}\Phi_{1,c_{1}}^{\dagger}\left(\Phi_{2,c_{2}}^{ \dagger}\big{(}\sigma\cdot\Phi_{3,c_{1}}\big{)}\Phi_{2,c_{2}}\right)+\text{h.c.} \tag{118}\]
\[\mathcal{L}_{1\,\bar{2}\,3}= Y_{1\bar{2}3}\ \Phi_{1,c_{1}}\left(\Phi_{\bar{2},c_{2}}^{\intercal}i\sigma_{2}\left(\sigma\cdot\Phi_{3,c_{3}}\right)H\right)\epsilon^{c_{1}c_{2}c_{3}}+Y_{1\bar{2}\bar{2}3}\Phi_{1,c_{1}}^{\dagger}\left(\Phi_{\bar{2},c_{2}}^{\dagger}\big{(}\sigma\cdot\Phi_{3,c_{2}}\big{)}\Phi_{\bar{2},c_{1}}\right)\] \[+Y_{1\bar{2}\bar{2}3}^{\prime}\Phi_{1,c_{1}}^{\dagger}\left(\Phi_{\bar{2},c_{2}}^{\dagger}\big{(}\sigma\cdot\Phi_{3,c_{1}}\big{)}\Phi_{\bar{2},c_{2}}\right)+\text{h.c.} \tag{119}\]
\[\mathcal{L}_{\bar{1}\,2\,3}=Y_{\bar{1}23}\ \Phi_{\bar{1},c_{1}}\left(\Phi_{2,c_{2}}^{ \intercal}i\sigma_{2}\left(\sigma\cdot\Phi_{3,c_{3}}\right)H\right)\epsilon^{c _{1}c_{2}c_{3}}+\text{h.c.} \tag{120}\]
Finally, the interaction terms \(\mathcal{L}_{\bar{1}\,2\,\bar{2}\,3}\) and \(\mathcal{L}_{1\,\bar{1}\,2\,\bar{2}}\) which involve four different SLQs are as follows,
\[\mathcal{L}_{\bar{1}\,2\,\bar{2}\,3}=Y_{\bar{1}\bar{2}23}\Phi_{ \bar{1},c_{1}}^{\dagger}\left(\Phi_{2,c_{2}}^{\dagger}\big{(}\sigma\cdot\Phi_{ 3,c_{2}}\big{)}\Phi_{\bar{2},c_{1}}\right)+Y_{\bar{1}\bar{2}23}^{\prime}\Phi_{ \bar{1},c_{1}}^{\dagger}\left(\Phi_{2,c_{2}}^{\dagger}\big{(}\sigma\cdot\Phi_{3,c_{1}}\big{)}\Phi_{\bar{2},c_{2}}\right)+\text{h.c.} \tag{121}\]
\[\mathcal{L}_{1\,\bar{1}\,2\,\bar{2}}=Y_{1\bar{1}2\bar{2}}\Phi_{1,c_{1}}^{\dagger}\Phi_{\bar{1},c_{1}}\left(\Phi_{\bar{2},c_{2}}^{\dagger}\Phi_{2,c_{2}}\right)+Y_{1\bar{1}2\bar{2}}^{\prime}\Phi_{1,c_{1}}^{\dagger}\Phi_{\bar{1},c_{2}}\left(\Phi_{\bar{2},c_{2}}^{\dagger}\Phi_{2,c_{1}}\right)+\text{h.c.} \tag{122}\]
## Appendix B Two-loop \(\beta\)-functions of gauge couplings
In this Appendix, we collect the two-loop \(\beta\)-functions of gauge couplings of some of the SLQ models considered in this article.
\(\bullet\) SM \(+\,\Phi_{1}\) \[\beta^{(2)}(g_{1})= \,+\frac{121}{30}g_{1}^{5}+\frac{27}{10}g_{1}^{3}g_{2}^{2}+\frac{148 }{15}g_{1}^{3}g_{3}^{2}-\frac{17}{10}g_{1}^{3}{\rm Tr}\left(Y_{u}^{\dagger}Y_{u }\right)-\frac{1}{2}g_{1}^{3}{\rm Tr}\left(Y_{d}^{\dagger}Y_{d}\right)\] \[\,-\frac{3}{2}g_{1}^{3}{\rm Tr}\left(Y_{e}^{\dagger}Y_{e}\right)- \frac{13}{5}g_{1}^{3}{\rm Tr}\left(Y_{1}^{RR{\dagger}}Y_{1}^{RR}\right)-g_{1} ^{3}{\rm Tr}\left(Y_{1}^{LL{\dagger}}Y_{1}^{LL}\right)\] \[\,-\frac{4}{5}g_{1}^{3}{\rm Tr}\left(Y_{1}^{Q,LL}{}^{*}Y_{1}^{Q, LL}\right)-2g_{1}^{3}{\rm Tr}\left(Y_{1}^{Q,RR{\dagger}}Y_{1}^{Q,RR}\right)\] (14) \[\beta^{(2)}(g_{2})= \,+\frac{9}{10}g_{1}^{2}g_{2}^{3}+\frac{35}{6}g_{2}^{5}+12g_{2}^{3 }g_{3}^{2}-\frac{3}{2}g_{2}^{3}{\rm Tr}\left(Y_{u}^{\dagger}Y_{u}\right)-\frac {3}{2}g_{2}^{3}{\rm Tr}\left(Y_{d}^{\dagger}Y_{d}\right)\] \[\,-\frac{1}{2}g_{2}^{3}{\rm Tr}\left(Y_{e}^{\dagger}Y_{e}\right)-3 g_{2}^{3}{\rm Tr}\left(Y_{1}^{LL{\dagger}}Y_{1}^{LL}\right)-12g_{2}^{3}{\rm Tr }\left(Y_{1}^{Q,LL}{}^{*}Y_{1}^{Q,LL}\right)\] (15) \[\beta^{(2)}(g_{3})= \,+\frac{37}{30}g_{1}^{2}g_{3}^{3}+\frac{9}{2}g_{2}^{2}g_{3}^{3}- \frac{67}{3}g_{3}^{5}-2g_{3}^{3}{\rm Tr}\left(Y_{u}^{\dagger}Y_{u}\right)-2g_{ 3}^{3}{\rm Tr}\left(Y_{d}^{\dagger}Y_{d}\right)\] \[\,-\frac{1}{2}g_{3}^{3}{\rm Tr}\left(Y_{1}^{RR{\dagger}}Y_{1}^{RR} \right)-g_{3}^{3}{\rm Tr}\left(Y_{1}^{LL{\dagger}}Y_{1}^{LL}\right)-8g_{3}^{3} {\rm Tr}\left(Y_{1}^{Q,LL}{}^{*}Y_{1}^{Q,LL}\right)\] \[\,-2g_{3}^{3}{\rm Tr}\left(Y_{1}^{Q,RR{\dagger}}Y_{1}^{Q,RR}\right)\] (16)
* SM \(+\,\Phi_{\tilde{1}}\) \[\beta^{(2)}(g_{1})= +\frac{529}{30}g_{1}^{5}+\frac{27}{10}g_{1}^{3}g_{2}^{2}+\frac{388}{15}g_{1}^{3}g_{3}^{2}-\frac{17}{10}g_{1}^{3}{\rm Tr}\left(Y_{u}^{\dagger}Y_{u}\right)-\frac{1}{2}g_{1}^{3}{\rm Tr}\left(Y_{d}^{\dagger}Y_{d}\right)\] \[-\frac{3}{2}g_{1}^{3}{\rm Tr}\left(Y_{e}^{\dagger}Y_{e}\right)-2g_{1}^{3}{\rm Tr}\left(Y_{\tilde{1}}^{RR{\dagger}}Y_{\tilde{1}}^{RR}\right)+\frac{32}{5}g_{1}^{3}{\rm Tr}\left(Y_{\tilde{1}}^{Q,RR}{}^{*}Y_{\tilde{1}}^{Q,RR}\right)\] (17) \[\beta^{(2)}(g_{2})= +\frac{9}{10}g_{1}^{2}g_{2}^{3}+\frac{35}{6}g_{2}^{5}+12g_{2}^{3}g_{3}^{2}-\frac{3}{2}g_{2}^{3}{\rm Tr}\left(Y_{u}^{\dagger}Y_{u}\right)-\frac{3}{2}g_{2}^{3}{\rm Tr}\left(Y_{d}^{\dagger}Y_{d}\right)\] \[-\frac{1}{2}g_{2}^{3}{\rm Tr}\left(Y_{e}^{\dagger}Y_{e}\right)\] (18) \[\beta^{(2)}(g_{3})= +\frac{97}{30}g_{1}^{2}g_{3}^{3}+\frac{9}{2}g_{2}^{2}g_{3}^{3}-\frac{67}{3}g_{3}^{5}-2g_{3}^{3}{\rm Tr}\left(Y_{u}^{\dagger}Y_{u}\right)-2g_{3}^{3}{\rm Tr}\left(Y_{d}^{\dagger}Y_{d}\right)\] \[-\frac{1}{2}g_{3}^{3}{\rm Tr}\left(Y_{\tilde{1}}^{RR{\dagger}}Y_{\tilde{1}}^{RR}\right)+4g_{3}^{3}{\rm Tr}\left(Y_{\tilde{1}}^{Q,RR}{}^{*}Y_{\tilde{1}}^{Q,RR}\right)\] (19)
* SM \(+\,\Phi_{2}\) \[\beta^{(2)}(g_{1})= \,+\frac{1499}{75}g_{1}^{5}+\frac{87}{5}g_{1}^{3}g_{2}^{2}+\frac {524}{15}g_{1}^{3}g_{2}^{2}-\frac{17}{10}g_{1}^{3}{\rm Tr}\left(Y_{u}^{\dagger}Y _{u}\right)-\frac{1}{2}g_{1}^{3}{\rm Tr}\left(Y_{d}^{\dagger}Y_{d}\right)\] \[\,-\frac{3}{2}g_{1}^{3}{\rm Tr}\left(Y_{e}^{\dagger}Y_{e}\right)- \frac{5}{2}g_{1}^{3}{\rm Tr}\left(Y_{2}^{RL{\dagger}}Y_{2}^{RL}\right)-\frac{ 37}{10}g_{1}^{3}{\rm Tr}\left(Y_{2}^{LR{\dagger}}Y_{2}^{LR}\right)\] (20) \[\beta^{(2)}(g_{2})= \,+\frac{29}{5}g_{1}^{2}g_{2}^{3}+\frac{37}{3}g_{2}^{5}+20g_{2}^{3 }g_{3}^{2}-\frac{3}{2}g_{2}^{3}{\rm Tr}\left(Y_{u}^{\dagger}Y_{u}\right)-\frac{3 }{2}g_{2}^{3}{\rm Tr}\left(Y_{d}^{\dagger}Y_{d}\right)\] \[\,-\frac{1}{2}g_{2}^{3}{\rm Tr}\left(Y_{e}^{\dagger}Y_{e}\right)- \frac{3}{2}g_{2}^{3}{\rm Tr}\left(Y_{2}^{RL{\dagger}}Y_{2}^{RL}\right)-\frac{3 }{2}g_{2}^{3}{\rm Tr}\left(Y_{2}^{LR{\dagger}}Y_{2}^{LR}\right)\] (21) \[\beta^{(2)}(g_{3})= \,+\frac{131}{30}g_{1}^{2}g_{3}^{3}+\frac{15}{2}g_{2}^{2}g_{3}^{3} -\frac{56}{3}g_{3}^{5}-2g_{3}^{3}{\rm Tr}\left(Y_{u}^{\dagger}Y_{u}\right)-2g_{ 3}^{3}{\rm Tr}\left(Y_{d}^{\dagger}Y_{d}\right)\] \[\,-g_{3}^{3}{\rm Tr}\left(Y_{2}^{RL{\dagger}}Y_{2}^{RL}\right)-g_{ 3}^{3}{\rm Tr}\left(Y_{2}^{LR{\dagger}}Y_{2}^{LR}\right)\] (22)
\(\bullet\) SM \(+\,\Phi_{\tilde{2}}\) \[\beta^{(2)}(g_{1})= +\frac{299}{75}g_{1}^{5}+3g_{1}^{3}g_{2}^{2}+\frac{28}{3}g_{1}^{3}g_{3}^{2}-\frac{17}{10}g_{1}^{3}\text{Tr}\left(Y_{u}^{\dagger}Y_{u}\right)-\frac{1}{2}g_{1}^{3}\text{Tr}\left(Y_{d}^{\dagger}Y_{d}\right)\] \[-\frac{3}{2}g_{1}^{3}\text{Tr}\left(Y_{e}^{\dagger}Y_{e}\right)-\frac{13}{10}g_{1}^{3}\text{Tr}\left(Y_{\tilde{2}}^{RL\dagger}Y_{\tilde{2}}^{RL}\right)\] (142) \[\beta^{(2)}(g_{2})= +g_{1}^{2}g_{2}^{3}+\frac{37}{3}g_{2}^{5}+20g_{2}^{3}g_{3}^{2}-\frac{3}{2}g_{2}^{3}\text{Tr}\left(Y_{u}^{\dagger}Y_{u}\right)-\frac{3}{2}g_{2}^{3}\text{Tr}\left(Y_{d}^{\dagger}Y_{d}\right)\] \[-\frac{1}{2}g_{2}^{3}\text{Tr}\left(Y_{e}^{\dagger}Y_{e}\right)-\frac{3}{2}g_{2}^{3}\text{Tr}\left(Y_{\tilde{2}}^{RL\dagger}Y_{\tilde{2}}^{RL}\right)\] (143)
\[\beta^{(2)}(g_{3})= +\frac{7}{6}g_{1}^{2}g_{3}^{3}+\frac{15}{2}g_{2}^{2}g_{3}^{3}-\frac{56}{3}g_{3}^{5}-2g_{3}^{3}\text{Tr}\left(Y_{u}^{\dagger}Y_{u}\right)-2g_{3}^{3}\text{Tr}\left(Y_{d}^{\dagger}Y_{d}\right)\] \[-g_{3}^{3}\text{Tr}\left(Y_{\tilde{2}}^{RL\dagger}Y_{\tilde{2}}^{RL}\right) \tag{144}\]
\(\bullet\) SM \(+\,\Phi_{3}\) \[\beta^{(2)}(g_{1})= +\frac{207}{50}g_{1}^{5}+\frac{15}{2}g_{1}^{3}g_{2}^{2}+12g_{1}^{3}g_{3}^{2}-\frac{17}{10}g_{1}^{3}\text{Tr}\left(Y_{u}^{\dagger}Y_{u}\right)-\frac{1}{2}g_{1}^{3}\text{Tr}\left(Y_{d}^{\dagger}Y_{d}\right)\] \[-\frac{3}{2}g_{1}^{3}\text{Tr}\left(Y_{e}^{\dagger}Y_{e}\right)-3g_{1}^{3}\text{Tr}\left(Y_{3}^{LL\dagger}Y_{3}^{LL}\right)+\frac{12}{5}g_{1}^{3}\text{Tr}\left(Y_{3}^{Q,LL\,*}Y_{3}^{Q,LL}\right) \tag{145}\]
\[\beta^{(2)}(g_{2})= +\frac{5}{2}g_{1}^{2}g_{2}^{3}+\frac{371}{6}g_{2}^{5}+44g_{2}^{3} g_{3}^{2}-\frac{3}{2}g_{2}^{3}\text{Tr}\left(Y_{u}^{\dagger}Y_{u}\right)- \frac{3}{2}g_{2}^{3}\text{Tr}\left(Y_{d}^{\dagger}Y_{d}\right)\] \[-\frac{1}{2}g_{2}^{3}\text{Tr}\left(Y_{e}^{\dagger}Y_{e}\right)-9 g_{2}^{3}\text{Tr}\left(Y_{3}^{LL\dagger}Y_{3}^{LL}\right)+36g_{2}^{3}\text{Tr} \left(Y_{3}^{Q,LL\,*}Y_{3}^{Q,LL}\right) \tag{146}\] \[\beta^{(2)}(g_{3})= +\frac{3}{2}g_{1}^{2}g_{3}^{3}+\frac{33}{2}g_{2}^{2}g_{3}^{3}-15g _{3}^{5}-2g_{3}^{3}\text{Tr}\left(Y_{u}^{\dagger}Y_{u}\right)-2g_{3}^{3}\text{ Tr}\left(Y_{d}^{\dagger}Y_{d}\right)\] \[-3g_{3}^{3}\text{Tr}\left(Y_{3}^{LL\dagger}Y_{3}^{LL}\right)+24g_ {3}^{3}\text{Tr}\left(Y_{3}^{Q,LL\,*}Y_{3}^{Q,LL}\right) \tag{147}\]
\(\bullet\) SM \(+\,\Phi_{2}+\Phi_{3}\) \[\beta^{(2)}(g_{1})= +\frac{1511}{75}g_{1}^{5}+\frac{111}{5}g_{1}^{3}g_{2}^{2}+\frac{572}{15}g_{1}^{3}g_{3}^{2}-\frac{17}{10}g_{1}^{3}\text{Tr}\left(Y_{u}^{\dagger}Y_{u}\right)-\frac{1}{2}g_{1}^{3}\text{Tr}\left(Y_{d}^{\dagger}Y_{d}\right)\] \[-\frac{3}{2}g_{1}^{3}\text{Tr}\left(Y_{e}^{\dagger}Y_{e}\right)-\frac{5}{2}g_{1}^{3}\text{Tr}\left(Y_{2}^{RL\dagger}Y_{2}^{RL}\right)-\frac{37}{10}g_{1}^{3}\text{Tr}\left(Y_{2}^{LR\dagger}Y_{2}^{LR}\right)\] \[-3g_{1}^{3}\text{Tr}\left(Y_{3}^{LL\dagger}Y_{3}^{LL}\right)+\frac{12}{5}g_{1}^{3}\text{Tr}\left(Y_{3}^{Q,LL\,*}Y_{3}^{Q,LL}\right)\] (148)
\[\beta^{(2)}(g_{2})= +\frac{37}{5}g_{1}^{2}g_{2}^{3}+\frac{205}{3}g_{2}^{5}+52g_{2}^{3} g_{3}^{2}-\frac{3}{2}g_{2}^{3}\text{Tr}\left(Y_{u}^{\dagger}Y_{u}\right)-\frac{3}{2}g_{2}^{3} \text{Tr}\left(Y_{d}^{\dagger}Y_{d}\right)\] \[-\frac{1}{2}g_{2}^{3}\text{Tr}\left(Y_{e}^{\dagger}Y_{e}\right)- \frac{3}{2}g_{2}^{3}\text{Tr}\left(Y_{2}^{RL\dagger}Y_{2}^{RL}\right)-\frac{3 }{2}g_{2}^{3}\text{Tr}\left(Y_{2}^{LR\dagger}Y_{2}^{LR}\right)\] \[-9g_{2}^{3}\text{Tr}\left(Y_{3}^{LL\dagger}Y_{3}^{LL}\right)+36g_{2 }^{3}\text{Tr}\left(Y_{3}^{Q,LL\,*}Y_{3}^{Q,LL}\right) \tag{149}\]
\[\beta^{(2)}(g_{3})= +\frac{143}{30}g_{1}^{2}g_{3}^{3}+\frac{39}{2}g_{2}^{2}g_{3}^{3}-\frac{23}{3}g_{3}^{5}-2g_{3}^{3}{\rm Tr}\left(Y_{u}^{\dagger}Y_{u}\right)-2g_{3}^{3}{\rm Tr}\left(Y_{d}^{\dagger}Y_{d}\right)\] \[-g_{3}^{3}{\rm Tr}\left(Y_{2}^{RL\,\dagger}Y_{2}^{RL}\right)-g_{3}^{3}{\rm Tr}\left(Y_{2}^{LR\,\dagger}Y_{2}^{LR}\right)-3g_{3}^{3}{\rm Tr}\left(Y_{3}^{LL\,\dagger}Y_{3}^{LL}\right)\] \[+24g_{3}^{3}{\rm Tr}\left(Y_{3}^{Q,LL\,*}Y_{3}^{Q,LL}\right) \tag{150}\]
## Appendix C One-Loop \(\beta\)-functions of SM Yukawa couplings
This Appendix presents the one-loop \(\beta\)-functions of the SM Yukawa couplings. We use \(n_{1}\), \(n_{\bar{1}}\), \(n_{2}\), \(n_{\bar{2}}\) and \(n_{3}\) to denote the number of generations of \(\Phi_{1}\), \(\Phi_{\bar{1}}\), \(\Phi_{2}\), \(\Phi_{\bar{2}}\) and \(\Phi_{3}\), respectively.
\[\beta^{(1)}(Y_{u})=\frac{3}{2}Y_{u}Y_{u}^{\dagger}Y_{u}-\frac{3}{2 }Y_{d}Y_{d}^{\dagger}Y_{u}+3{\rm Tr}\left(Y_{u}^{\dagger}Y_{u}\right)Y_{u}+3{ \rm Tr}\left(Y_{d}^{\dagger}Y_{d}\right)Y_{u}+{\rm Tr}\left(Y_{e}^{\dagger}Y_{ e}\right)Y_{u}\] \[-\frac{17}{20}g_{1}^{2}Y_{u}-\frac{9}{4}g_{2}^{2}Y_{u}-8g_{3}^{2} Y_{u}+n_{1}\bigg{(}\frac{1}{2}Y_{u}Y_{1}^{RR\,*}Y_{1}^{RR\,\rm T}+Y_{u}Y_{1}^{Q,RR\,*}Y_{1}^ {Q,RR\,\rm T}\] \[+2Y_{1}^{LL\,*}Y_{e}^{*}Y_{1}^{RR\,\rm T}+\frac{1}{2}Y_{1}^{LL\,*} Y_{1}^{LL\,\rm T}Y_{u}+8Y_{1}^{Q,LL\,*}Y_{d}^{*}Y_{1}^{Q,RR\,\rm T}+4Y_{1}^{Q,LL\,*}Y_{ 1}^{Q,LL}Y_{u}\bigg{)}\] \[-n_{\bar{1}}\bigg{(}4Y_{u}Y_{\bar{1}}^{Q,RR\,*}Y_{\bar{1}}^{Q,RR} \bigg{)}+n_{2}\bigg{(}Y_{u}Y_{2}^{RL}Y_{2}^{RL\,\dagger}+2Y_{2}^{LR}Y_{e}^{ \dagger}Y_{2}^{RL\,\dagger}+\frac{1}{2}Y_{2}^{LR}Y_{2}^{LR\,\dagger}Y_{u}\bigg{)}\] \[+n_{3}\bigg{(}\frac{3}{2}Y_{3}^{LL\,*}Y_{3}^{LL\,\rm T}Y_{u}-12Y_{ 3}^{Q,LL\,*}Y_{3}^{Q,LL}Y_{u}\bigg{)} \tag{102}\]
\[\beta^{(1)}(Y_{d})=\frac{3}{2}Y_{d}Y_{d}^{\dagger}Y_{d}-\frac{3}{2}Y_{u}Y_{u}^{\dagger}Y_{d}+3{\rm Tr}\left(Y_{u}^{\dagger}Y_{u}\right)Y_{d}+3{\rm Tr}\left(Y_{d}^{\dagger}Y_{d}\right)Y_{d}+{\rm Tr}\left(Y_{e}^{\dagger}Y_{e}\right)Y_{d}-\frac{1}{4}g_{1}^{2}Y_{d}\] \[-\frac{9}{4}g_{2}^{2}Y_{d}-8g_{3}^{2}Y_{d}+n_{1}\bigg{(}Y_{d}Y_{1}^{Q,RR\,\dagger}Y_{1}^{Q,RR}+\frac{1}{2}Y_{1}^{LL\,*}Y_{1}^{LL\,\rm T}Y_{d}+8Y_{1}^{Q,LL\,*}Y_{u}^{*}Y_{1}^{Q,RR}\] \[+4Y_{1}^{Q,LL\,*}Y_{1}^{Q,LL}Y_{d}\bigg{)}+n_{\bar{1}}\bigg{(}\frac{1}{2}Y_{d}Y_{\bar{1}}^{RR\,*}Y_{\bar{1}}^{RR\,\rm T}\bigg{)}+n_{2}\bigg{(}\frac{1}{2}Y_{2}^{LR}Y_{2}^{LR\,\dagger}Y_{d}\bigg{)}+n_{\bar{2}}\bigg{(}Y_{d}Y_{\bar{2}}^{RL}Y_{\bar{2}}^{RL\,\dagger}\bigg{)}\] \[+n_{3}\bigg{(}\frac{3}{2}Y_{3}^{LL\,*}Y_{3}^{LL\,\rm T}Y_{d}-12Y_{3}^{Q,LL\,*}Y_{3}^{Q,LL}Y_{d}\bigg{)} \tag{103}\]
\[\beta^{(1)}(Y_{e})=\frac{3}{2}Y_{e}Y_{e}^{\dagger}Y_{e}+3{\rm Tr}\left(Y_{u}^{\dagger}Y_{u}\right)Y_{e}+3{\rm Tr}\left(Y_{d}^{\dagger}Y_{d}\right)Y_{e}+{\rm Tr}\left(Y_{e}^{\dagger}Y_{e}\right)Y_{e}-\frac{9}{4}g_{1}^{2}Y_{e}-\frac{9}{4}g_{2}^{2}Y_{e}\] \[+n_{1}\bigg{(}+\frac{3}{2}Y_{e}Y_{1}^{RR\,\dagger}Y_{1}^{RR}+6Y_{1}^{LL\,\dagger}Y_{u}^{*}Y_{1}^{RR}+\frac{3}{2}Y_{1}^{LL\,\dagger}Y_{1}^{LL}Y_{e}\bigg{)}+n_{\bar{1}}\bigg{(}\frac{3}{2}Y_{e}Y_{\bar{1}}^{RR\,\dagger}Y_{\bar{1}}^{RR}\bigg{)}\] \[+n_{2}\bigg{(}3Y_{e}Y_{2}^{LR\,\dagger}Y_{2}^{LR}+6Y_{2}^{RL\,\dagger}Y_{u}^{\dagger}Y_{2}^{LR}+\frac{3}{2}Y_{2}^{RL\,\dagger}Y_{2}^{RL}Y_{e}\bigg{)}+n_{\bar{2}}\bigg{(}\frac{3}{2}Y_{\bar{2}}^{RL\,\dagger}Y_{\bar{2}}^{RL}Y_{e}\bigg{)}\] \[+n_{3}\bigg{(}\frac{9}{2}Y_{3}^{LL\,\dagger}Y_{3}^{LL}Y_{e}\bigg{)} \tag{104}\]
## Appendix D One-Loop Threshold Corrections
In this Appendix, we collect the one-loop threshold corrections of the SM gauge, Yukawa, and quartic couplings on matching the SM with SM\(+\Phi_{1}+\cdots+\Phi_{3}\). To simplify the expressions, we assumed all the LQs to have the same mass \(m\) (for unequal masses, see Ref. [194]) and set the dimensionful trilinear couplings to zero. The superscript \(0\) denotes the original parameters in the SM Lagrangian and we use the shorthand notation \(L_{xm}=1+x\log\left(\frac{\mu^{2}}{m^{2}}\right).\)
\[g_{1}^{(0)}=g_{1}-\frac{3g_{1}^{3}\log\left(\frac{\mu^{2}}{m^{2}}\right)}{32\pi^{2}},\hskip 14.226378ptg_{2}^{(0)}=g_{2}-\frac{3g_{2}^{3}\log\left(\frac{\mu^{2}}{m^{2}}\right)}{32\pi^{2}},\hskip 14.226378ptg_{3}^{(0)}=g_{3}-\frac{3g_{3}^{3}\log\left(\frac{\mu^{2}}{m^{2}}\right)}{64\pi^{2}} \tag{12}\]
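As a short sketch (ours, not from [194]), the gauge-coupling matching above can be transcribed directly; note that at \(\mu=m\) the logarithm vanishes, so the gauge couplings are continuous across the threshold.

```python
# SM gauge couplings g_i^(0) below the LQ scale m from the full-theory couplings g_i.
import numpy as np

def match_gauge(g, mu, m):
    c = np.array([3/32, 3/32, 3/64])  # coefficients of g_i^3 log(mu^2/m^2) / pi^2
    return g - c * g**3 * np.log(mu**2 / m**2) / np.pi**2
```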
\[\lambda^{(0)} =\lambda-\frac{3Y_{\bar{1}3}Y_{\bar{1}3}^{*}\log\left(\frac{\mu^{2}}{m^{2}}\right)}{8\pi^{2}}-\frac{3Y_{2\bar{2}}Y_{2\bar{2}}^{*}\log\left(\frac{\mu^{2}}{m^{2}}\right)}{16\pi^{2}}-\frac{3Y_{\bar{1}}^{2}\log\left(\frac{\mu^{2}}{m^{2}}\right)}{32\pi^{2}}-\frac{3Y_{\bar{2}}^{2}\log\left(\frac{\mu^{2}}{m^{2}}\right)}{16\pi^{2}}\]
\[-\frac{3Y_{\bar{2}\bar{2}}^{2}\log\left(\frac{\mu^{2}}{m^{2}}\right)}{32\pi^{2}}-\frac{3Y_{\bar{2}}Y_{\bar{2}\bar{2}}\log\left(\frac{\mu^{2}}{m^{2}}\right)}{16\pi^{2}}-\frac{3Y_{13}Y_{13}^{*}\log\left(\frac{\mu^{2}}{m^{2}}\right)}{16\pi^{2}}-\frac{3Y_{1}^{2}\log\left(\frac{\mu^{2}}{m^{2}}\right)}{32\pi^{2}}\]
\[-\frac{3Y_{2}^{2}\log\left(\frac{\mu^{2}}{m^{2}}\right)}{16\pi^{2}}-\frac{9Y_{3}^{2}\log\left(\frac{\mu^{2}}{m^{2}}\right)}{32\pi^{2}}-\frac{3Y_{22}^{2}\log\left(\frac{\mu^{2}}{m^{2}}\right)}{32\pi^{2}}-\frac{3Y_{33}^{2}\log\left(\frac{\mu^{2}}{m^{2}}\right)}{16\pi^{2}}-\frac{3Y_{2}Y_{22}\log\left(\frac{\mu^{2}}{m^{2}}\right)}{16\pi^{2}} \tag{13}\]
\[Y_{u}^{(0)} =Y_{u}+\frac{Y_{u}Y_{\bar{1}}^{Q,RR^{\dagger}}Y_{\bar{1}}^{Q,RR^{T}}}{16\pi^{2}}L_{2m}-\frac{Y_{1}^{Q,LL^{*}}Y_{d}^{*}Y_{1}^{Q,RR^{T}}}{4\pi^{2}}L_{m}-\frac{Y_{1}^{LL^{*}}Y_{e}^{*}Y_{1}^{RR^{T}}}{16\pi^{2}}L_{m}\] \[-\frac{Y_{1}^{LL^{*}}Y_{1}^{LL^{T}}Y_{u}}{128\pi^{2}}L_{2m}-\frac{3Y_{3}^{LL^{*}}Y_{3}^{LL^{T}}Y_{u}}{128\pi^{2}}L_{2m}-\frac{Y_{1}^{Q,LL^{*}}Y_{1}^{Q,LL^{T}}Y_{u}}{16\pi^{2}}L_{2m}\] \[+\frac{3Y_{3}^{Q,LL^{*}}Y_{3}^{Q,LL}Y_{u}}{16\pi^{2}}L_{2m}-\frac{Y_{u}Y_{1}^{Q,RR^{*}}Y_{1}^{Q,RR^{T}}}{64\pi^{2}}L_{2m}-\frac{Y_{u}Y_{1}^{RR^{*}}Y_{1}^{RR^{T}}}{128\pi^{2}}L_{2m}\] \[+\frac{Y_{2}^{LR}Y_{e}^{\dagger}Y_{2}^{RL^{\dagger}}}{16\pi^{2}}L_{m}-\frac{Y_{2}^{LR}Y_{2}^{LR^{\dagger}}Y_{u}}{128\pi^{2}}L_{2m}-\frac{Y_{u}Y_{2}^{RL}Y_{2}^{RL^{\dagger}}}{64\pi^{2}}L_{2m} \tag{14}\]
\[Y_{d}^{(0)} =Y_{d}-\frac{Y_{d}Y_{\bar{1}}^{RR^{*}}Y_{\bar{1}}^{RR^{T}}}{128\pi^{2}}L_{2m}-\frac{Y_{d}Y_{\bar{2}}^{RL}Y_{\bar{2}}^{RL^{\dagger}}}{64\pi^{2}}L_{2m}-\frac{Y_{1}^{LL^{*}}Y_{1}^{LL^{T}}Y_{d}}{128\pi^{2}}L_{2m}\] \[-\frac{3Y_{3}^{LL^{*}}Y_{3}^{LL^{T}}Y_{d}}{128\pi^{2}}L_{2m}-\frac{Y_{1}^{Q,LL^{*}}Y_{1}^{Q,LL^{T}}Y_{d}}{16\pi^{2}}L_{2m}+\frac{3Y_{3}^{Q,LL^{*}}Y_{3}^{Q,LL}Y_{d}}{16\pi^{2}}L_{2m}\] \[-\frac{Y_{1}^{Q,LL^{*}}Y_{u}^{*}Y_{1}^{Q,RR}}{4\pi^{2}}L_{m}-\frac{Y_{2}^{LR}Y_{2}^{LR^{\dagger}}Y_{d}}{128\pi^{2}}L_{2m}-\frac{Y_{d}Y_{1}^{Q,RR^{\dagger}}Y_{1}^{Q,RR}}{64\pi^{2}}L_{2m} \tag{15}\]
\[Y_{e}^{(0)}=Y_{e}-\frac{3Y_{\bar{2}}^{RL^{\dagger}}Y_{\bar{2}}^{RL }Y_{e}}{128\pi^{2}}L_{2m}-\frac{3Y_{e}Y_{\bar{1}}^{RR^{\dagger}}Y_{\bar{1}}^{RR} }{128\pi^{2}}L_{2m}-\frac{3Y_{1}^{LL^{\dagger}}Y_{u}^{*}Y_{1}^{RR}}{16\pi^{2}}L _{m}\]
\[-\frac{3Y_{1}^{LL^{\dagger}}Y_{1}^{LL}Y_{e}}{128\pi^{2}}L_{2m}-\frac{9Y_{3}^{LL^{\dagger}}Y_{3}^{LL}Y_{e}}{128\pi^{2}}L_{2m}-\frac{3Y_{e}Y_{2}^{LR^{\dagger}}Y_{2}^{LR}}{64\pi^{2}}L_{2m}\] \[-\frac{3Y_{2}^{RL^{\dagger}}Y_{2}^{RL}Y_{e}}{128\pi^{2}}L_{2m}-\frac{3Y_{e}Y_{1}^{RR^{\dagger}}Y_{1}^{RR}}{128\pi^{2}}L_{2m}+\frac{3Y_{2}^{RL^{\dagger}}Y_{u}^{\dagger}Y_{2}^{LR}}{16\pi^{2}}L_{m} \tag{16}\]
|
2304.00921 | Abstraqt: Analysis of Quantum Circuits via Abstract Stabilizer
Simulation | Stabilizer simulation can efficiently simulate an important class of quantum
circuits consisting exclusively of Clifford gates. However, all existing
extensions of this simulation to arbitrary quantum circuits including
non-Clifford gates suffer from an exponential runtime.
To address this challenge, we present a novel approach for efficient
stabilizer simulation on arbitrary quantum circuits, at the cost of lost
precision. Our key idea is to compress an exponential sum representation of the
quantum state into a single abstract summand covering (at least) all occurring
summands. This allows us to introduce an abstract stabilizer simulator that
efficiently manipulates abstract summands by over-approximating the effect of
circuit operations including Clifford gates, non-Clifford gates, and (internal)
measurements.
We implemented our abstract simulator in a tool called Abstraqt and
experimentally demonstrate that Abstraqt can establish circuit properties
intractable for existing techniques. | Benjamin Bichsel, Anouk Paradis, Maximilian Baader, Martin Vechev | 2023-04-03T12:23:57Z | http://arxiv.org/abs/2304.00921v2 | # Abstraqt: Analysis of Quantum Circuits via Abstract Stabilizer Simulation
Benjamin Bichsel
ETH Zurich
Switzerland
Maximilian Baader
ETH Zurich
Switzerland
Anouk Paradis
ETH Zurich
Switzerland
Martin Vechev
ETH Zurich
Switzerland
###### Abstract
Stabilizer simulation can efficiently simulate an important class of quantum circuits consisting exclusively of Clifford gates. However, all existing extensions of this simulation to arbitrary quantum circuits including non-Clifford gates suffer from an exponential runtime.
In this work, we address this challenge by presenting a novel approach for efficient stabilizer simulation on arbitrary quantum circuits, at the cost of lost precision. Our key idea is to compress an exponential sum representation of the quantum state into a single _abstract_ summand covering (at least) all occurring summands. This allows us to introduce an _abstract stabilizer simulator_ that efficiently manipulates abstract summands by _over-abstracting_ the effect of circuit operations including Clifford gates, non-Clifford gates, and (internal) measurements.
We implemented our abstract simulator in a tool called Abstraqt and experimentally demonstrate that Abstraqt can establish circuit properties intractable for existing techniques.
## 1 Introduction
Stabilizer simulation [1] is a promising technique for efficient classical simulation of quantum circuits consisting exclusively of _Clifford_ gates. Unfortunately, generalizing stabilizer simulation to arbitrary circuits including non-Clifford gates requires exponential time [2, 3, 4, 5, 6, 7]. Specifically, the first such generalization by Aaronson and Gottesman [2, §VII-C] tracks the quantum state \(\rho\) at any point in the quantum circuit as a sum whose number of summands \(m\) grows exponentially with the number of non-Clifford gates:
\[\rho=\sum_{i=1}^{m}c_{i}P_{i}\prod_{j=1}^{n}\tfrac{1+(-1)^{k_{ij}}Q_{j}}{2}. \tag{1}\]
Here, while \(c_{i}\), \(P_{i}\), \(k_{ij}\), and \(Q_{j}\) can be represented efficiently (see §2), the overall representation is inefficient due to the exponentially large \(m\).
Abstraction.The key idea of this work is to avoid tracking the exact state \(\rho\) of a quantum system and instead only track key aspects of \(\rho\), by _over-approximating_ the set of possible summands.
To this end, we rely on the established framework of abstract interpretation [8, 9], which is traditionally used to analyze classical programs [10, 11] or neural networks [12] by describing sets of possible states without explicitly enumerating all of them. Here, we use abstract interpretation to describe sets of possible summands.
Merging Summands.This allows us to curb the exponential blow-up of stabilizer simulation by merging multiple summands in Eq. (1) into a single summand which over-approximates all covered summands, at the cost of lost precision. The key technical challenge addressed by our work is designing a suitable _abstract domain_ to describe sets of summands, accompanied by the corresponding _abstract transformers_ to over-approximate the actions performed by the original exponential stabilizer simulation on individual summands.
As a result, our approach is both efficient and exact on Clifford circuits, as these circuits never require merging summands. On non-Clifford circuits, merging summands trades precision for efficiency. Moreover, our approach naturally allows us to merge the possible outcomes of a measurement into a single abstract state, preventing an exponential path explosion when simulating multiple internal measurements.
Main Contributions. Our main contributions are:
* An abstract domain (SS4) to over-approximate a quantum state represented by Eq. (1).
* Abstract transformers (SS5) to simulate quantum circuits, including gate applications and measurements.
* An efficient implementation1 of our approach in a tool called Abstraqt (SS6), together with an evaluation showing that Abstraqt can establish circuit properties that are intractable for existing tools (SS7).
Footnote 1: Our implementation is available at [https://github.com/eth-sri/abstraqt](https://github.com/eth-sri/abstraqt).
Outlook. This work trades precision for efficiency by over-abstracting the very first stabilizer simulation generalized to non-Clifford gates by Aaronson and Gottesman [2, SSVII-C], see Eq. (1). As discussed in SS7.4, we believe that our encouraging results pave the way to introduce analogous abstraction to various follow-up works which improve upon this simulation [4, 5, 6, 7]. As these more recent works scale better than [2, SSVII-C], we expect that a successful application of abstract interpretation to them will yield even more favorable trade-offs between precision and efficiency.
## 2 Background
We first introduce the necessary mathematical concepts.
Basic Notation. We use \(\mathbb{Z}_{n}:=\mathbb{Z}/(n\mathbb{Z})\), define \(\mathbb{B}:=\mathbb{Z}_{2}\), and write \(2^{S}\) for the power set of the set \(S\). We represent an \(n\)-qubit quantum state either using vectors \(\psi\in\mathbb{C}^{2^{n}}\) or density matrices \(\rho\in\mathbb{C}^{2^{n}\times 2^{n}}\). We denote the embedding of a \(k\)-qubit gate \(U\in\mathcal{U}(2^{k})\) as an \(n\)-qubit gate by \(U_{(i)}:=\mathbb{l}_{2^{i}}\otimes U\otimes\mathbb{l}_{2^{n-i-k}}\), where \(\mathbb{l}_{l}\) denotes the \(l\times l\) identity matrix.
Stabilizer Simulation. The key idea of stabilizer simulation [1, 2] is representing quantum states \(\rho=\psi\psi^{\dagger}\) implicitly, by _stabilizers_ \(Q\) which stabilize the state \(\psi\), that is \(Q\psi=\psi\). As shown in [2], appropriately selecting \(n\) stabilizers \(Q_{j}\) then specifies a unique \(n\)-qubit state \(\rho=\prod_{j=1}^{n}\frac{1+Q_{j}}{2}\).
In stabilizer simulation, all \(Q_{j}\) are _Pauli elements_ from \(\mathcal{P}_{n}\) of the form \(\mathrm{i}^{v}\cdot P^{(0)}\otimes\cdots\otimes P^{(n-1)}\), where \(P^{(j)}\in\{X,Y,Z,\mathbb{l}_{2}\}\) and \(v\in\mathbb{Z}_{4}\). This directly implies that all stabilizers for some state \(\psi\) commute, that is \(Q_{i}Q_{j}=Q_{j}Q_{i}\), as elements from the Pauli group \(\mathcal{P}_{n}\) either commute or anti-commute. These elements can be represented efficiently in memory by storing \(v\) and \(P^{(0)},\ldots,P^{(n-1)}\). In App. B, we list states stabilized by Pauli matrices (Tab. 6) and the results of multiplying Pauli matrices (Tab. 7). Further, for this work we use the functions _bare_ \(\mathsf{b}\colon\mathcal{P}_{n}\to\mathcal{P}_{n}\) and _prefactor_ \(\mathsf{f}\colon\mathcal{P}_{n}\to\mathbb{Z}_{4}\), which extract the Pauli matrices without the prefactor and the prefactor, respectively:
\[\mathsf{f}(\mathrm{i}^{v}P^{(0)}\otimes\cdots\otimes P^{(n-1)}) =v, \tag{2}\] \[\mathsf{b}(\mathrm{i}^{v}P^{(0)}\otimes\cdots\otimes P^{(n-1)}) =P^{(0)}\otimes\cdots\otimes P^{(n-1)}. \tag{3}\]
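To make this representation concrete, the following Python sketch (ours, purely illustrative; Abstraqt's actual bit-level encoding is described in SS6) stores a Pauli element as a prefactor exponent \(v\) together with one Pauli letter per qubit, and implements \(\mathsf{f}\), \(\mathsf{b}\), and the product of Pauli elements:

```python
from dataclasses import dataclass

# Single-qubit multiplication table: (P, Q) -> (phase exponent in Z_4, bare result),
# e.g., X*Y = i*Z, so PAULI_MUL[("X", "Y")] == (1, "Z").
PAULI_MUL = {
    ("I", "I"): (0, "I"), ("I", "X"): (0, "X"), ("I", "Y"): (0, "Y"), ("I", "Z"): (0, "Z"),
    ("X", "I"): (0, "X"), ("X", "X"): (0, "I"), ("X", "Y"): (1, "Z"), ("X", "Z"): (3, "Y"),
    ("Y", "I"): (0, "Y"), ("Y", "X"): (3, "Z"), ("Y", "Y"): (0, "I"), ("Y", "Z"): (1, "X"),
    ("Z", "I"): (0, "Z"), ("Z", "X"): (1, "Y"), ("Z", "Y"): (3, "X"), ("Z", "Z"): (0, "I"),
}

@dataclass(frozen=True)
class Pauli:
    v: int            # prefactor exponent: the element is i^v * (tensor of letters)
    letters: tuple    # one letter per qubit, e.g. ("X", "I", "Z")

def f(p: Pauli) -> int:       # prefactor, Eq. (2)
    return p.v

def b(p: Pauli) -> Pauli:     # bare Pauli, Eq. (3)
    return Pauli(0, p.letters)

def mul(p: Pauli, q: Pauli) -> Pauli:
    """Component-wise product; phases accumulate modulo 4."""
    v = (p.v + q.v) % 4
    out = []
    for a, c in zip(p.letters, q.letters):
        dv, r = PAULI_MUL[(a, c)]
        v = (v + dv) % 4
        out.append(r)
    return Pauli(v, tuple(out))

assert mul(Pauli(0, ("X", "I")), Pauli(0, ("Y", "I"))) == Pauli(1, ("Z", "I"))  # X*Y = i*Z
```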
Applying gate \(U\) to state \(\rho\) can be reduced to conjugating the stabilizers \(Q_{j}\) with \(U\):
\[U\rho U^{\dagger}=U\Big{(}\prod_{j=1}^{n}\tfrac{\mathbb{l}+Q_{j}}{2}\Big{)}U^{\dagger}\stackrel{{[2]}}{{=}}\prod_{j=1}^{n}\tfrac{\mathbb{l}+UQ_{j}U^{\dagger}}{2}. \tag{4}\]
While Eq. (4) holds for any gate \(U\), stabilizer simulation can only exploit it if \(UQ_{j}U^{\dagger}\in\mathcal{P}_{n}\). _Clifford gates_ such as \(S\), \(H\), \(CNOT\), \(\mathbb{l}\), \(X\), \(Y\), and \(Z\) satisfy this for any \(Q_{j}\in\mathcal{P}_{n}\).
To also support the application of non-Clifford gates such as \(T\) gates, we follow [2, Sec. VII.C] and represent \(\rho\) more generally as
\[\rho=\sum_{i=1}^{m}c_{i}P_{i}\prod_{j=1}^{n}\tfrac{1+(-1)^{b_{ij}}Q_{j}}{2},\]
for \(c_{i}\in\mathbb{C}\), \(P_{i}\in\mathcal{P}_{n}\), \(b_{ij}\in\mathbb{B}\), and \(Q_{j}\in\mathcal{P}_{n}\). Here, applying \(U\) to \(\rho\) amounts to replacing \(P_{i}\) by \(UP_{i}U^{\dagger}\) and \(Q_{j}\) by \(UQ_{j}U^{\dagger}\), which we can exploit if both \(UP_{i}U^{\dagger}\) and \(UQ_{j}U^{\dagger}\) lie in \(\mathcal{P}_{n}\).
Otherwise, we decompose2 \(U\) into the sum \(\sum_{p}d_{p}R_{p}\), where \(d_{p}\in\mathbb{C}\) and \(R_{p}\in\mathfrak{b}(\mathcal{P}_{n})\) are bare Pauli elements, which have a prefactor of \(\mathrm{i}^{0}=1\). Then,
Footnote 2: This decomposition always exists and is unique, as bare Pauli elements span (more than) \(\mathcal{U}(2^{n})\).
\[U\rho U^{\dagger}=\Big{(}\sum_{p}d_{p}R_{p}\Big{)}\Big{(}\sum_{i}c_{i}P_{i}\prod_{j=1}^{n}\tfrac{1+(-1)^{b_{ij}}Q_{j}}{2}\Big{)}\Big{(}\sum_{q}d_{q}R_{q}\Big{)}^{\dagger}\stackrel{{[2]}}{{=}}\sum_{piq}c_{piq}P_{piq}\prod_{j=1}^{n}\tfrac{1+(-1)^{b_{ijq}}Q_{j}}{2}, \tag{5}\]
for \(c_{piq}=d_{p}c_{i}d_{q}^{*}\in\mathbb{C}\), \(P_{piq}=R_{p}P_{i}R_{q}\in\mathcal{P}_{n}\), and \(b_{ijq}=b_{ij}+Q_{j}\diamond R_{q}\in\mathbb{B}\). Here, \(d_{q}^{*}\) denotes the complex conjugate of \(d_{q}\), \(+\) denotes addition modulo \(2\), and \(Q_{j}\diamond R_{q}\) is the commutator, defined as \(0\) if \(Q_{j}\) and \(R_{q}\) commute and \(1\) otherwise. Note that \(\cdot\diamond\cdot\colon\mathcal{P}_{n}\times\mathcal{P}_{n}\to\mathbb{B}\) has the highest precedence.
Overall, the decomposition of a \(k\)-qubit non-Clifford gate results in at most \(4^{k}\) summands, thus blowing up the number of summands in our representation by at most \(4^{k}\cdot 4^{k}=16^{k}\). In practice, the blow-up is typically smaller, e.g., decomposing a \(T\) gate only requires \(2\) summands, while decomposing a \(CCNOT\) gate requires \(8\) summands.
Measurement. Measuring in bare Pauli basis \(P\in\mathfrak{b}(\mathcal{P}_{n})\) yields one of two possible quantum states. They can be computed by applying the two _projections_ \(P_{+}:=\tfrac{1+P}{2}\) and \(P_{-}=\tfrac{1-P}{2}\), resulting in states \(\rho_{+}=P_{+}\rho P_{+}\) and \(\rho_{-}=P_{-}\rho P_{-}\), respectively. For example, collapsing the \(i^{\text{th}}\) qubit to \(|0\rangle\) or \(|1\rangle\) corresponds to measuring in Pauli basis \(Z_{(i)}\). The probability of outcome \(\rho_{+}\) is \(\operatorname{tr}\left(\rho_{+}\right)\), and analogously for \(\rho_{-}\). We discuss in SS5 how measurements are performed in [2, Sec. VII.C].
Abstract Interpretation. Abstract interpretation [8] is a framework for formalizing approximate but sound calculation. An _abstraction_ consists of ordered sets \((2^{\mathcal{X}},\subseteq)\) and \((\boldsymbol{\mathcal{X}},\leq)\), where \(\mathcal{X}\) and \(\boldsymbol{\mathcal{X}}\) are called _concrete set_ and _abstract set_ respectively, together with a _concretization function_ \(\gamma\colon\boldsymbol{\mathcal{X}}\to 2^{\mathcal{X}}\) which indicates which concrete elements \(x\in\gamma(\boldsymbol{x})\subseteq\mathcal{X}\) are represented by the abstract element \(\boldsymbol{x}\). Additionally, \(\bot\in\boldsymbol{\mathcal{X}}\) refers to \(\emptyset=\gamma(\bot)\subseteq\mathcal{X}\) and \(\top\in\boldsymbol{\mathcal{X}}\) refers to \(\mathcal{X}=\gamma(\top)\).
An abstract transformer \(f^{\sharp}\colon\boldsymbol{\mathcal{X}}\to\boldsymbol{\mathcal{X}}\) of a function \(f\colon\mathcal{X}\to\mathcal{X}\) satisfies \(\gamma\circ f^{\sharp}(\boldsymbol{x})\supseteq f\circ\gamma(\boldsymbol{x})\) for all \(\boldsymbol{x}\in\boldsymbol{\mathcal{X}}\), where \(f\) was lifted to operate on subsets of \(\mathcal{X}\). This ensures that \(f^{\sharp}\) (over-)approximates \(f\), a property referred to as _soundness_ of \(f^{\sharp}\). Abstract transformers can analogously be defined for functions \(f\colon\mathcal{X}^{n}\to\mathcal{X}\). Further, we introduce the _join_ \(\sqcup\colon\boldsymbol{\mathcal{X}}\times\boldsymbol{\mathcal{X}}\to\boldsymbol{\mathcal{X}}\), satisfying \(\gamma(\boldsymbol{x})\cup\gamma(\boldsymbol{y})\subseteq\gamma(\boldsymbol{x}\sqcup\boldsymbol{y})\). Throughout this work, we distinguish abstract objects \(\boldsymbol{x}\in\boldsymbol{\mathcal{X}}\) and concrete objects \(x\in\mathcal{X}\) by stylizing them in bold or non-bold respectively.
As an example, a common abstraction is the interval abstraction with \(\mathcal{X}=\mathbb{R}\). The abstract set is the set of intervals \(\boldsymbol{\mathcal{X}}=\{(l,u)\mid l,u\in\mathbb{R}\cup\{\pm\infty\}\}\), where \(\boldsymbol{x}=(l,u)\) is to be understood as a tuple and not as an open interval. The concretization function \(\gamma\colon\boldsymbol{\mathcal{X}}\to 2^{\mathcal{X}}\) maps these tuples to sets: \(\gamma(\boldsymbol{x})=[l,u]=\{y\in\mathbb{R}\mid l\leq y\leq u\}\). Further, \(\top=(-\infty,\infty)\) and \(\bot=(l,u)\) for \(l>u\). Common abstract transformers for the interval abstraction are shown in Tab. 1.
The transformers in Tab. 1 are _precise_, meaning that for \(f\colon\mathbb{R}\to\mathbb{R}\), we have that \(f^{\sharp}((l,u))=(\min_{l\leq v\leq u}f(v),\max_{l\leq v\leq u}f(v))\) and analogously for \(f\colon\mathbb{R}^{n}\to\mathbb{R}\). An abstract transformer for
\begin{table}
\begin{tabular}{c l l} \hline \hline
Function & Abstract Transformer & Efficient Closed Form \\ \hline
\(+\) & \([l_{1},u_{1}]+^{\sharp}[l_{2},u_{2}]=[l^{\prime},u^{\prime}]\) & \(l^{\prime}=l_{1}+l_{2}\) and \(u^{\prime}=u_{1}+u_{2}\) \\
\(\cdot\) & \([l_{1},u_{1}]\cdot^{\sharp}[l_{2},u_{2}]=[l^{\prime},u^{\prime}]\) & \(l^{\prime}=\min(l_{1}l_{2},l_{1}u_{2},u_{1}l_{2},u_{1}u_{2})\), analogously for \(u^{\prime}\) \\
\(\exp\) & \(\exp^{\sharp}([l,u])=[l^{\prime},u^{\prime}]\) & \(l^{\prime}=\exp(l)\) and \(u^{\prime}=\exp(u)\) \\
\(\cos\) & \(\cos^{\sharp}([l,u])=[l^{\prime},u^{\prime}]\) & exists, several case distinctions necessary \\
\(\sqcup\) & \([l_{1},u_{1}]\sqcup[l_{2},u_{2}]=[l^{\prime},u^{\prime}]\) & \(l^{\prime}=\min(l_{1},l_{2})\) and \(u^{\prime}=\max(u_{1},u_{2})\) \\ \hline \hline
\end{tabular}
\end{table}
Table 1: Transformers for the interval abstraction.
a composition of functions \(f\circ g\) is the composition of the abstract transformers. Although this is sound, it is not necessarily precise: let \(g\colon\mathbb{R}\to\mathbb{R}^{2}\) with \(g(x)=\left(\begin{smallmatrix}x\\ x\end{smallmatrix}\right)\) and \(f\colon\mathbb{R}^{2}\to\mathbb{R}\) with \(f(x,y)=x\cdot y\), then \(f\circ g(x)=x^{2}\), but \(f^{\sharp}\circ g^{\sharp}((-2,2))=(-4,4)\) whereas a precise transformer would map \((-2,2)\) to \((0,4)\).
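The following self-contained Python sketch (ours, for illustration only) implements the interval transformers of Tab. 1 and reproduces the imprecision of composing abstract transformers:

```python
import math

class Interval:
    def __init__(self, lo: float, hi: float):
        self.lo, self.hi = lo, hi

    def __add__(self, other):           # [l1,u1] + [l2,u2]
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):           # extremes occur at the corners
        prods = [self.lo * other.lo, self.lo * other.hi,
                 self.hi * other.lo, self.hi * other.hi]
        return Interval(min(prods), max(prods))

    def join(self, other):              # smallest interval covering both
        return Interval(min(self.lo, other.lo), max(self.hi, other.hi))

    def exp(self):                      # exp is monotone
        return Interval(math.exp(self.lo), math.exp(self.hi))

x = Interval(-2, 2)
sq = x * x
print(sq.lo, sq.hi)  # -4.0 4.0: the composed transformer forgets that x*x = x^2 >= 0
```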
Notational Convention. In slight abuse of notation, throughout this work we may write the concretization of abstract elements instead of the abstract element itself. For example, for \((0,1)\in\mathsf{R}\), we may write \([0,1]\) instead of \((0,1)\).
## 3 Overview
We illustrate our approach on the small two-qubit circuit in Fig. 1, which applies Clifford gates, a non-Clifford \(T\) gate, and a final measurement. Following the initialization described in SS5, the initial state \(\ket{00}\) corresponds to the abstract density matrix \(\boldsymbol{\rho_{A}}=e^{[0,0]+[0,0]\mathrm{i}}\{\mathbb{l}\}\frac{1+(-1)^{\{0\}}Z_{(0)}}{2}\frac{1+(-1)^{\{0\}}Z_{(1)}}{2}\).
Clifford Gate Application. First, the circuit applies one Hadamard gate \(H\) to each qubit. This corresponds to the unitary operator \(H_{(0)}H_{(1)}\), yielding updated abstract density matrix \(\boldsymbol{\rho_{B}}=(H_{(0)}H_{(1)})\boldsymbol{\rho_{A}}(H_{(0)}H_{(1)})^{\dagger}\). Just as for concrete density matrices (see SS2), this amounts to replacing
\[\{\mathbb{l}\}\ \ \text{by}\ \ (H_{(0)}H_{(1)})\{\mathbb{l}\}(H_{(0)}H_{(1)})^{ \dagger}=\{\mathbb{l}\},\] \[Z_{(0)}\ \ \text{by}\ \ (H_{(0)}H_{(1)})Z_{(0)}(H_{(0)}H_{(1)})^{ \dagger}=X_{(0)},\text{ and}\] \[Z_{(1)}\ \ \text{by}\ (H_{(0)}H_{(1)})Z_{(1)}(H_{(0)}H_{(1)})^{ \dagger}=X_{(1)}.\]
We hence get \(\boldsymbol{\rho_{B}}=e^{[0,0]+[0,0]\mathrm{i}}\{\mathbb{l}\}\frac{1+(-1)^{\{0\}}X_{(0)}}{2}\frac{1+(-1)^{\{0\}}X_{(1)}}{2}\).
Non-Clifford Gate Application. Next, the circuit applies gate \(T\) on the upper qubit. To this end, we again follow the simulation described in SS2. We first decompose \(T\) into Pauli elements: \(T_{(0)}=d_{1}\mathbb{l}+d_{2}Z_{(0)}\), where \(d_{1}\approx e^{-0.1+0.4\mathrm{i}}\) and \(d_{2}\approx e^{-1.0-1.2\mathrm{i}}\). Replacing \(T\) with its decomposition, we can then write \(\boldsymbol{\rho_{T}}=T\boldsymbol{\rho_{B}}T^{\dagger}\), using Eq. (5), as:
\[\boldsymbol{\rho_{T}}=\left(d_{1}\mathbb{l}+d_{2}Z_{(0)}\right)\left(e^{[0,0]+[0,0]\mathrm{i}}\{\mathbb{l}\}\frac{1+(-1)^{\{0\}}X_{(0)}}{2}\frac{1+(-1)^{\{0\}}X_{(1)}}{2}\right)\left(d_{1}\mathbb{l}+d_{2}Z_{(0)}\right)^{\dagger}.\]
Analogously to SS2, we can rewrite this to:
\[\boldsymbol{c_{1}}\{\mathbb{l}\}\frac{1+(-1)^{\{0\}}X_{(0)}}{2}\frac{1+(-1)^{\{0\}}X_{(1)}}{2}\] \[+\boldsymbol{c_{2}}\{Z_{(0)}\}\frac{1+(-1)^{\{1\}}X_{(0)}}{2}\frac{1+(-1)^{\{0\}}X_{(1)}}{2}\] \[+\boldsymbol{c_{3}}\{Z_{(0)}\}\frac{1+(-1)^{\{0\}}X_{(0)}}{2}\frac{1+(-1)^{\{0\}}X_{(1)}}{2}\] \[+\boldsymbol{c_{4}}\{\mathbb{l}\}\frac{1+(-1)^{\{1\}}X_{(0)}}{2}\frac{1+(-1)^{\{0\}}X_{(1)}}{2},\]
where
\[\boldsymbol{c_{1}} =d_{1}e^{[0,0]+[0,0]\mathrm{i}}d_{1}^{*}\approx e^{[-0.2,-0.2]+[0,0]\mathrm{i}},\] \[\boldsymbol{c_{2}} =d_{1}e^{[0,0]+[0,0]\mathrm{i}}d_{2}^{*}\approx e^{[-1.1,-1.1]+[1.6,1.6]\mathrm{i}},\] \[\boldsymbol{c_{3}} =d_{2}e^{[0,0]+[0,0]\mathrm{i}}d_{1}^{*}\approx e^{[-1.1,-1.1]+[-1.6,-1.6]\mathrm{i}},\] \[\boldsymbol{c_{4}} =d_{2}e^{[0,0]+[0,0]\mathrm{i}}d_{2}^{*}\approx e^{[-2.0,-2.0]+[0,0]\mathrm{i}}.\]
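As a quick numerical sanity check (ours, not part of the original derivation), the decomposition of \(T\) and the coefficient \(\boldsymbol{c_{2}}\) can be verified with NumPy:

```python
import numpy as np

T = np.diag([1, np.exp(1j * np.pi / 4)])
d1 = (1 + np.exp(1j * np.pi / 4)) / 2   # coefficient of the identity
d2 = (1 - np.exp(1j * np.pi / 4)) / 2   # coefficient of Z
I2, Z = np.eye(2), np.diag([1, -1])
assert np.allclose(T, d1 * I2 + d2 * Z)

# log-polar coordinates r + phi*i, matching the values in the text
print(np.log(d1))                 # approx. -0.08+0.39j, i.e. e^{-0.1+0.4i}
print(np.log(d2))                 # approx. -0.96-1.18j, i.e. e^{-1.0-1.2i}
print(np.log(d1 * np.conj(d2)))   # approx. -1.04+1.57j, matching c_2
```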
Merging Summands. Unfortunately, simply applying \(T\) gates as shown above may thus increase the number of summands in the abstract density matrix by a factor of \(4\). To counteract this, our key idea is to merge summands, by allowing a single abstract summand to represent multiple concrete ones, resulting in reduced computation overhead at the cost of lost precision. Our abstract representation allows for a straightforward merge: we take the union of sets and join intervals. Specifically, for complex numbers, we join the intervals in their representation, obtaining:
\[\boldsymbol{c}:=\boldsymbol{c_{1}}\sqcup\boldsymbol{c_{2}}\sqcup\boldsymbol{c_{3}}\sqcup\boldsymbol{c_{4}}=e^{[-2.0,-0.2]+[-1.6,1.6]\mathrm{i}}.\]
Finally, we introduce the symbol \(\star\) to denote how many concrete summands an abstract summand represents. Altogether, merging the summands in \(\boldsymbol{\rho_{T}}\) yields:
\[\boldsymbol{\rho_{C}}=4\star e^{[-2.0,-0.2]+[-1.6,1.6]\mathrm{i}}\{\mathbb{l},Z_{(0)}\}\frac{1+(-1)^{\{0,1\}}X_{(0)}}{2}\frac{1+(-1)^{\{0\}}X_{(1)}}{2}.\]
Note that for an abstract element \(\boldsymbol{x}\), \(r\star\boldsymbol{x}\) is not equivalent to \(r\cdot\boldsymbol{x}\). For example, \(2\star\{0,1\}=\{0,1\}+\{0,1\}=\{0,1,2\}\), while \(2\cdot\{0,1\}=\{0,2\}\).
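A short Python sketch (ours) makes this distinction concrete for sets of integers:

```python
def star(r, xs):
    """r-fold abstract sum r*xs = xs + ... + xs, lifted to sets."""
    acc = {0}
    for _ in range(r):
        acc = {a + x for a in acc for x in xs}
    return acc

assert star(2, {0, 1}) == {0, 1, 2}         # 2 star {0,1}: all sums of two elements
assert {2 * x for x in {0, 1}} == {0, 2}    # ordinary scalar multiplication
```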
Measurement. After the \(T\) gate, the circuit applies two additional \(CNOT\) gates, resulting in the updated density matrix:
\[\boldsymbol{\rho_{D}}=4\star e^{[-2.0,-0.2]+[-1.6,1.6]\mathrm{i}}\{\mathbb{l},Z_{(0)}\}\frac{1+(-1)^{\{0,1\}}X_{(0)}X_{(1)}}{2}\frac{1+(-1)^{\{0\}}X_{(0)}}{2}.\]
Finally, the circuit applies the projection \(M_{-}=\frac{1-X_{(0)}}{2}\). To update the density matrix accordingly, we closely follow [2], which showed that measurement can be reduced to simple state updates through a case distinction on \(M_{-}\) and the state \(\rho\). If (i) the measurement Pauli (here \(-X_{(0)}\)) commutes with the product Paulis (here \((-1)^{\{0,1\}}X_{(0)}X_{(1)}\) and \((-1)^{\{0\}}X_{(0)}\)) and (ii) the measurement Pauli cannot be written as a product of the product Paulis, the density matrix after measurement is \(0\). We will explain in SS5.2 how our abstract domain allows both of these checks to be performed efficiently.
Here, both conditions are satisfied, and we hence get the final state \(\mathbf{\rho_{M1}}=0\). We can then compute the probability of such an outcome by \(p=\operatorname{tr}\left(\mathbf{\rho_{M1}}\right)=0\). Thus, our abstract representation was able to provide a fully precise result.
Imprecise Measurement. Suppose now that instead of the measurement in Fig. 1, we had collapsed the lower qubit to \(\ket{0}\) by applying projection \(M_{0}=\frac{1+Z_{(1)}}{2}\).
To derive the resulting state, we again follow [2] closely. We note that the measurement Pauli \(+Z_{(1)}\) (i) anticommutes with the first product Pauli \((-1)^{\{0,1\}}X_{(0)}X_{(1)}\) and commutes with the second one \((-1)^{\{0\}}X_{(0)}\) and (ii) commutes with the initial Paulis \(\{\mathbb{l},Z_{(0)}\}\). In this case, the abstract summand is essentially unchanged: \(\boldsymbol{\rho_{M2}}\) only replaces the first product Pauli of \(\boldsymbol{\rho_{D}}\) by \(Z_{(1)}\) and introduces a factor \(\frac{1}{2}\). To compute the trace of this matrix, we follow the procedure outlined in SS5.4. We omit intermediate steps here and get: 3
Footnote 3: We used the precise interval bounds for \(\mathbf{c}\) here, not the rounded values provided earlier.
\[p=\operatorname{tr}\left(\boldsymbol{\rho_{M2}}\right)=2\operatorname{Re}(\boldsymbol{c})\approx[0,1.7].\]
Thus, our abstraction here is highly imprecise and does not yield any information on the measurement result (we already knew that the probability must lie in \([0,1]\)).
## 4 Abstract Domains
In the following, we formalize all abstract domains (Tab. 2) underlying our abstract representation of density matrices \(\mathbf{\rho}\) along with key abstract transformers operating on them (Tab. 3). We note that all abstract transformers introduced here naturally also support (partially) concrete arguments.
Example Elements. Tab. 2 provides example elements of each abstract domain and exemplifies the respective concretization functions \(\gamma\colon\boldsymbol{\mathcal{X}}\to 2^{\mathcal{X}}\). While Tab. 2 correctly distinguishes abstract elements from their concretization, in the following, when describing operators we write concretizations instead of abstract elements (as announced in SS2).
Booleans and \(\mathbb{Z}_{4}\). Abstract booleans \(\mathbf{b}\in\blacksquare=2^{\mathbb{B}}\) are subsets of \(\mathbb{B}\), as exemplified in Tab. 2. The addition of two abstract booleans naturally lifts boolean addition to sets and is clearly sound:
\[\mathbf{b}+\mathbf{c}=\{b+c\mid b\in\mathbf{b},c\in\mathbf{c}\}. \tag{7}\]
We define multiplication of abstract booleans analogously. Further, we define the join of two abstract booleans as their set union.
Analogously to booleans, our abstract domain \(\mathsf{Z}_{4}\) consists of subsets of \(\mathbb{Z}_{4}\), where addition, subtraction, multiplication, and join work analogously to abstract booleans. Further, we can straightforwardly embed abstract booleans into \(\mathsf{Z}_{4}\) by mapping \(0\) to \(0\) and \(1\) to \(1\).
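A minimal Python sketch (ours) of these lifted transformers, using plain sets in place of the bit-pattern encodings from SS6:

```python
def lift(op, xs, ys, mod):
    """Lift a binary operation to sets (Eq. 7), reducing modulo `mod`."""
    return {op(x, y) % mod for x in xs for y in ys}

add_B  = lambda xs, ys: lift(lambda a, b: a + b, xs, ys, 2)   # abstract booleans
add_Z4 = lambda xs, ys: lift(lambda a, b: a + b, xs, ys, 4)
sub_Z4 = lambda xs, ys: lift(lambda a, b: a - b, xs, ys, 4)
join   = lambda xs, ys: xs | ys                               # join is set union

assert add_B({0}, {0, 1}) == {0, 1}
assert sub_Z4({0, 3}, {2}) == {1, 2}
assert join({0}, {1}) == {0, 1}
```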
Real Numbers. We abstract real numbers by intervals \([\underline{a},\overline{a}]\subseteq\mathbb{R}\cup\{\pm\infty\}\), and denote the set of such intervals by \(\mathsf{R}\). Interval addition, interval multiplication, and the cosine and exponential transformer on intervals are defined in their standard way, see SS2.
Complex Numbers. We parametrize complex numbers \(c\in\mathbb{C}\) in polar coordinates (with magnitude in log-space), as \(c=e^{r+\varphi\mathrm{i}}\) for \(r,\varphi\in\mathbb{R}\). For example, we parametrize \(0\) as \(e^{-\infty+0\mathrm{i}}\).
Based on this parametrization, we abstract complex numbers using two real intervals for \(r\) and \(\varphi\) respectively, as exemplified in Tab. 2. Formally, we interpret \(\mathbf{c}\in\mathsf{C}\) as the set of all possible outcomes when instantiating both intervals:
\[\gamma(\mathbf{c})=e^{[\underline{r},\overline{r}]+[\underline{\varphi}, \overline{\varphi}]{\rm i}}=\left\{e^{r+\varphi{\rm i}}\;\middle|\;r\in[ \underline{r},\overline{r}],\varphi\in[\underline{\varphi},\overline{\varphi} ]\right\}.\]
We can compute the multiplication and join of two abstract complex numbers \(\mathbf{c}=e^{[\underline{r},\overline{r}]+[\underline{\varphi},\overline{ \varphi}]{\rm i}}\) and \(\mathbf{c^{\prime}}=e^{[\underline{r}^{\prime},\overline{r}^{\prime}]+[\underline {\varphi}^{\prime},\overline{\varphi}^{\prime}]{\rm i}}\) as
\[\mathbf{c}\cdot\mathbf{c}^{\prime} =e^{[\underline{r}+\underline{r}^{\prime},\overline{r}+\overline{ r}^{\prime}]+[\underline{\varphi}+\underline{\varphi}^{\prime},\overline{ \varphi}+\overline{\varphi}^{\prime}]{\rm i}}\text{ and } \tag{8}\] \[\mathbf{c}\sqcup\mathbf{c}^{\prime} =e^{[\min(\underline{r},\underline{r}^{\prime}),\max(\overline{ r},\overline{r}^{\prime})]+[\min(\underline{\varphi},\underline{\varphi}^{\prime}),\max(\overline{\varphi},\overline{\varphi}^{\prime})]{\rm i}}. \tag{9}\]
Again, simple arithmetic shows that Eqs. (8)-(9) are sound.
We compute the real part of an abstract complex number \(\mathbf{c}=e^{[\underline{r},\overline{r}]+[\underline{\varphi},\overline{ \varphi}]{\rm i}}\) as
\[\operatorname{Re}(\mathbf{c})=\exp([\underline{r},\overline{r}])\cdot\cos\bigl{(} [\underline{\varphi},\overline{\varphi}]\bigr{)}, \tag{10}\]
where we rely on interval transformers to evaluate the right-hand side. The soundness of Eq. (10) follows from the standard formula to extract the real part from a complex number in polar coordinates. We will later use Eq. (10) to compute \(\operatorname{tr}\left(\mathbf{\rho}\right)\). To this end, we also need the transformer
\[\mathrm{i}^{\mathbf{b}}=\bigsqcup_{b\in\mathbf{b}}\mathrm{i}^{b}\in\mathsf{C}. \tag{11}\]
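The sketch below (ours) implements this abstract complex domain. The interval cosine is simplified and assumes phase intervals contained in \([-\pi,\pi]\); a full implementation needs the case distinctions mentioned in Tab. 1:

```python
import math
from dataclasses import dataclass

@dataclass
class AComplex:
    r_lo: float; r_hi: float   # interval for the log-magnitude r
    p_lo: float; p_hi: float   # interval for the phase phi

    def __mul__(self, o):      # Eq. (8): add both intervals
        return AComplex(self.r_lo + o.r_lo, self.r_hi + o.r_hi,
                        self.p_lo + o.p_lo, self.p_hi + o.p_hi)

    def join(self, o):         # Eq. (9): join both intervals
        return AComplex(min(self.r_lo, o.r_lo), max(self.r_hi, o.r_hi),
                        min(self.p_lo, o.p_lo), max(self.p_hi, o.p_hi))

    def re(self):              # Eq. (10): exp([r]) * cos([phi]), for [phi] in [-pi, pi]
        e = (math.exp(self.r_lo), math.exp(self.r_hi))
        cs = [math.cos(self.p_lo), math.cos(self.p_hi)]
        if self.p_lo <= 0.0 <= self.p_hi:   # cos attains its maximum 1 at phi = 0
            cs.append(1.0)
        vals = [m * c for m in e for c in (min(cs), max(cs))]
        return min(vals), max(vals)

# joining the coefficients c_2 and c_3 from the overview example
c2 = AComplex(-1.1, -1.1, 1.6, 1.6)
c3 = AComplex(-1.1, -1.1, -1.6, -1.6)
print(c2.join(c3))   # e^{[-1.1,-1.1]+[-1.6,1.6]i}
```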
Pauli Elements. Recall that a Pauli element \(P\in\mathcal{P}_{n}\) has the form \(P=\mathrm{i}^{v}\cdot P^{(0)}\otimes\cdots\otimes P^{(n-1)}\), for \(v\) in \(\mathbb{Z}_{4}\) and \(P^{(k)}\in\{\mathbb{l},X,Y,Z\}\). We therefore parametrize \(P\) as a prefactor \(v\) (in \(\log_{\mathrm{i}}\) space) and \(n\) bare Paulis \(P^{(k)}\).
\begin{table}
\begin{tabular}{l l l} \hline \hline
**Dom.** & **Example element** & **Concretization** \\ \hline
\(\blacksquare\) & \(\{0,1\}\) & \(\{0,1\}\) \\
\(\mathsf{Z}_{4}\) & \(\{0,3\}\) & \(\{0,3\}\) \\
\(\mathsf{R}\) & \((0,1)\) & \([0,1]=\{r\mid 0\leq r\leq 1\}\) \\
\(\mathsf{C}\) & \((0,1,\pi,2\pi)\) & \(e^{[0,1]+[\pi,2\pi]\mathrm{i}}=\left\{e^{r+\varphi\mathrm{i}}\mid 0\leq r\leq 1,\pi\leq\varphi\leq 2\pi\right\}\) \\
\(\mathsf{P}_{2}\) & \((\{0,3\},\{Z,Y\},\{X\})\) & \(\mathrm{i}^{\{0,3\}}\cdot\{Z,Y\}\otimes\{X\}=\left\{\mathrm{i}^{v}\cdot P^{(1)}\otimes P^{(2)}\mid v\in\{0,3\},P^{(1)}\in\{Z,Y\},P^{(2)}\in\{X\}\right\}\) \\ \hline \hline
\end{tabular}
\end{table}
Table 2: Example elements on abstract domains.
\begin{table}
\begin{tabular}{l l l} \hline \hline
**Transformer** & **Domains** & **Definition** \\ \hline
\(\mathbf{b}+\mathbf{c}\in\blacksquare\), \(\mathbf{b}\cdot\mathbf{c}\in\blacksquare\) & \(\mathbf{b},\mathbf{c}\in\blacksquare\) & Lifting to sets, Eq. (7) \\
\(\mathbf{b}\sqcup\mathbf{c}\in\blacksquare\) & \(\mathbf{b},\mathbf{c}\in\blacksquare\) & \(\mathbf{b}\cup\mathbf{c}\) \\
\(\mathbf{b}+\mathbf{c}\in\mathsf{Z}_{4}\), \(\mathbf{b}-\mathbf{c}\in\mathsf{Z}_{4}\), \(\mathbf{b}\cdot\mathbf{c}\in\mathsf{Z}_{4}\) & \(\mathbf{b},\mathbf{c}\in\mathsf{Z}_{4}\) & Lifting to sets \\
\(\mathbf{b}\sqcup\mathbf{c}\in\mathsf{Z}_{4}\) & \(\mathbf{b},\mathbf{c}\in\mathsf{Z}_{4}\) & \(\mathbf{b}\cup\mathbf{c}\) \\
\(\mathbf{b}\in\mathsf{Z}_{4}\) & \(\mathbf{b}\in\blacksquare\) & Embedding \\
\(\mathbf{c}\cdot\mathbf{d}\in\mathsf{C}\) & \(\mathbf{c},\mathbf{d}\in\mathsf{C}\) & Eq. (8) \\
\(\mathbf{c}\sqcup\mathbf{d}\in\mathsf{C}\) & \(\mathbf{c},\mathbf{d}\in\mathsf{C}\) & Eq. (9) \\
\(\operatorname{Re}(\mathbf{c})\in\mathsf{R}\) & \(\mathbf{c}\in\mathsf{C}\) & Eq. (10) \\
\(\mathrm{i}^{\mathbf{b}}\in\mathsf{C}\) & \(\mathbf{b}\in\mathsf{Z}_{4}\) & Eq. (11) \\
\(\mathbf{P}\mathbf{Q}\in\mathsf{P}_{n}\) & \(\mathbf{P},\mathbf{Q}\in\mathsf{P}_{n}\) & Eq. (12) \\
\(\mathsf{f}(\mathbf{P}\mathbf{Q})\in\mathsf{Z}_{4}\) & \(\mathbf{P},\mathbf{Q}\in\mathsf{P}_{n}\) & Eq. (13) \\
\(U_{(i)}\mathbf{P}U_{(i)}^{\dagger}\in\mathsf{P}_{n}\) & \(U\in\mathcal{U}(2^{k})\), \(\mathbf{P}\in\mathsf{P}_{n}\) & Eq. (14) \\
\(\mathbf{P}\diamond\mathbf{Q}\in\blacksquare\) & \(\mathbf{P},\mathbf{Q}\in\mathsf{P}_{n}\) & Eq. (15) \\
\(\mathbf{P}\sqcup\mathbf{Q}\in\mathsf{P}_{n}\) & \(\mathbf{P},\mathbf{Q}\in\mathsf{P}_{n}\) & Eq. (16) \\
\((-1)^{\mathbf{b}}\cdot\mathbf{P}\in\mathsf{P}_{n}\) & \(\mathbf{b}\in\blacksquare\), \(\mathbf{P}\in\mathsf{P}_{n}\) & Eq. (17) \\ \hline \hline
\end{tabular}
\end{table}
Table 3: Summary of abstract transformers.
Accordingly, we parametrize abstract Pauli elements \(\mathbf{P}\in\mathbf{\mathsf{P}}_{n}\) as \(\mathrm{i}^{\mathbf{v}}\cdot\mathbf{P^{(0)}}\otimes\cdots\otimes\mathbf{P^{(n-1)}}\), where \(\mathbf{v}\in\mathbf{\mathsf{Z}}_{4}\) is a set of possible prefactors and \(\mathbf{P^{(k)}}\subseteq\{X,\,Y,Z,\mathbb{I}_{2}\}\) are sets of possible Pauli matrices. Formally, we interpret \(\mathbf{P}\) as the set of all possible outcomes when instantiating all sets:
\[\gamma(\mathbf{P})=\left\{\mathrm{i}^{v}\cdot\underset{i=0}{\overset{n-1}{\otimes} }P^{(i)}\ \middle|\ v\in\mathbf{v},P^{(i)}\in\mathbf{P^{(i)}}\right\}.\]
We define the product of two abstract Pauli elements as:
\[\mathbf{P}\mathbf{Q}=\mathrm{i}^{\mathsf{f}(\mathbf{P}\mathbf{Q})}\underset{i=0}{\overset{n-1}{\otimes}}\mathsf{b}\Big{(}\mathbf{P^{(i)}}\mathbf{Q^{(i)}}\Big{)}. \tag{12}\]
To this end, we evaluate the prefactor induced by multiplying Paulis as
\[\mathsf{f}(\mathbf{P}\mathbf{Q})=\mathsf{f}(\mathbf{P})+\mathsf{f}(\mathbf{Q})+\sum_{i=0}^{n-1}\mathsf{f}(\mathbf{P^{(i)}}\mathbf{Q^{(i)}}), \tag{13}\]
where we can evaluate the summands in the right-hand side of Eq. (13) by precomputing them for all possible sets of Pauli matrices \(\mathbf{P^{(i)}}\) and \(\mathbf{Q^{(i)}}\). Then, we compute the sum using Eq. (7). Analogously, we can evaluate \(\mathsf{b}\Big{(}\mathbf{P^{(i)}}\mathbf{Q^{(i)}}\Big{)}\) by precomputation. The soundness of Eq. (12) follows from applying the multiplication component-wise, and then separating out prefactors from bare Paulis.
We also define the conjugation of an abstract Pauli element \(\mathbf{P}\) with \(k\)-qubit gate \(U\) padded to \(n\) qubits as:
\[U_{(i)}\mathbf{P}U_{(i)}^{\dagger} =U_{(i)}\Big{(}\mathrm{i}^{\mathbf{v}}\cdot\mathbf{P^{(0;i)}}\otimes\mathbf{P^{(i;i+k)}}\otimes\mathbf{P^{(i+k;n)}}\Big{)}U_{(i)}^{\dagger}\] \[=\mathrm{i}^{\mathbf{v}+\mathsf{f}(U\mathbf{P^{(i;i+k)}}U^{\dagger})}\cdot\mathbf{P^{(0;i)}}\otimes\mathsf{b}(U\mathbf{P^{(i;i+k)}}U^{\dagger})\otimes\mathbf{P^{(i+k;n)}}, \tag{14}\]
where \(\mathbf{P^{(i;j)}}\) denotes \(\mathbf{P^{(i)}}\otimes\cdots\otimes\mathbf{P^{(j-1)}}\). Because \(k\) is typically small, and all possible gates \(U\) are known in advance, we can efficiently precompute \(\mathsf{f}(U\mathbf{P^{(i;i+k)}}U^{\dagger})\) and \(\mathsf{b}(U\mathbf{P^{(i;i+k)}}U^{\dagger})\). We note that this only works if the result of conjugation is indeed an (abstract) Pauli element--if not, this operation throws an error. The soundness of Eq. (14) follows from applying \(U\) to qubits \(i\) through \(i+k-1\), and then separating out prefactors from bare Paulis.
We define the commutator \(\mathbf{P}\diamond\mathbf{Q}\) of two abstract Pauli elements \(\mathbf{P}\) and \(\mathbf{Q}\) as
\[\Big{(}\mathrm{i}^{\mathbf{v}}\cdot\underset{i=0}{\overset{n-1}{\otimes}}\mathbf{P^{(i)}}\Big{)}\diamond\Big{(}\mathrm{i}^{\mathbf{w}}\cdot\underset{i=0}{\overset{n-1}{\otimes}}\mathbf{Q^{(i)}}\Big{)}=\sum_{i=0}^{n-1}\mathbf{P^{(i)}}\diamond\mathbf{Q^{(i)}}. \tag{15}\]
Here, we evaluate the sum using Eq. (7), and efficiently evaluate \(\mathbf{P^{(i)}}\diamond\mathbf{Q^{(i)}}\in\mathbf{\blacksquare}\) by precomputing:
\[\mathbf{P^{(i)}}\diamond\mathbf{Q^{(i)}}=\left\{P^{(i)}\diamond Q^{(i)}\ \middle|\ P^{(i)}\in\mathbf{P^{(i)}},Q^{(i)}\in\mathbf{Q^{(i)}}\right\}.\]
The soundness of Eq. (15) can be derived from the corresponding concrete equation, which can be verified through standard linear algebra manipulations.
We define the join of abstract Pauli elements as
\[\Big{(}\mathrm{i}^{\mathbf{v}}\underset{i=0}{\overset{n-1}{\otimes}}\mathbf{P^{(i)}} \Big{)}\sqcup\Big{(}\mathrm{i}^{\mathbf{w}}\underset{i=0}{\overset{n-1}{\otimes}} \mathbf{Q^{(i)}}\Big{)}=\mathrm{i}^{\mathbf{v}\sqcup\mathbf{w}}\underset{i=0}{\overset{n-1 }{\otimes}}\Big{(}\mathbf{P^{(i)}}\cup\mathbf{Q^{(i)}}\Big{)}, \tag{16}\]
where \(\mathbf{P^{(i)}}\cup\mathbf{Q^{(i)}}\subseteq\{\mathbb{l},X,Y,Z\}\). Clearly, this join is sound.
Finally, we define an abstract transformer for modifying the sign of an abstract Pauli element \(\mathbf{P}\) by:
\[(-1)^{\mathbf{b}}\cdot\Big{(}\mathrm{i}^{\mathbf{v}}\cdot\underset{i=0}{\overset{n-1}{ \otimes}}\mathbf{P^{(i)}}\Big{)}=\mathrm{i}^{\mathbf{v}+2\cdot\mathbf{b}}\cdot\underset{ i=0}{\overset{n-1}{\otimes}}\mathbf{P^{(i)}} \tag{17}\]
The soundness of Eq. (17) follows directly from \((-1)^{v}=\mathrm{i}^{2v}\).
Abstract Density Matrices. The concrete and abstract domains introduced previously allow us to represent an abstract density matrix \(\boldsymbol{\rho}\) as
\[\mathbf{\rho}=r\star\mathbf{c}\cdot\mathbf{P}\cdot\prod_{j=1}^{n}\tfrac{1+(-1)^{b_{j}}Q_{j}} {2}. \tag{18}\]
where \(r\in\mathbb{N}\), \(\mathbf{c}\in\mathsf{C}\), \(\mathbf{P}\in\mathsf{P}_{n}\), \(\mathbf{b}_{j}\in\blacksquare\), and \(Q_{j}\in\mathcal{P}_{n}\). Note that \(Q_{j}\) are concrete Pauli elements, while \(\mathbf{P}\) is abstract. Here, the integer counter \(r\) records how many concrete summands were abstracted. Specifically, \(r\star\mathbf{x}\) is defined as \(\sum_{i=1}^{r}\mathbf{x}\). Overall, we interpret \(\mathbf{\rho}\) as:
\[\gamma(\mathbf{\rho})=\left\{\sum_{i=1}^{r}c_{i}P_{i}\prod_{j=1}^{n}\tfrac{1+(-1) ^{b_{ij}}Q_{j}}{2}\;\middle|\;c_{i}\in\gamma(\mathbf{c}),P_{i}\in\gamma(\mathbf{P}),b_{ ij}\in\gamma(\mathbf{b}_{j})\right\},\]
relying on the previously discussed interpretations of \(\mathsf{C}\), \(\mathcal{P}_{n}\), and \(\mathsf{B}\).
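Putting the pieces together, an abstract state per Eq. (18) can be represented by a simple record; the following sketch (ours, building on the illustrative classes above rather than Abstraqt's bit-level encodings from SS6) fixes the fields and their domains:

```python
from dataclasses import dataclass
from typing import List, Set

@dataclass
class AbstractState:
    # rho = r * c . P . prod_j (1 + (-1)^{b_j} Q_j) / 2, cf. Eq. (18)
    r: int                      # number of concrete summands covered
    c: "AComplex"               # abstract coefficient (sketched above)
    P_prefactors: Set[int]      # abstract prefactor set v, a subset of Z_4
    P_letters: List[Set[str]]   # per-qubit sets of possible Pauli letters
    b: List[Set[int]]           # one abstract boolean per product Pauli Q_j
    Q: List["Pauli"]            # concrete Pauli elements (see the sketch in SS2)
```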
## 5 Abstract Transformers
We now formalize the abstract transformers used by Abstraqt to simulate quantum circuits. The soundness of all transformers is straightforward, except for the trace transformer (SS5.4) which we discuss in App. A.
Initialization. We start from initial state \(\otimes_{i=1}^{n}\ket{0}\), which corresponds to density matrix
\[\boldsymbol{\rho}=\prod_{j=1}^{n}\tfrac{1+Z_{(j)}}{2}=1\star e^{[0,0]+[0,0]\mathrm{i}}\cdot\mathrm{i}^{\{0\}}\{\mathbb{l}\}\prod_{j=1}^{n}\tfrac{1+(-1)^{\{0\}}Z_{(j)}}{2},\]
as established in [2, Sec. III]. We note that we can prepare other starting states by applying appropriate gates to the starting state \(\otimes_{i=1}^{n}\ket{0}\).
### 5.1 Gate Application
Analogously to the concrete case discussed in SS2, applying a unitary gate \(U\) to \(\mathbf{\rho}\) yields:
\[U\mathbf{\rho}U^{\dagger}=r\star\mathbf{c}\mathbf{P^{\prime}}\prod_{j=1}^{n}\tfrac{1+(-1)^ {b_{j}}Q_{j}^{\prime}}{2}, \tag{19}\]
for \(\mathbf{P^{\prime}}=U\mathbf{P}U^{\dagger}\) and \(Q_{j}^{\prime}=UQ_{j}U^{\dagger}\).
If either \(U\mathbf{P}U^{\dagger}\not\subseteq\mathcal{P}_{n}\) or \(UQ_{j}U^{\dagger}\not\subseteq\mathcal{P}_{n}\), Eq. (19) still holds, but we cannot represent the resulting matrices efficiently. In this case, again analogously to SS2, we instead decompose the offending gate as \(U=\sum_{p}d_{p}R_{p}\), with \(R_{p}\in\mathcal{P}_{n}\) and obtain
\[U\mathbf{\rho}U^{\dagger}=\sum_{pq}r\star\mathbf{c}_{pq}\mathbf{P}_{pq}\prod_{j=1}^{n} \tfrac{1+(-1)^{b_{jq}}Q_{j}}{2}, \tag{20}\]
for \(\mathbf{c}_{pq}=d_{p}\mathbf{c}d_{q}^{*}\), \(\mathbf{P}_{pq}=R_{p}\mathbf{P}R_{q}\), and \(\mathbf{b_{jq}}=\mathbf{b_{j}}+Q_{j}\diamond R_{q}\).
Overall, we can evaluate Eqs. (19)-(20) by relying on the abstract transformers from SS4.
Compression. To prevent an exponential blow-up of the number of summands and to adhere to the abstract domain of \(\boldsymbol{\rho}\) which does not include a sum, we compress all summands to a single one. Two summands can be joined as follows:
\[\left(r_{1}\star\mathbf{c}_{1}\mathbf{P}_{1}\prod_{j=1}^{n}\tfrac{1+(-1)^{b_{1j}}Q_{j}} {2}\right)\sqcup\left(r_{2}\star\mathbf{c}_{2}\mathbf{P}_{2}\prod_{j=1}^{n}\tfrac{1+(- 1)^{b_{2j}}Q_{j}}{2}\right)=r\star\mathbf{c}\mathbf{P}\prod_{j=1}^{n}\tfrac{1+(-1)^{b_ {j}}Q_{j}}{2},\]
where \(r=r_{1}+r_{2}\), \(\mathbf{c}=\mathbf{c}_{1}\sqcup\mathbf{c}_{2}\), \(\mathbf{b}_{j}=\mathbf{b}_{1j}\sqcup\mathbf{b}_{2j}\), and \(\mathbf{P}=\mathbf{P}_{1}\sqcup\mathbf{P}_{2}\). The key observation here is that the concrete \(Q_{j}\) are independent of the summand, and thus need not be joined.
We note that we could also only merge _some_ summands and leave the others precise--investigating the effect of more flexible merging strategies could be interesting future research.
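The join of two summands then reduces to joins in the component domains; a sketch (ours, using the illustrative AbstractState record from SS4):

```python
def join_summands(s1: AbstractState, s2: AbstractState) -> AbstractState:
    """Merge two summands that share the same concrete product Paulis Q_j."""
    assert s1.Q == s2.Q, "the concrete Q_j coincide and need not be joined"
    return AbstractState(
        r=s1.r + s2.r,                       # counters add up
        c=s1.c.join(s2.c),                   # join of abstract complex numbers
        P_prefactors=s1.P_prefactors | s2.P_prefactors,            # Eq. (16)
        P_letters=[a | b for a, b in zip(s1.P_letters, s2.P_letters)],
        b=[a | b for a, b in zip(s1.b, s2.b)],   # joins of abstract booleans
        Q=s1.Q,
    )
```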
### 5.2 Measurement
We now describe how to perform Pauli measurements, by extending the (concrete) stabilizer simulation to abstract density matrices. The correctness of the concrete simulation was previously established in [2, Sec. VII.C]--the correctness of the abstraction is immediate.
Simulating Measurement. Applying a Pauli measurement in basis \(R\in\mathfrak{b}(\mathcal{P}_{n})\) has a probabilistic outcome and transforms \(\rho\) to \(\rho_{+}=\frac{1+R}{2}\rho\frac{1+R}{2}\) with probability \(\mathrm{tr}(\rho_{+})\) or \(\rho_{-}=\frac{1-R}{2}\rho\frac{1-R}{2}\) with probability \(\mathrm{tr}(\rho_{-})\). We describe how to compute \(\rho_{+}\). Computing \(\rho_{-}\) works analogously by using \(-R\) instead of \(R\).
In the following, we will consider a concrete state \(\rho\) as defined in SS2 and an abstract state \(\mathbf{\rho}\) as defined in Eq. (18):
\[\rho=\sum_{i=1}^{m}c_{i}P_{i}\prod_{j=1}^{n}\tfrac{1+(-1)^{b_{ij}}Q_{j}}{2} \quad\text{and}\quad\mathbf{\rho}=r\star\mathbf{c}\mathbf{P}\prod_{j=1}^{n}\tfrac{1+(-1)^{ b_{j}}Q_{j}}{2}. \tag{21}\]
Concrete simulation of measurement distinguishes two cases: either (i) \(R\) commutes with all \(Q_{j}\) or (ii) \(R\) anti-commutes with at least one \(Q_{j}\). Note that as the \(Q_{j}\) are concrete in an abstract state \(\mathbf{\rho}\), those two cases translate directly to the abstract setting. We now describe both cases for concrete and abstract simulation.
Background: Concrete Case (i). In this case, we assume \(R\) commutes with all \(Q_{j}\). Focusing on a single summand \(\rho_{i}\) of \(\rho\), measurement maps it to (see [2]):
\[\rho_{i,+}=c_{i}\tfrac{1+R}{2}P_{i}\tfrac{1+R}{2}\prod_{j=1}^{n}\tfrac{1+(-1) ^{b_{ij}}Q_{j}}{2}. \tag{22}\]
Let us first introduce the notation \(\{(-1)^{b_{ij}}Q_{j}\}\rightsquigarrow R\), denoting that \(R\) can be written as a product of selected Pauli elements from \(\{(-1)^{b_{ij}}Q_{j}\}\). Symmetrically, we write \(\{(-1)^{b_{ij}}Q_{j}\}\not\rightsquigarrow R\) if \(R\) cannot be written as such a product. As shown in [2], if \(\{(-1)^{b_{ij}}Q_{j}\}\rightsquigarrow R\) then \(\tfrac{1+R}{2}\prod_{j=1}^{n}\tfrac{1+(-1)^{b_{ij}}Q_{j}}{2}\) is equal to \(\prod_{j=1}^{n}\tfrac{1+(-1)^{b_{ij}}Q_{j}}{2}\) and if \(\{(-1)^{b_{ij}}Q_{j}\}\not\rightsquigarrow R\) then \(\tfrac{1+R}{2}\prod_{j=1}^{n}\tfrac{1+(-1)^{b_{ij}}Q_{j}}{2}\) is null. Further, using that \(R^{2}=\mathbb{l}\), we get from Eq. (22) that if \(P_{i}\) commutes with \(R\), \(\rho_{i,+}\) is equal to \(\rho_{i}\), otherwise, \(P_{i}\) anti-commutes with \(R\) and \(\rho_{i,+}\) is null. Putting it all together, we finally get:
\[\rho_{+}=\sum_{i=1}^{m}\rho_{i,+}=\sum_{i=1}^{m}\begin{cases}c_{i}P_{i}\prod\limits_{j=1}^{n}\tfrac{1+(-1)^{b_{ij}}Q_{j}}{2}&\text{if }\{(-1)^{b_{ij}}Q_{j}\}\rightsquigarrow R\text{ and }R\diamond P_{i}=0,\\ 0&\text{if }\{(-1)^{b_{ij}}Q_{j}\}\not\rightsquigarrow R\text{ or }R\diamond P_{i}=1.\end{cases} \tag{23}\]
Abstract Case (i). Let us first define \(\rightsquigarrow^{u}\) and \(\not\rightsquigarrow^{u}\) for a concrete \(R\), concrete \(Q_{j}\) and abstract \(\boldsymbol{b_{j}}\). We say \(\{(-1)^{\boldsymbol{b_{j}}}Q_{j}\}\rightsquigarrow^{u}R\) if for all \(j\), for all \(b_{j}\in\gamma(\boldsymbol{b_{j}})\), we have \(\{(-1)^{b_{j}}Q_{j}\}\rightsquigarrow R\). Similarly, we say \(\{(-1)^{\boldsymbol{b_{j}}}Q_{j}\}\not\rightsquigarrow^{u}R\) if for all \(j\), for all \(b_{j}\in\gamma(\boldsymbol{b_{j}})\), we have \(\{(-1)^{b_{j}}Q_{j}\}\not\rightsquigarrow R\). Note that \(\rightsquigarrow^{u}\) and \(\not\rightsquigarrow^{u}\) are under-approximations, and there can exist some \(R\) and \(\{(-1)^{\boldsymbol{b_{j}}}Q_{j}\}\) such that neither apply. Using those two abstract relations, we get the abstract transformer:
\[\boldsymbol{\rho_{+}}=r\star\begin{cases}\mathbf{c}\mathbf{P}\prod\limits_{j=1}^{n}\tfrac{1+(-1)^{\boldsymbol{b_{j}}}Q_{j}}{2}&\text{if }\{(-1)^{\boldsymbol{b_{j}}}Q_{j}\}\rightsquigarrow^{u}R\text{ and }R\diamond\mathbf{P}=\{0\},\\ 0&\text{if }\{(-1)^{\boldsymbol{b_{j}}}Q_{j}\}\not\rightsquigarrow^{u}R\text{ or }R\diamond\mathbf{P}=\{1\},\\ \left(\mathbf{c}\sqcup\{0\}\right)\mathbf{P}\prod\limits_{j=1}^{n}\tfrac{1+(-1)^{\boldsymbol{b_{j}}}Q_{j}}{2}&\text{otherwise.}\end{cases} \tag{24}\]
We can evaluate Eq. (24) by relying on the abstract transformers from Tab. 3 and by evaluating \(\rightsquigarrow^{u}\) and \(\not\rightsquigarrow^{u}\) as discussed shortly.
Background: Concrete Case (ii). We now suppose \(R\) anti-commutes with at least one \(Q_{j}\). In this case, we can rewrite \(\rho\) such that \(R\) anti-commutes with \(Q_{1}\), and commutes with all other \(Q_{j}\). Specifically, we can select any \(Q_{j^{*}}\) which anti-commutes with \(R\), swap \(b_{ij^{*}}\) and \(Q_{j^{*}}\) with \(b_{i1}\) and \(Q_{1}\), and replace all other \(Q_{j}\) anti-commuting with \(R\) by \(Q_{1}Q_{j}\) (and analogously \(b_{ij}\) by \(b_{ij}+b_{i1}\)), which leaves \(\rho\) invariant (see [2]). Assuming \(\rho\) is the result after this rewrite, we have:
\[\rho_{+}=\sum_{i}\tfrac{1}{2}c_{i}P^{\prime}_{i}\tfrac{1+(-1)^{0}R}{2}\prod_ {j=2}^{n}\tfrac{1+(-1)^{b_{ij}}Q_{j}}{2},\text{ where }P^{\prime}_{i}=\begin{cases}P_{i}&\text{ if }R \diamond P_{i}=0,\\ (-1)^{b_{i1}}P_{i}Q_{1}&\text{ if }R\diamond P_{i}=1.\end{cases} \tag{25}\]
Overall, after rewriting \(\rho\) as above, Eq. (25) replaces \(P_{i}\) by \(P^{\prime}_{i}\), \(b_{i1}\) by \(0\), and \(Q_{1}\) by \(R\).
Abstract Case (ii). After applying the same rewrite as in the concrete case, directly abstracting Eq. (25) yields:
\[\boldsymbol{\rho_{+}}=r\star\tfrac{1}{2}c\boldsymbol{P^{\prime}}\tfrac{1+(-1) ^{\{0\}}R}{2}\prod_{j=2}^{n}\tfrac{1+(-1)^{b_{j}}Q_{j}}{2},\text{ where }\boldsymbol{P^{\prime}}=\begin{cases}\boldsymbol{P}&\text{ if }R \diamond\boldsymbol{P}=\{0\},\\ (-1)^{\boldsymbol{b_{1}}}\boldsymbol{P}Q_{1}&\text{ if }R\diamond\boldsymbol{P}=\{1\}, \\ \boldsymbol{P}\sqcup(-1)^{\boldsymbol{b_{1}}}\boldsymbol{P}Q_{1}&\text{ otherwise.}\end{cases} \tag{26}\]
Again, we can evaluate Eq. (26) by relying on the abstract transformers from Tab. 3.
Joining Both Measurement Results. For measurements occurring within a quantum circuit, stabilizer simulation generally requires randomly selecting either \(\rho_{+}\) or \(\rho_{-}\) with probability \(\operatorname{tr}(\rho_{+})\) and \(\operatorname{tr}(\rho_{-})\), respectively, and then continues only with the selected state. In contrast, Abstraqt can join both measurement outcomes into a single abstract state \(\boldsymbol{\rho_{+}}\sqcup\boldsymbol{\rho_{-}}\), as the \(Q_{j}\) are the same in both. This allows us to pursue both measurement outcomes simultaneously, as we demonstrate in SS7.
### 5.3 Efficiently computing \(\rightsquigarrow\)
To simulate the result of a measurement, we introduced operator \(\{(-1)^{b_{j}}Q_{j}\}\rightsquigarrow R\), denoting that some Pauli \(R\) can be written as a product of \(\{(-1)^{b_{j}}Q_{j}\}\). We now show how to compute \(\rightsquigarrow\) efficiently.
Background: Concrete case. We first note that \(\{(-1)^{b_{j}}Q_{j}\}\rightsquigarrow R\) holds if and only if there exists some \(x\in\mathbb{B}^{n}\) such that:
\[R\stackrel{{!}}{{=}}\prod_{j=1}^{n}\left((-1)^{b_{j}}Q_{j}\right)^ {x_{j}}. \tag{27}\]
Further, this solution \(x\) would satisfy:
\[\mathfrak{b}(R)\stackrel{{!}}{{=}}\mathfrak{b}\left(\prod_{j=1}^ {n}\left((-1)^{b_{j}}Q_{j}\right)^{x_{j}}\right) \tag{28}\]
Eq. (28) has a solution if and only if \(R\) commutes with all the \(Q_{j}\), in which case this solution \(x\) is unique (see [2]). Hence, to check if \(\{(-1)^{b_{j}}Q_{j}\}\rightsquigarrow R\), we can first verify whether \(R\diamond Q_{j}=0\) for all \(j\), and if so, check if the unique \(x\) satisfying Eq. (28) also satisfies Eq. (27).
Background: Finding \(x\) for Eq. (28). To compute this solution \(x\), the stabilizer simulation relies critically on an isomorphism \(\mathfrak{g}\) between Pauli matrices \(\{\mathbb{l},X,Y,Z\}\) and \(\mathbb{B}^{2}\).
Specifically, \(\mathfrak{g}\) maps \(I\) to \(\binom{0}{0}\), \(X\) to \(\binom{1}{0}\), \(Y\) to \(\binom{1}{1}\), and \(Z\) to \(\binom{0}{1}\). Further, \(\mathfrak{g}\) extends naturally to bare Pauli elements \(R\in\mathfrak{b}(\mathcal{P}_{n})\) and tuples \(Q=(Q_{1},\dots,Q_{n})\in\mathfrak{b}(\mathcal{P}_{n})^{n}\) by:
\[\mathfrak{g}(R)=\begin{pmatrix}\mathfrak{g}(R^{(0)})\\ \vdots\\ \mathfrak{g}(R^{(n-1)})\end{pmatrix}\text{ and }\mathfrak{g}(Q)=\begin{pmatrix}\mathfrak{g}(Q_{1}^{(0)})&\cdots&\mathfrak{g}(Q_{n}^{(0)})\\ \vdots&\ddots&\vdots\\ \mathfrak{g}(Q_{1}^{(n-1)})&\cdots&\mathfrak{g}(Q_{n}^{(n-1)})\end{pmatrix},\]
where \(\mathfrak{g}(R)\in\mathbb{B}^{2n\times 1}\) and \(\mathfrak{g}(Q)\in\mathbb{B}^{2n\times n}\). We can naturally extend \(\mathfrak{g}\) to \(\mathcal{P}_{n}\), by defining \(\mathfrak{g}(R)=\mathfrak{g}(\mathfrak{b}(R))\).
This isomorphism \(\mathfrak{g}\) is designed so that the product of bare Pauli elements ignoring prefactors corresponds to a component-wise addition of encodings:
\[\mathfrak{g}(P_{1}P_{2})=\mathfrak{g}(P_{1})+\mathfrak{g}(P_{2}). \tag{29}\]
Using Eq. (29), we can obtain solution candidates \(x\) for Eq. (28) by solving a system of linear equations using Gaussian elimination modulo 2:
\[\mathfrak{g}\left(R\right)\overset{!}{=}\mathfrak{g}\left(\prod\limits_{j=1}^ {n}Q_{j}^{x_{j}}\right)=\sum\limits_{j=1}^{n}\mathfrak{g}(Q_{j})x_{j}= \mathfrak{g}(Q)x. \tag{30}\]
Because in our case \(\mathfrak{g}(Q)\in\mathbb{B}^{2n\times n}\) is over-determined and has full column rank, Eq. (30) either has no solution, or a unique solution \(x\).
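For concreteness, the following NumPy sketch (ours; Abstraqt's actual implementation is bit-packed, see SS6) solves Eq. (30) by Gaussian elimination modulo 2:

```python
import numpy as np

def solve_gf2(A: np.ndarray, y: np.ndarray):
    """Solve A x = y over GF(2); returns the unique solution x or None.
    Assumes A (here: g(Q), of shape 2n x n) has full column rank."""
    A = A.copy() % 2
    y = y.copy() % 2
    m, n = A.shape
    row = 0
    for col in range(n):                       # reduce to reduced row echelon form
        pivots = np.nonzero(A[row:, col])[0]
        if len(pivots) == 0:
            return None                        # rank-deficient (not the case for g(Q))
        pr = row + pivots[0]
        A[[row, pr]], y[[row, pr]] = A[[pr, row]], y[[pr, row]]
        for r in range(m):
            if r != row and A[r, col]:
                A[r] ^= A[row]                 # XOR is addition modulo 2
                y[r] ^= y[row]
        row += 1
    if y[row:].any():
        return None                            # inconsistent: no solution exists
    return y[:n]                               # A is now [I; 0], so x = y[:n]

# example: is bare(R) = X(x)X a product of Q_1 = X(x)I and Q_2 = I(x)X?
A = np.array([[1, 0], [0, 0], [0, 1], [0, 0]])   # columns g(Q_1), g(Q_2)
print(solve_gf2(A, np.array([1, 0, 1, 0])))       # [1 1]: yes, with x = (1, 1)
```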
Background: Checking prefactors. Once we have found the unique \(x\) (if it exists) satisfying Eq. (28) as described above, we need to check if it also satisfies Eq. (27). It is enough to check if the prefactors match:
\[\mathfrak{f}\left(R\right)\overset{!}{=}\mathfrak{f}\left(\prod\limits_{j}(-1 )^{b_{j}x_{j}}Q_{j}^{x_{j}}\right),\]
or equivalently:
\[\mathfrak{f}\left(R\right)-\mathfrak{f}\left(\prod\limits_{j}Q_{j}^{x_{j}} \right)-2\sum\limits_{j}b_{j}x_{j}\overset{!}{=}0,\]
where the subtraction and sum operations are over \(\mathbb{Z}_{4}\).
Putting it all together, we can define \(\mathfrak{S}\): \(\mathcal{P}_{n}\times\mathcal{P}_{n}^{n}\times\mathbb{B}^{n}\to\mathbb{Z}_{4} \cup\{\boldsymbol{f}\}\) with
\[\mathfrak{S}(R,Q,b)=\begin{cases}\boldsymbol{f}&\text{if }\exists j,R\diamond Q _{j}=1,\\ \mathfrak{f}(R)-\mathfrak{f}\left(\prod\limits_{j=1}^{n}Q_{j}^{x_{j}}\right)-2 \sum\limits_{j=1}^{n}x_{j}b_{j}&\text{otherwise},\end{cases} \tag{31}\]
where \(x\) is the unique value such that \(\mathfrak{g}(R)=\mathfrak{g}(Q)x\) and \(\boldsymbol{f}\) indicates there is no such \(x\). We then have that \(\{(-1)^{b_{j}}Q_{j}\}\rightsquigarrow R\) if and only if \(\mathfrak{S}(R,Q,b)=0\), and \(\{(-1)^{b_{j}}Q_{j}\}\not\rightsquigarrow R\) if and only if \(\mathfrak{S}(R,Q,b)\neq 0\).
\(\mathfrak{S}\) for abstract \(b_{j}\). For abstract values \(\boldsymbol{b_{j}}\), we define \(\mathfrak{S}\colon\mathcal{P}_{n}\times\mathcal{P}_{n}^{n}\times\blacksquare^{n}\to 2^{\mathbb{Z}_{4}\cup\{\boldsymbol{f}\}}\) as follows:
\[\mathfrak{S}(R,Q,\boldsymbol{b})=\begin{cases}\{\boldsymbol{f}\}&\text{if } \exists j,R\diamond Q_{j}=1,\\ \mathfrak{f}(R)-\mathfrak{f}\left(\prod\limits_{j=1}^{n}Q_{j}^{x_{j}}\right)- 2\sum\limits_{j=1}^{n}x_{j}\boldsymbol{b_{j}}&\text{otherwise}.\end{cases} \tag{32}\]
Following the same reasoning as above, we have that \(\{(-1)^{\boldsymbol{b_{j}}}Q_{j}\}\rightsquigarrow^{u}R\) if and only if \(\mathfrak{S}(R,Q,\boldsymbol{b})=\{0\}\) and \(\{(-1)^{\boldsymbol{b_{j}}}Q_{j}\}\not\rightsquigarrow^{u}R\) if and only if \(\mathfrak{S}(R,Q,\boldsymbol{b})\cap\{0\}=\emptyset\).
\(\mathfrak{S}\) for abstract \(b_{j}\) and \(R\). To compute the trace of a state (see SS5.4), we further extend Eq. (31) to abstract \(\boldsymbol{b_{j}}\) and abstract \(\boldsymbol{R}\), defining \(\mathfrak{S}\colon\mathsf{P}_{n}\times\mathcal{P}_{n}^{n}\times\blacksquare^{n}\to 2^{\mathbb{Z}_{4}\cup\{\boldsymbol{f}\}}\) as:
\[\mathfrak{S}(\boldsymbol{R},Q,\boldsymbol{b}) =\begin{cases}\{\boldsymbol{f}\}&\text{if }\exists j. \boldsymbol{R}\diamond Q_{j}=\{1\},\\ \mathfrak{f}(\boldsymbol{R})-\mathfrak{f}\left(\prod\limits_{j=1}^{n}Q_{j}^{ \boldsymbol{x_{j}}}\right)-2\sum\limits_{j=1}^{n}\boldsymbol{x_{j}} \boldsymbol{b_{j}}&\text{if }\forall j.\boldsymbol{R}\diamond Q_{j}=\{0\},\\ \mathfrak{f}(\boldsymbol{R})-\mathfrak{f}\left(\prod\limits_{j=1}^{n}Q_{j}^{ \boldsymbol{x_{j}}}\right)-2\sum\limits_{j=1}^{n}\boldsymbol{x_{j}} \boldsymbol{b_{j}}\cup\{\boldsymbol{f}\}&\text{otherwise},\end{cases} \tag{33}\] \[\text{for }\mathfrak{g}(\boldsymbol{R}) =\mathfrak{g}(Q)\boldsymbol{x}. \tag{34}\]
Here, evaluating Eq. (33) requires evaluating \(Q_{j}^{\mathbf{b}}\) for an abstract boolean \(\mathbf{b}\), which we define naturally as
\[Q_{j}^{\mathbf{b}}:=\begin{cases}\{Q_{j}\}&\text{if }\mathbf{b}=\{1\},\\ \{\mathbb{I}\}&\text{if }\mathbf{b}=\{0\},\\ \{Q_{j},\mathbb{I}\}&\text{if }\mathbf{b}=\{0,1\}.\end{cases}\]
Further, Eq. (34) requires over-approximating all \(x\) which satisfy \(\mathfrak{g}(\boldsymbol{R})=\mathfrak{g}(Q)x\). Here, we naturally extend \(\mathfrak{g}\) to abstract Paulis by joining their images. For instance, we have that \(\mathfrak{g}(\{X,Y\})=\{\binom{1}{0}\}\sqcup\{\binom{1}{1}\}=\binom{\{1\}}{\{0,1\}}\). We then view \(\mathfrak{g}(\boldsymbol{R})=\mathfrak{g}(Q)x\) as a system of linear equations \(\boldsymbol{b}=Ax\), where the left-hand side consists of abstract booleans \(\boldsymbol{b}\in\blacksquare^{2n}\). We then drop all equations in this equation system where the left-hand side is \(\{0,1\}\), as they do not constrain the solution space. This updated system is fully concrete, hence we can solve it using Gaussian elimination. We get either no solution, or a solution space \(y+\sum_{k=1}^{p}\lambda_{k}u_{k}\), where \(y\) is a possible solution and \(u_{1},...,u_{p}\) is a possibly empty basis of the null solution space. In the case of no solution, \(\boldsymbol{x}\) is not needed in Eq. (33). Otherwise, we can compute \(\boldsymbol{x_{j}}\) as \(\{y_{j}+\sum_{k=1}^{p}\lambda_{k}u_{k,j}\mid\lambda_{k}\in\mathbb{B}\}\).
### 5.4 Trace
Recall that the probability of obtaining state \(\rho_{+}\) when measuring \(\rho\) is \(\operatorname{tr}\left(\rho_{+}\right)\). We now describe how to compute this trace using \(\mathfrak{S}\) defined above.
Background: Concrete Trace. Following [2], we compute the trace of a density matrix \(\rho\) by:
\[\operatorname{tr}\left(\rho\right)=\sum_{i=1}^{m}\operatorname{Re}\Big{(}c_{i}\mathrm{i}^{\mathfrak{S}(P_{i},Q,b_{i})}\Big{)}, \tag{35}\]
where we define \(\mathrm{i}^{\boldsymbol{f}}:=0\). Because the trace of a density matrix is always real, \(\operatorname{Re}(\cdot)\) is redundant, but will be convenient to avoid complex traces in our abstraction.
Abstract Trace. For an abstract state \(\boldsymbol{\rho}\), we define:
\[\operatorname{tr}\left(\boldsymbol{\rho}\right)=r\cdot\operatorname{Re}\Big{(}\boldsymbol{c}\,\mathrm{i}^{\mathfrak{S}(\boldsymbol{P},Q,\boldsymbol{b})}\Big{)}, \tag{36}\]
where we use \(\mathfrak{S}(\cdot)\) as defined in Eq. (33).
## 6 Implementation
In the following, we discuss our implementation of the abstract transformers from SS4 and SS5 in Abstraqt.
Language and Libraries. We implemented Abstraqt in Python 3.8, relying on Qiskit 0.40.0 [14] for handling quantum circuits, and a combination of NumPy 1.20.0 [15] and Numba 0.54 [16] to handle matrix operations.
Bit Encodings. An abstract density matrix \(\boldsymbol{\rho}=r\star\mathbf{c}\cdot\mathbf{P}\cdot\prod_{j=1}^{n}\frac{1+(-1)^{\boldsymbol{b}_{j}}Q_{j}}{2}\) is encoded as a tuple \((r,\mathbf{c},\mathbf{P},\mathbf{b}_{1},...,\mathbf{b}_{n},Q_{1},\ldots,Q_{n})\). To encode the concrete Pauli matrices \(Q_{j}\), we follow concrete stabilizer simulation encodings such as [17] and encode Pauli matrices \(P\) using two bits \(\mathfrak{g}(P)\) (see SS5.3). To encode abstract elements of a finite set we use bit patterns. For example, we encode \(\mathbf{b_{1}}=\{1,0\}\in\blacksquare\) as \(11_{2}\), where the least significant bit (i.e. the right-most bit) indicates that \(0\in\mathbf{b_{1}}\). Analogously, we encode \(\mathbf{v}=\{3,0\}\in\mathsf{Z}_{4}\) as \(1001_{2}\). Further, we encode \(\{Z,Y\}\) as \(1100_{2}\), where the indicator bits correspond to \(Z\), \(Y\), \(X\), and \(\mathbb{l}\), respectively.
Implementing Transformers. The abstract transformers on abstract density matrices can be implemented using operations in \(\blacksquare\), \(\mathsf{Z}_{4}\), \(\mathsf{C}\), and \(\mathsf{P}_{n}\). As \(\blacksquare\), \(\mathsf{Z}_{4}\), and the per-qubit components of \(\mathsf{P}_{n}\) are small finite domains, we can implement operations in these domains using lookup tables, which avoids the need for bit manipulation tricks. While such tricks are applicable in our context (e.g., [2] uses bit manipulations to compute products of Pauli elements), they are generally hard to come up with [18]. In contrast, the efficiency of our lookup tables is comparable to that of bit manipulation tricks, without requiring new insights for new operations.
For example, to evaluate \(\emptyset+\{0\}\) over \(\blacksquare\) using Eq. (7), we encode the first argument as \(00_{2}\) and the second argument as \(01_{2}\). Looking up entry \((00_{2},01_{2})\) in a two-dimensional pre-computed table then yields \(00_{2}\), the encoding of the correct result \(\emptyset\). We note that we cannot implement this operation directly using a XOR instruction on encodings, as this would yield incorrect results: \(00_{2}\) XOR \(01_{2}=01_{2}\), which is incorrect.
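The following Python sketch (ours) spells out this encoding and the pre-computed addition table for \(\blacksquare\), including the failing XOR shortcut:

```python
def decode(e):   # encoding -> set; bit i indicates whether value i is in the set
    return {v for v in (0, 1) if e >> v & 1}

def encode(s):   # set -> encoding
    out = 0
    for v in s:
        out |= 1 << v
    return out

# Precompute the 4x4 table for lifted addition (Eq. 7) once.
ADD_TABLE = [[encode({(a + b) % 2 for a in decode(x) for b in decode(y)})
              for y in range(4)] for x in range(4)]

assert ADD_TABLE[0b00][0b01] == 0b00            # {} + {0} = {}
assert ADD_TABLE[0b01][0b10] == 0b10            # {0} + {1} = {1}
assert (0b00 ^ 0b01) != ADD_TABLE[0b00][0b01]   # a plain XOR would be wrong
```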
Gaussian Elimination. To efficiently solve equations modulo two as discussed in SS5, we implemented a custom Gaussian elimination relying on bit-packing (i.e., storing 32 boolean values in a single 32-bit integer). In the future, it would be interesting to explore if Gaussian elimination could be avoided altogether, as suggested by previous works [2, 17].
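For illustration (ours), bit-packing turns one row elimination into a single XOR per machine word:

```python
import numpy as np

# Two GF(2) matrix rows, with 32 entries packed into each uint32 word.
row_i = np.array([0b1011, 0b0001], dtype=np.uint32)   # 64 packed entries
row_j = np.array([0b0011, 0b0001], dtype=np.uint32)
row_j ^= row_i   # row_j := row_j + row_i (mod 2): 64 additions in 2 XORs
```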
Testing. To reduce the likelihood of implementation errors, we have complemented Abstraqt with extensive automated tests. We test that abstract transformers \(f^{\sharp}\) are sound with respect to their concrete functions \(f\), that is to say that \(f\circ\gamma(\boldsymbol{x})\subseteq\gamma\circ f^{\sharp}(\boldsymbol{x})\).
We check this inclusion for multiple selected samples of \(f\) and \(\boldsymbol{x}\) (typically corner cases).
This approach is highly effective at catching implementation errors, which we have found in multiple existing tools as shown in SS7.
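As an illustration (ours), such a soundness test for the interval multiplication transformer from Tab. 1, using the Interval sketch from SS2, samples abstract inputs and checks that every concrete result is covered:

```python
import random

def check_mul_soundness(trials: int = 1000):
    for _ in range(trials):
        x = Interval(*sorted(random.uniform(-5, 5) for _ in range(2)))
        y = Interval(*sorted(random.uniform(-5, 5) for _ in range(2)))
        z = x * y                            # abstract transformer under test
        for _ in range(10):                  # sample concrete elements of gamma(x), gamma(y)
            a = random.uniform(x.lo, x.hi)
            b = random.uniform(y.lo, y.hi)
            assert z.lo <= a * b <= z.hi     # the concrete result must be covered

check_mul_soundness()
```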
## 7 Evaluation
We now present our evaluation of Abstraqt, demonstrating that it can establish circuit properties no existing tool can establish.
### 7.1 Benchmarks
To evaluate Abstraqt, we generated 8 benchmark circuits, summarized and visualized in Tab. 4.
Benchmark Circuit Generation. Each circuit operates on 62 qubits, partitioned into 31 _upper_ qubits and 31 _lower_ qubits. We picked the limit of 62 qubits because our baseline ESS (discussed shortly) only supports up to 63 qubits; Abstraqt is not subject to such a limitation.
Each circuit operates on initial state \(\ket{0\cdots 0}\) and is constructed to ensure that all lower qubits are eventually reverted to state \(\ket{0}\). We chose this invariant as it can be expressed for each of the evaluated tools, as we will show in SS7.2. Further, as some tools can only check this for one qubit at a time, we only check if the very last qubit is reverted to \(\ket{0}\), instead of running 31 independent checks (which would artificially slow down some baselines). Note that this check is of equivalent difficulty for all lower qubits.
Most of the circuits are built from three concatenated subcircuits: first \(c_{1}\), which only modifies the upper qubits, then \(c_{2}\), which only modifies the lower qubits (potentially using gates controlled by the upper qubits), and finally \(c_{3}\), which is generated by inverting \(c_{2}\) and optimizing the result using PyZX [19]. Thus, running \(c_{1};c_{2};c_{3}\) on initial state \(\ket{0\cdots 0}\) reverts the _lower_ qubits to \(\ket{0}\), but this is hard to establish as \(c_{3}\) is obfuscated by an optimization pass. Further, in all but the first two circuits, the upper and lower qubits are entangled with each other. Tab. 4 details how \(c_{1}\) and \(c_{2}\) were generated for each circuit. Note that Cliff+T;H;CZ+RX and CCX+H;Cliff differ slightly from this construction. In the former, the circuit is built as \(c_{1};c_{h};c_{2};c_{3};c_{h}\), where \(c_{h}\) applies an \(H\) gate to each of the lower qubits in the circuit. In the latter, the circuit also modifies the upper
\begin{table}
\begin{tabular}{l l l} \hline \hline
**Circuit** & **Generation** & **Gates** \\ \hline
Cliff;Cliff & \(c_{2}\in\big(\{o(q)\mid o\in\{H,S\},q\in\text{lower}\}\cup\{CX(q_{1},q_{2})\mid q_{1},q_{2}\in\text{lower}\}\big)^{10^{4}}\); \(c_{3}=\text{opt}(c_{2}^{\dagger})\); return \(c_{1};c_{2};c_{3}\) & 26k \(\times\) Clifford \\
Cliff+T;Cliff & \(c_{2}\in\big(\{o(q)\mid o\in\{H,S\},q\in\text{lower}\}\cup\{CX(q_{1},q_{2})\mid q_{1},q_{2}\in\text{lower}\}\big)^{10^{4}}\); \(c_{3}=\text{opt}(c_{2}^{\dagger})\); return \(c_{1};c_{2};c_{3}\) & 23k--25k \(\times\) Clifford \\
Cliff+T;CX+T & \(c_{2}\in\big(\{CX(q_{1},q_{2})\mid q_{1}\in\text{upper},q_{2}\in\text{lower}\}\cup\{T(q)\mid q\in\text{lower}\}\big)^{10^{4}}\); \(c_{3}=\text{opt}(c_{2}^{\dagger})\); return \(c_{1};c_{2};c_{3}\) & 18k \(\times\) Clifford, 9k \(\times\) \(T\), 40 \(\times\) \(T^{\dagger}\) \\
Cliff+T;H;CZ+RX & \(c_{1}\in\big(\{o(q)\mid o\in\{H,S\},q\in\text{upper}\}\cup\{CX(q_{1},q_{2})\mid q_{1},q_{2}\in\text{upper}\}\big)^{10^{4}}\); \(c_{2}\in\big(\{CX(q_{1},q_{2})\mid q_{1}\in\text{upper},q_{2}\in\text{lower}\}\cup\{RX_{\frac{\pi}{4}}(q)\mid q\in\text{lower}\}\big)^{10^{4}}\); \(c_{3}=\text{opt}(c_{2}^{\dagger})\); return \(c_{1};c_{h};c_{2};c_{3};c_{h}\) & 5k \(\times\) \(RX_{\frac{\pi}{4}}\), 4k \(\times\) \(T\), 45 \(\times\) \(T^{\dagger}\) \\
CCX+H;Cliff & \(c_{1}\in\big(\{CCX(q_{1},q_{2},q_{3})\mid q_{1},q_{2},q_{3}\in\text{upper}\}\cup\{H(q)\mid q\in\text{upper}\}\big)^{10^{4}}\); \(c_{2}\in\big(\{o(q)\mid o\in\{H,S\},q\in\text{lower}\}\cup\cdots\big)^{10^{4}}\) & \\ \hline \hline
\end{tabular}
\end{table}
Table 4: Generation of the benchmark circuits.
qubits using gates controlled by the lower ones. Overall, our benchmark covers various gates, with all circuits applying Clifford gates, three applying \(T\) gates, two applying \(CCX\) gates, one applying \(RX_{\frac{\pi}{4}}\) gates (one-qubit rotation around the \(X\) axis by \(\frac{\pi}{4}\) radians), and one applying \(RZ_{2}\) gates (one-qubit rotation around the \(Z\) axis by 2 radians).
The last benchmark applies internal measurements. It first generates a \(GHZ\) state \(\frac{1}{\sqrt{2}}\ket{0\cdots 0}+\frac{1}{\sqrt{2}}\ket{1\cdots 1}\), and collapses it to \(\ket{0\cdots 0}\) or \(\ket{1\cdots 1}\) by measuring the first qubit. Then, it resets all qubits to \(\ket{0}\) except for the first one. It then repeats this process, with the first qubit starting in either \(\ket{0}\) or \(\ket{1}\). Thus, the state before measurement is either \(\frac{1}{\sqrt{2}}\ket{0\cdots 0}+\frac{1}{\sqrt{2}}\ket{1\cdots 1}\) or \(\frac{1}{\sqrt{2}}\ket{0\cdots 0}-\frac{1}{\sqrt{2}}\ket{1\cdots 1}\), but every repetition still resets all lower qubits to \(\ket{0}\).
Discussion. Overall, all benchmarks are constructed to revert the lower qubits to \(|0\rangle\), but in a non-obvious way. As fully precise simulation of most benchmarks is unrealistic, we expect that over-abstraction is typically necessary to establish this fact.
### Baselines
We now discuss how we instantiated existing tools to establish that a circuit \(c\) evolves a qubit \(q\) to state \(\ket{0}\). Overall, we considered two tools based on stabilizer simulation (ESS [5] and QuiZX [4]), one tool based on abstract interpretation (YP21 [20], in two different modes), and one tool based on state vectors (Statevector as implemented by Qiskit [14]).
ESS. Qiskit [14] provides an extended stabilizer simulator implementing the ideas published in [5] which (i) decomposes quantum circuits into Clifford circuits, (ii) simulates these circuits separately, and (iii) performs measurements by an aggregation across these circuits. To check if a circuit \(c\) consistently evolves a qubit \(q\) to \(|0\rangle\), we check if \(c\) extended by a measurement of \(q\) always yields \(0\). To run our simulation, we used default parameters.
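A minimal sketch of this check is shown below, assuming Qiskit and qiskit-aer are installed; the toy circuit stands in for the benchmarks of Tab. 4, and sampling only provides statistical evidence rather than a proof.

```python
# Minimal sketch of the ESS baseline check (assumes qiskit, qiskit-aer).
from qiskit import ClassicalRegister, QuantumCircuit, transpile
from qiskit_aer import AerSimulator

def always_measures_zero(circuit, qubit, shots=1024):
    qc = circuit.copy()
    creg = ClassicalRegister(1)
    qc.add_register(creg)
    qc.measure(qubit, creg[0])          # extend the circuit by measuring `qubit`
    sim = AerSimulator(method="extended_stabilizer")
    counts = sim.run(transpile(qc, sim), shots=shots).result().get_counts()
    return set(counts) == {"0"}         # every sampled outcome was 0

# Toy usage: a Clifford circuit that provably returns qubit 1 to |0>.
toy = QuantumCircuit(2)
toy.h(0); toy.cx(0, 1); toy.cx(0, 1); toy.h(0)
print(always_measures_zero(toy, qubit=1))  # True
```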
QuiZX. QuiZX [4] improves upon [5] by alternating between decomposing circuits (splitting non-Clifford gates into Clifford gates) and optimizing the decomposed circuits (which may further reduce non-Clifford gates). We can use QuiZX to establish that a qubit is in state \(|0\rangle\) by "plugging" output \(q\) as \(|1\rangle\) and establishing that the probability of this output is zero. 4
Footnote 4: The use of plugging is described on [https://github.com/Quantomatic/quizx/issues/9](https://github.com/Quantomatic/quizx/issues/9).
YP21. Like Abstraqt, YP21 [20] also uses abstract interpretation, but relies on projectors instead of stabilizer simulation. Specifically, it encodes the abstract state of selected (small) subsets of qubits as projectors \(\{P_{j}\}_{j\in\mathcal{J}}\), which constrain the state of these qubits to the range of \(P_{j}\).
To check if a qubit \(q\) is in state \(|0\rangle\), we check if the subspace resulting from intersecting the range of all \(P_{j}\) is a subset of the range of \(\mathbb{I}+Z_{(q)}\)--an operation which is natively supported by YP21.
When running YP21, we used the two execution modes suggested in its original evaluation [20]. The first mode tracks the state of all pairs of qubits, while the second considers subsets of 5 qubits that satisfy a particular condition (for details, see [20, §9]). Because [20] does not discuss which execution mode to pick for new circuits, we evaluated all circuits in both modes.
We note that because YP21 does not support \(CX(a,b)\) for \(a>b\), we instead encoded such gates as \(H(a);H(b);CX(b,a);H(b);H(a)\).
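This encoding relies on the standard identity that conjugating a CNOT by Hadamards on both qubits swaps its control and target; a quick NumPy check is shown below, where the basis ordering (first qubit most significant) is our convention for writing the two matrices.

```python
# Numerical check of CX(a,b) = H(a); H(b); CX(b,a); H(b); H(a).
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
HH = np.kron(H, H)
CX_ab = np.array([[1, 0, 0, 0],   # control: first qubit, target: second
                  [0, 1, 0, 0],
                  [0, 0, 0, 1],
                  [0, 0, 1, 0]])
CX_ba = np.array([[1, 0, 0, 0],   # control: second qubit, target: first
                  [0, 0, 0, 1],
                  [0, 0, 1, 0],
                  [0, 1, 0, 0]])

# The gate sequence composes to (H (x) H) . CX(b,a) . (H (x) H).
assert np.allclose(HH @ CX_ba @ HH, CX_ab)
```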
Statevector. Qiskit [14] further provides a simulator based on state vectors, which we also used for completeness.
Abstraqt. In Abstraqt, we can establish that a qubit \(q\) is in state \(|0\rangle\) by measuring the final abstract state \(\mathbf{\rho}\) in basis \(Z_{(q)}\) and checking if the probability of obtaining \(|1\rangle\) is \(0\).
Experimental Setup. We executed all experiments on a desktop machine with 16 GB RAM and 4 cores at 3.4 GHz, running Ubuntu 18.04. Because some tools consumed excessive amounts of memory, we limited them to 12 GB of RAM. This was not necessary for Abstraqt, which never required more than 600 MB of RAM.
### Results
Tab. 5 summarizes the results when using all tools discussed in §7.2 to establish that the last qubit in each circuit is in state \(|0\rangle\). Overall, it demonstrates that while Abstraqt can establish this for all benchmarks within minutes, QuiZX can only establish it for one benchmark and all other tools cannot establish it for any circuit. Further, we found that for some circuits the established simulation tool ESS yields incorrect results. We now discuss the results of each tool in more detail.
MeasureGHZ. Importantly, no baseline tool except Abstraqt can simultaneously simulate both outcomes of a measurement without incurring an exponential blow-up. Therefore, for MeasureGHZ, we consider internal measurements as an unsupported operation in these tools. We note that we could randomly select one measurement outcome and simulate the remainder of the circuit for it, but then we can only establish that the final state is \(|0\rangle\) _for a given sequence of measurement outcomes_. In contrast, a single run of Abstraqt can establish that the final state is \(|0\rangle\) for all possible measurement outcomes (see also §5.2).
ESS. Surprisingly, ESS does not simulate circuits Cliff;Cliff and Cliff+T;Cliff correctly. Instead, it samples the impossible measurement outcome of 1 in around 50% of cases. Interestingly, smaller circuits generated with the same process are handled correctly. It is reassuring to see that Abstraqt allows us to discover such instabilities in established tools.
Note that it may be surprising that ESS can simulate circuits containing many \(T\) gates--this is because Qiskit can establish that the Clifford+T part of circuit Cliff+T;Cliff is irrelevant when measuring the last qubit. In contrast, for most remaining circuits, ESS runs out of memory as it decomposes the circuit into exponentially many Clifford circuits.
QuiZX. QuiZX also fails to simulate Cliff;Cliff, which we conjecture is due to a bug for circuits that do not contain non-Clifford gates. After adding a single \(T\) gate, or when running Cliff+T;Cliff, simulation is successful. However, we note that Abstraqt is significantly faster than QuiZX, possibly because it does not need to repeatedly optimize circuits.
The results on the remaining circuits are analogous to ESS, except that QuiZX sometimes times out instead of running out of memory. Further, we note that QuiZX runs into an internal error when simulating \(\texttt{RZ}_{2}\)+H;CX.
YP21. YP21 typically either times out, throws an internal error, does not support a relevant operation (e.g., measurements or \(RZ_{2}\)), or returns incorrect results. The latter is because on some circuits, mode 2 chooses an empty set of projectors, which leads to trivially unsound results. When YP21 does terminate, it is too imprecise to establish that the last qubit is in state \(|0\rangle\).
\begin{table}
\begin{tabular}{l l l l l l l}
**Label** & **Abstraqt** & **ESS** & **QuiZX** & **YP21 (mode 1)** & **YP21 (mode 2)** & **Statevec.** \\ \hline
Cliff;Cliff & ✓ (13s) & ✗ (12s, incorr.) & ✗ (0s, error) & ✗ (3.1h, imprec.) & ✗ (5s, incorr.) & ✗ (OOM) \\
Cliff+T;Cliff & ✓ (11s) & ✗ (12s, incorr.) & ✓ (3.1h) & ✗ (3.0h, imprec.) & ✗ (5s, incorr.) & ✗ (OOM) \\
Cliff+T;CX+T & ✓ (18s) & ✗ (OOM) & ✗ (OOM) & ✗ (3.1h, imprec.) & ✗ (5s, incorr.) & ✗ (OOM) \\
Cliff+T;H;CZ+RX & ✓ (21s) & ✗ (OOM) & ✗ (>6h) & ✗ (3.4h, imprec.) & ✗ (6s, incorr.) & ✗ (OOM) \\
CCX+H;Cliff & ✓ (70s) & ✗ (OOM) & ✗ (>6h) & ✗ (>6h) & ✗ (OOM) & ✗ (OOM) \\
CCX+H;CX+T & ✓ (77s) & ✗ (OOM) & ✗ (OOM) & ✗ (>6h) & ✗ (OOM) & ✗ (OOM) \\
RX\({}_{2}\)+H;CX+T & ✓ (77s) & ✗ (OOM) & ✗ (OOM) & ✗ (>6h) & ✗ (OOM) & ✗ (OOM) \\
RZ\({}_{2}\)+H;CX & ✓ (12s) & ✗ (OOM) & ✗ (error, 0s) & ✗ (unsupp.) & ✗ (unsupp.) & ✗ (OOM) \\
MeasureGHZ & ✓ (4s) & ✗ (unsupp.) & ✗ (unsupp.) & ✗ (unsupp.) & ✗ (unsupp.) & ✗ (unsupp.) \\
\end{tabular}
\end{table}
Table 5: Results when running simulators on benchmarks from Tab. 4. OOM indicates running out of memory, unsupp. indicates the tool does not support an operation present in the circuit, and incorr. indicates incorrect simulation results.
Statevector. Unsurprisingly, statevector simulation cannot handle the circuits in Tab. 5. This is because it requires space exponential in the number of qubits, which precludes simulating any of the benchmarks.
### Discussion and Limitations
We note that our benchmarks are designed to showcase successful applications of Abstraqt where it outperforms existing tools. Of course, Abstraqt is not precise on all circuits--e.g., Abstraqt quickly loses precision on general Clifford+T circuits (analogously to the imprecise measurement discussed in §3).
We expect that for many real-world circuits, existing approaches work better than the current implementation of Abstraqt. However, as Abstraqt only over-abstracts the first stabilizer simulation generalized to non-Clifford gates [2, §VII-C], we believe it paves the way to also over-abstract more recent stabilizer simulators. For example, ESS [5] operates on so-called _CH-forms_ which, like the generalized stabilizer simulation underlying Abstraqt, can be encoded using bits and complex numbers. Hence, it seems plausible that our ideas could be adapted to abstract ESS. QuiZX operates on _ZX-diagrams_ consisting of graphs whose nodes are parametrized by rotation angles \(\alpha\). Again, a promising direction for future research is introducing abstract ZX-diagrams that support abstract rotation angles. This is particularly promising because both ESS and QuiZX scale better in the number of \(T\) gates than [2, §VII-C]: with \(2^{n}\) instead of \(4^{n}\).
Overall, we believe that all tools in Tab. 5 are valuable to analyze quantum circuits. We are hoping that addressing some limitations of the considered baselines (e.g., fixing bugs in QuiZX and ESS) and cross-pollinating ideas (e.g., extending QuiZX by abstract interpretation) will allow the community to benefit from the fundamentally different mathematical foundations of all tools.
## 8 Related Work
Here, we discuss works related to our goal and methods.
Quantum Abstract Interpretation. Some existing works have investigated abstract interpretation for simulating quantum circuits [20, 21, 22]. As [20] is not specialized for Clifford circuits, it is very imprecise on the circuits investigated in §7: it cannot derive that the lower qubits are \(|0\rangle\) for any of them. While [21, 22] are inspired by stabilizer simulation, they only focus on determining whether certain qubits are entangled, whereas Abstraqt can extract more precise information about the state. Further, both tools are inherently imprecise on non-Clifford gates--in contrast, a straightforward extension of Abstraqt can treat some non-Clifford gates precisely at the exponential cost of not merging summands.
Stabilizer Simulation. The Gottesman-Knill theorem [1] established that stabilizers can be used to efficiently simulate Clifford circuits. Stim [17] is a recent implementation of such a simulator, which only supports Clifford gates and Pauli measurements.
Stabilizer simulation was extended to allow for non-Clifford gates at an exponential cost, while still allowing efficient simulation of Clifford gates [2, §VII-C]. Various works build upon this insight, handling Clifford gates efficiently but suffering from an exponential blow-up on non-Clifford gates [3, 4, 5, 6, 7]. In our evaluation, we demonstrate that Abstraqt extends the reach of state-of-the-art stabilizer simulation by comparing to two tools from this category, ESS [5] (chosen because it is implemented in the popular Qiskit library) and QuiZX [4] (chosen because it is a recent tool reporting favorable runtimes).
Verifying Quantum Programs. Another approach to establishing circuit properties is end-to-end formal program verification, as developed in [23] for instance. However, this approach typically requires new insights for each program it is applied to. Even though recent works have greatly improved verification automation, proving even the simplest programs still requires a significant time investment [24], whereas our approach can analyze them without any human time investment.
Finally, the work [25] automatically generates rich invariants, but is exponential in the number of qubits, limiting its use to small circuits.
## 9 Conclusion
In this work, we have demonstrated that combining abstract interpretation with stabilizer simulation allows us to establish circuit properties that are intractable otherwise.
Our key idea was to over-abstract the behavior of non-Clifford gates in the generalized stabilizer simulation of Aaronson and Gottesman [2] by merging summands in the sum representation of the quantum state's density matrix. Our carefully chosen abstract domain allows us to define efficient abstract transformers that approximate each of the concrete stabilizer simulation functions, including measurement.
|
2310.02071 | Towards End-to-End Embodied Decision Making via Multi-modal Large
Language Model: Explorations with GPT4-Vision and Beyond | In this study, we explore the potential of Multimodal Large Language Models
(MLLMs) in improving embodied decision-making processes for agents. While Large
Language Models (LLMs) have been widely used due to their advanced reasoning
skills and vast world knowledge, MLLMs like GPT4-Vision offer enhanced visual
understanding and reasoning capabilities. We investigate whether
state-of-the-art MLLMs can handle embodied decision-making in an end-to-end
manner and whether collaborations between LLMs and MLLMs can enhance
decision-making. To address these questions, we introduce a new benchmark
called PCA-EVAL, which evaluates embodied decision-making from the perspectives
of Perception, Cognition, and Action. Additionally, we propose HOLMES, a
multi-agent cooperation framework that allows LLMs to leverage MLLMs and APIs
to gather multimodal information for informed decision-making. We compare
end-to-end embodied decision-making and HOLMES on our benchmark and find that
the GPT4-Vision model demonstrates strong end-to-end embodied decision-making
abilities, outperforming GPT4-HOLMES in terms of average decision accuracy
(+3%). However, this performance is exclusive to the latest GPT4-Vision model,
surpassing the open-source state-of-the-art MLLM by 26%. Our results indicate
that powerful MLLMs like GPT4-Vision hold promise for decision-making in
embodied agents, offering new avenues for MLLM research. Code and data are open
at https://github.com/pkunlp-icler/PCA-EVAL/. | Liang Chen, Yichi Zhang, Shuhuai Ren, Haozhe Zhao, Zefan Cai, Yuchi Wang, Peiyi Wang, Tianyu Liu, Baobao Chang | 2023-10-03T14:13:36Z | http://arxiv.org/abs/2310.02071v4 | Towards End-to-End Embodied Decision Making via Multi-modal Large Language Model: Explorations with GPT4-Vision and Beyond
###### Abstract
In this study, we explore the potential of Multimodal Large Language Models (MLLMs) in improving embodied decision-making processes for agents. While Large Language Models (LLMs) have been widely used due to their advanced reasoning skills and vast world knowledge, MLLMs like GPT4-Vision offer enhanced visual understanding and reasoning capabilities. We investigate whether state-of-the-art MLLMs can handle embodied decision-making in an end-to-end manner and whether collaborations between LLMs and MLLMs can enhance decision-making. To address these questions, we introduce a new benchmark called **PCA-EVAL**, which evaluates embodied decision-making from the perspectives of **P**erception, **C**ognition, and **A**ction. Additionally, we propose **HOLMES**, a multi-agent cooperation framework that allows LLMs to leverage MLLMs and APIs to gather multimodal information for informed decision-making. We compare end-to-end embodied decision-making and HOLMES on our benchmark and find that the GPT4-Vision model demonstrates strong end-to-end embodied decision-making abilities, outperforming GPT4-HOLMES in terms of average decision accuracy (+3%). However, this performance is exclusive to the latest GPT4-Vision model, surpassing the open-source state-of-the-art MLLM by 26%. Our results indicate that powerful MLLMs like GPT4-Vision hold promise for decision-making in embodied agents, offering new avenues for MLLM research.
## 1 Introduction
The capacity to make well-informed decisions is essential for the survival and success of living organisms in their respective environments. Similarly, a major goal in embodied artificial intelligence is to develop agents, like robots, with sophisticated decision-making abilities. This could enable artificial agents to intelligently interact with their surroundings and efficiently accomplish a variety of real-world tasks such as autonomous driving (Hu et al., 2023; Wayve, 2023), domestic assistance (Kolve et al., 2017; Shridhar et al., 2020; Huang et al., 2022b), and game playing (Fan et al., 2022; Wang et al., 2023a; Zhu et al., 2023b). Recently, there has been a notable increase in leveraging exceptional reasoning capabilities and world knowledge of Large Language Models (LLMs) to enhance decision making in agents. However, LLMs are primarily designed to process textual context, creating a modality gap (Liang et al., 2022; Ren et al., 2023a) for the LLM-powered agent when dealing with multimodal observations in real-world scenarios.
To bridge this modality gap, a common approach is converting multimodal observations into text using various APIs (Wu et al., 2023; Yang et al., 2023). However, this conversion can result in information loss during the transition from multimodal to unimodal text. At the same time, recent advances in Multimodal Large Language Models (MLLMs), particularly Visual Large Language
Models (VLLMs) like GPT4-Vision (OpenAI, 2023a), have showcased impressive general-purpose visual understanding and reasoning abilities (Zhu et al., 2023; Dai et al., 2023; Liu et al., 2023; Li et al., 2023; Li et al., 2023; Zhao et al., 2023). These VLLMs can directly perceive the visual information rather than relying on textual intermediaries, potentially enabling more sophisticated reasoning and decision making for embodied agents operating in complex real-world environments. Considering these developments, two research questions naturally arise: **(1)** Can current state-of-the-art VLLMs perform various embodied decision making tasks in an end-to-end manner? What are the current strengths and limitations when compared to LLM-powered agents? **(2)** Can LLMs and VLLMs collaborate to enhance embodied decision-making capabilities?
However, addressing these questions is challenging due to the absence of an existing evaluation benchmark that satisfies the following criteria: **(1)** supporting end-to-end embodied decision making by providing agents with direct multimodal observations; **(2)** enabling multi-dimensional evaluation of the decision-making process, encompassing perception, reasoning, and action perspectives, rather than relying solely on final rewards or success rate; and **(3)** covering diverse domains, drawing from different areas of embodied AI. The development of more comprehensive benchmarks that meet these desiderata could substantially advance research on decision making in embodied systems.
In this paper, we propose a new benchmark, **PCA-EVAL**, for evaluating the embodied decision-making ability of agents from three perspectives, i.e., **P**erception, **C**ognition, and **A**ction. Our benchmark covers three domains as illustrated in Figure 1: autonomous driving, domestic assistance, and game-playing. The corresponding data are collected from real-world transportation scenes (Zhu et al., 2016), a domestic housekeeper environment based on ALFRED (Shridhar et al., 2020), and the open-world environment MineDojo (Fan et al., 2022), based on the famous game Minecraft. This diverse set of domains allows for a comprehensive assessment of embodied decision-making capabilities across various contexts. Distinct from the MDP-based evaluation that solely focuses on maximizing cumulative rewards, we divide the sequential decision making process into multiple one-step decision problems based on a task-specific topology graph. Each instance in the benchmark consists of a 6-element tuple: \(<\)_image_, _question_, _action candidates_, _answer_, _reason_, _key concept_\(>\). Adopting this approach offers two major advantages: **(1)** It enables a more comprehensive evaluation of the decision-making process, with each decision step being assessed in terms of perception, cognition, and action. **(2)** The evaluation can be conducted outside complex simulation environments, simplifying the process of evaluating different agents and their performance.
Figure 1: Domain and required ability distribution of PCA-EVAL.
With the proposed benchmark, we conduct two series of evaluation: (1) We examine multiple state-of-the-art VLLMs, like InstructBLIP (Dai et al., 2023), MMICL (Zhao et al., 2023), QwenVL-Chat (Bai et al., 2023) and the latest GPT4-Vision (OpenAI, 2023a), in an end-to-end decision making context. (2) We introduce **HOLMES**,1 a multi-agent cooperation framework. In this framework, we provide large language models, such as ChatGPT (OpenAI, 2022), GPT4 (OpenAI, 2023b), and Vicuna (Chiang et al., 2023), with descriptions of vision models like image captioning, object detection, Optical Character Recognition (OCR), and traffic sign detection models. Additionally, we supply descriptions of valid APIs within the simulated environment. The large language model subsequently initiates a search for clues pertaining to the question by engaging in a multi-turn conversation. This process involves alternating between invoking models or APIs to find clues and analyzing the discovered clues to facilitate informed decision making.
Footnote 1: The system is aptly named after the renowned detective, Sherlock Holmes.
From our experimental results, we discerned that within the end-to-end framework, GPT4-Vision significantly outshines the contemporary state-of-the-art vision-language model, MMICL, boasting an average action accuracy improvement of 26%. Notably, GPT4-Vision can furnish a detailed rationale behind its embodied decision-making process, a feature absent in present open-source VLLMs. When assessing HOLMES models, GPT4 consistently emerges superior across all three domains. Drawing a comparison between GPT4-Vision and HOLMES, we observed that GPT4-Vision surpasses GPT4-HOLMES with multiple expert visual APIs in terms of cognition and action scores. This underscores its broad adaptability across a spectrum of visual tasks and its good fusion of visual understanding, world knowledge, and embodied decision making.
In summary, we introduce three key contributions in this study:
1. We propose PCA-EVAL, a novel evaluation benchmark for multi-domain embodied decision making that evaluates performance in perception, cognition, and action.
2. We present HOLMES, a multi-agent cooperation framework designed to tackle various embodied decision-making tasks that include multimodal observations. It mimics the process of playing a detective game in which the LLM uncovers clues by utilizing various multimodal models or APIs supplied by the environment.
3. We conducted a systematic comparison of two embodied decision-making methods: end2end and HOLMES, across various models. Our findings suggest that when utilizing MLLM with the end2end method, it not only achieves decision accuracy better than the top-performing model (GPT-4) in HOLMES but also secures a superior cognition score. However, this level of performance is exclusive to the latest GPT4-Vision model, which significantly outpaces the open-source state-of-the-art VLLMs.
We believe that powerful MLLMs like GPT4-Vision pave a new and promising way toward decision making in embodied agents using LLMs. It enables decisions across diverse domains to be made and justified seamlessly in an end-to-end manner. PCA-EVAL serves as an effective metric for evaluating the embodied decision-making capabilities of both end-to-end and HOLMES-based models.
## 2 Related Work
**Embodied Decision Making.** Research on embodied decision-making is an emerging trend for artificial intelligent agents to interact with their surroundings and accomplish numerous tasks. This necessitates proficiency in vision perception, world knowledge, and commonsense reasoning, areas where a large language model can provide some level of expertise. We group prior work on embodied decision-making with LLM into two main trends. The first trend is to transform multimodal information, including object and scenery identification, the current states of AI agents, and the feedback from the environments, to texts. Text-based LLMs can then reason over the textual clues to determine the next action towards completing a designated task (Huang et al., 2022; Li et al., 2022; Huang et al., 2022; Chen et al., 2023). This line of research divides the entire decision-making process into two phases: (1) information seeking, usually involving VLLMs to verbalize the current status of AI agents in the vision-based environment with natural language; (2) reasoning and planning with text-based LLMs to decide what the AI agent should do in the next step with textual clues. The other line of research uses multimodal LLMs directly for end-to-end decision making,
such as PaLM-E (Driess et al., 2023b). End-to-end decision making poses greater challenges to multimodal LLMs as it requires the combination of different functionalities including perception, cognition, and action, whereas decision making without explicit multiple steps mitigates the error propagation between information seeking and reasoning.
LLM-Powered Agents. Large language models pre-trained on large-scale multimodal (including text, image, video, etc.) corpora demonstrate impressive emergent abilities and immense popularity (Brown et al., 2020; Wei et al., 2022), and have seen tremendous success across various domains covering various natural language processing and computer vision tasks (Radford et al., 2019; Chowdhery et al., 2022; Touvron et al., 2023; Alayrac et al., 2022; Zhu et al., 2023a; Li et al., 2023a). Consequently, using LLMs to empower AI agents (Xi et al., 2023; Liu et al., 2023b; Park et al., 2023; Wang et al., 2023d; Yuan et al., 2023) becomes more and more promising. Specifically, we can employ LLMs to enhance the decision making ability of the agents (Nakano et al., 2022; Yao et al., 2022; Li et al., 2023c; Song et al., 2023), expanding their perception and action space through strategies like tool utilization (Schick et al., 2023; Qin et al., 2023; Lu et al., 2023). Although LLM-based agents demonstrate reasoning and planning abilities through techniques like Chain of Thought or problem decomposition (Wei et al., 2023; Yao et al., 2023; Kojima et al., 2022), they inherently lack visual perception and are limited to discrete textual content. Therefore, integrating visual information or other modalities can offer agents a broader context and a more precise understanding (Driess et al., 2023a), enhancing their environmental perception. However, no evaluation protocol or benchmark is currently available to evaluate decision making within the multimodal context.
## 3 Pca-Eval
In this section, we propose to evaluate the decision-making ability of embodied agents from three perspectives: perception, cognition, and action. Accordingly, we present a novel benchmark named PCA-EVAL. Our PCA-EVAL benchmark consists of 300 multimodal multiple-choice questions with diverse embodied topics and annotations of their answers with corresponding explanations.
As shown in Figure 2, each instance in the benchmark consists of a 6-element tuple: \(<\)_image_, _question_, _action candidates_, _answer_, _reason_, _key concept_\(>\). The image is collected from various embodied environments, like transportation scenes, housekeeper environments, and game worlds in Minecraft. Questions, action candidates, and answers are derived from real tasks within the corresponding environment. The reasoning explains why the answer is the best choice for the current image, while the key concept highlights the most question-related aspect in the image.
Unlike traditional visual question-answering datasets that emphasize visual perception (e.g., VQA (Goyal et al., 2017)), visual reasoning (e.g., NLVR (Suhr et al., 2017)), or world knowledge (e.g., OKVQA (Marino et al., 2019)), the most distinctive characteristic of PCA-EVAL is its grounding in embodied actions. Compared to embodied simulation environments like ALFRED (Shridhar et al., 2020) and Minedojo (Fan et al., 2022), PCA-EVAL proves to be more effective in evaluating various LLM-based agents. This is primarily due to PCA-EVAL's provision of high-level actions that can be readily implemented or programmed using the low-level actions in the corresponding domains. The high-level actions are more comprehensible for LLMs than the direct low-level actions like robotic movements in the simulation environments because (1) the high-level actions are in the form of natural languages, making it easier for LLMs to understand the meaning and connect with world knowledge. (2) LLMs are not grounded with low-level actions during the pretraining or finetuning stage, making it hard for LLMs to understand the consequences of executing an action.
Figure 2: An instance of PCA-EVAL.
To answer a question in PCA-EVAL, the agent must possess the following abilities: (1) Perception: accurately identify the concept related to the question within the image; (2) Cognition: engage in reasoning based on image perception and worldly knowledge; (3) Action: comprehend the potential actions, selecting the one that best aligns with the outcome of the reasoning process. A deficiency in any of these abilities would inevitably result in an incorrect answer, posing a significant challenge to the more complex capabilities of embodied agents. Although challenging, all the aforementioned abilities are essential for the decision-making process in embodied environments.
### Evaluation Metrics
For each instance, we instruct the agent to deliver an answer triplet comprising an image description \(d\), a reasoning process \(r\), and a final action \(a\), represented as \(<d,r,a>\). By comparing the model prediction with the ground truth answer, we can obtain a fine-grained diagnosis of the decision making process.
Perception Score. The Perception Score (P-Score) measures the model's ability to accurately perceive and interpret the observation. It is computed based on whether the agent's output image description \(d\) includes the key concept of the instance. If the agent accurately describes the question-related key concept in the image, the P-Score is assigned a value of 1; otherwise, it is assigned a value of 0. For the instance in Figure 2, the agent should output "clear road" or "no car visible" or other semantically equivalent concepts in its description of the image to get the perception score.
Cognition Score. The Cognition Score (C-Score) assesses the model's ability to reason, comprehend, and make informed decisions based on the perceived input data and world knowledge. The score is 1 if the reasoning process is correct; otherwise, the score is 0. For the instance in Figure 2, the agent should link the "clear road" to the action "keep driving" based on transportation commonsense to get the score.
Action Score. The Action Score (A-Score) measures the model's ability to generate appropriate and effective responses or actions based on the perceived input data and the cognitive understanding of the context. The score is assigned a value of 1 if the agent selects the correct action; otherwise, the score is set to 0.
The final Perception, Cognition, and Action scores of the agents are obtained by averaging the scores across all instances and domains in our PCA-EVAL dataset.
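A minimal sketch of this aggregation is shown below, assuming each graded instance carries binary perception/cognition/action judgments; the field names are ours, not the benchmark's.

```python
# Aggregate binary per-instance judgments into the final P/C/A scores.
from statistics import mean

def pca_scores(graded):
    return {k: mean(g[k] for g in graded)
            for k in ("perception", "cognition", "action")}

graded = [
    {"perception": 1, "cognition": 1, "action": 1},
    {"perception": 1, "cognition": 0, "action": 0},
]
print(pca_scores(graded))  # {'perception': 1.0, 'cognition': 0.5, 'action': 0.5}
```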
### Automatic Evaluation
Recent advancements have seen researchers harnessing powerful LLMs for the evaluation of language model outputs. Studies have revealed that the outcomes from LLMs can exhibit remarkable alignment with human judgments (Zheng et al., 2023; Wang et al., 2023c,b). In our investigation, we employed GPT-4 to automatically evaluate perception, cognition, and action scores based on the model's outputs. Our findings underscore a significant agreement between GPT-4 annotations and human annotator results. This is substantiated by Pearson correlation coefficients of 0.8, 0.9, and 0.95 for the perception, cognition, and action evaluations, respectively. To facilitate ongoing and future research endeavors, we share our automatic evaluation script2 for seamless adoption, which could also be improved in the future. For a detailed description of our evaluation methodology, kindly refer to Appendix C.
Footnote 2: [https://github.com/pkunlp-icler/PCA-EVAL/blob/main/pca-eval/evaluation/pca_auto_scoring.py](https://github.com/pkunlp-icler/PCA-EVAL/blob/main/pca-eval/evaluation/pca_auto_scoring.py)
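As a rough illustration of one such scoring call, the sketch below assumes the OpenAI Python client; the rubric prompt and JSON convention here are placeholders, and the authoritative prompt lives in the repository script linked above.

```python
# Illustrative GPT-4 scoring call (assumes the `openai` package and an API key).
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def auto_score(question, key_concept, answer, reason, model_output):
    prompt = (
        "Grade the agent output with binary scores.\n"
        f"Question: {question}\nKey concept: {key_concept}\n"
        f"Reference answer: {answer}\nReference reason: {reason}\n"
        f"Agent output: {model_output}\n"
        'Reply only with JSON: {"perception": 0 or 1, "cognition": 0 or 1, "action": 0 or 1}'
    )
    reply = client.chat.completions.create(
        model="gpt-4", messages=[{"role": "user", "content": prompt}]
    )
    return json.loads(reply.choices[0].message.content)
```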
### Dataset Overview
The PCA-EVAL benchmark currently comprises three domains, with a total of 300 instances, including 100 instances per domain. In our preliminary study, we find that the annotation process requires proactive thinking of the questions, actions, and corresponding answers, which makes quality control difficult. In order to ensure the quality of PCA-EVAL, every single test case has been verified by at least three authors of this paper. Although challenging, we will keep scaling this
benchmark in order to advocate further attention to end-to-end decision-making. We introduce the three domains encompassed by our dataset as follows:
Autonomous Driving. In the autonomous driving domain, instances are derived from real-world transportation scenes, which require the agent to have particular abilities such as traffic sign recognition, obstacle detection, and decision-making at intersections. The dataset aims to evaluate an agent's ability to perceive and interpret visual information while making safe and efficient driving decisions. The images are collected from the TT100K (Zhu et al., 2016) dataset, and annotators are instructed to propose an image-conditioned question that is grounded with real actions of vehicles.
Domestic Robot. The domestic assistance domain features instances from the ALFRED (Shridhar et al., 2020; Kolve et al., 2017) environment, which simulates a housekeeping robot performing tasks within a household setting. These tasks may include object manipulation, navigation, and interaction with various appliances. The environment assesses an agent's ability to understand and execute complex instructions while navigating and interacting with a dynamic environment. Annotators are asked to select one image from the randomly generated scenes in the environment, propose a question related to the items in the scene, and annotate the full information of the instance.
Open-World Game. In the open-world game domain, instances are sourced from the Minecraft environment, where agents are tasked with exploring, crafting, and surviving in a procedurally generated world. This dataset evaluates an agent's ability to reason and plan actions within a complex, open-ended environment, which often requires long-term strategizing and adaptability. Annotators receive predefined tasks from MineDojo (Fan et al., 2022) as a reference during the task generation phase. For each task, we instruct the annotator to sketch a task topology graph, exemplified in Figure 3. The task should be completed in accordance with the topological order of the graph, where the events located in the leaf nodes must be finished first. Each node in the task topology graph can be viewed as a step in the sequential decision. We list the in-domain task distribution and examples for each domain in Appendix A.
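A small sketch of ordering sub-tasks by such a topology graph is shown below, using Python's standard-library TopologicalSorter; the Minecraft sub-task names here are illustrative, not taken from the benchmark.

```python
# Order sub-tasks so that prerequisite (leaf) events come first.
from graphlib import TopologicalSorter

# Each node maps to the set of events that must be finished before it.
graph = {
    "craft_stone_pickaxe": {"collect_cobblestone", "craft_wooden_pickaxe"},
    "collect_cobblestone": {"craft_wooden_pickaxe"},
    "craft_wooden_pickaxe": {"collect_wood"},
}
print(list(TopologicalSorter(graph).static_order()))
# one valid order: ['collect_wood', 'craft_wooden_pickaxe',
#                   'collect_cobblestone', 'craft_stone_pickaxe']
```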
### Annotation Pipelines
The annotation process consists of two stages: (1) Dataset Annotation, and (2) Dataset Refinement. During the initial stage, three annotators are assigned to each domain, adhering strictly to the respective annotation guidelines. They first pinpoint the source images from each domain that are informative and meaningful so that they can write questions for each image. The annotators have the responsibility to ensure every question has only one correct answer and accurate rationales. In the subsequent stage, annotators are instructed to scrutinize the output actions and rationales presented by ChatGPT and check the annotations. This process aims to address the challenge of multiple correct answers, as ChatGPT can furnish comprehensive explanations for its actions. These explanations assist annotators in assessing the acceptability of ChatGPT's response, particularly when it deviates from the established ground truth answer. This enables annotators to refine annotations to ensure the presence of a single correct answer.
## 4 Methods
### End2End Decision Making via VLLMs
In this subsection, we detail the evaluation process for assessing state-of-the-art VLLMs, e.g., InstructBLIP, MMICL, and GPT4-Vision, on end-to-end embodied decision-making using the proposed PCA-EVAL benchmark. End2End embodied decision making is straightforward since we can directly feed the visual observation and the textual question to the multi-modal agent. As illustrated in Figure 5, the agent is prompted to output the image description and reasoning process before giving the final action.
Figure 3: Illustration of task topology graph. Events in green represent the leaf nodes of the graph.
### HOLMES: Multi-Agent Cooperation
Different from End2End embodied decision making, within HOLMES, we prompt large language models like ChatGPT-3.5 (OpenAI, 2022) and GPT4 (OpenAI, 2023b) to call different visual models or APIs to gather information about the environment.
We provide these models with descriptions of the input and output for different visual models such as the image caption model based on InstructBLIP, the object detection model based on POMP (Ren et al., 2023b), and the traffic sign detection model based on YOLO (Redmon and Farhadi, 2018). Additionally, we supply descriptions of valid APIs within the simulated environment, such as _list_nearby_mobs_in_minecraft()_ to tell what creatures the current player can see and _list_items_at_hand_in_alfred()_ to tell what item the robot is holding in its hand. Full API description files for each domain are shown in Appendix B.
These integrations enable the large language model to initiate a search for clues pertaining to a given question through a multi-turn conversation. As shown in Figure 4, the process involves alternating between invoking models or APIs to gather relevant information and analyzing the discovered clues to facilitate informed decision making. The HOLMES framework is designed to enhance cooperation and coordination among multiple agents in dynamic and complex environments.
In HOLMES, there are four key components as depicted in Figure 4: the image, the user, the LLM, and the Model/API Hub. Initially, the user poses a question about the optimal action to take based on the environment shown in the image, providing potential action choices. As the LLM cannot directly view the image, it is briefed with descriptions of available visual models and APIs supplied by the simulation environment. It is then tasked with gathering relevant data via these models and APIs to determine the appropriate action. When the LLM responds, the system checks if it has invoked a legitimate model or API, subsequently relaying the results from the invoked API. This feedback is logged into the dialogue history, allowing the LLM to analyze and form subsequent responses.
Figure 4: Three examples of HOLMES solving questions from different domains of PCA-EVAL.
Figure 5: An example of end-to-end decision making.
Once equipped with sufficient information, the LLM proposes the final action, accompanied by its underlying rationale. HOLMES emulates the detective game process, where one alternates between searching for clues using various tools and analyzing them before arriving at a conclusion.
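A minimal sketch of this clue-gathering loop is shown below. The API names mirror those mentioned above, while the stub results, the `llm` callable, and the "FINAL:" stopping convention are illustrative assumptions rather than the exact protocol used in the paper.

```python
# Sketch of the HOLMES multi-turn loop: alternate API calls and analysis.
API_HUB = {
    "list_nearby_mobs_in_minecraft": lambda: "2 pigs, 1 zombie",
    "list_items_at_hand_in_alfred": lambda: "a mug",
}

def holmes_decide(llm, question, choices, max_turns=8):
    history = [{"role": "user", "content":
                f"{question}\nChoices: {choices}\n"
                f"Available APIs: {sorted(API_HUB)}\n"
                "Call one API per turn by name, or answer 'FINAL: <choice>'."}]
    for _ in range(max_turns):
        reply = llm(history)                    # any chat-completion function
        history.append({"role": "assistant", "content": reply})
        if reply.startswith("FINAL:"):          # decision reached
            return reply.removeprefix("FINAL:").strip()
        api = API_HUB.get(reply.strip(), lambda: "invalid API name")
        history.append({"role": "user", "content": f"API result: {api()}"})
    return None  # no decision within the turn budget
```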
## 5 Experiments
### Configurations
End2End. Under this setting, the same image and prompts are provided to different VLLMs. Additionally, the non-visual information "items in hand" and "items in inventory" for the domestic and game domains is directly given to the models in the prompt, since this information is hard to perceive from the image and easy to obtain from the simulation environments. We will also make the prompts we use open-source for fair and convenient evaluation.
We compare four different models, InstructBLIP-Vicuna-13B3, MMICL-FLANT5XXL4, QwenVL-Chat5 and GPT4-Vision6. We apply default inference configurations for the corresponding models.
HOLMES. In the HOLMES framework, the LLM is required to continuously invoke various APIs and retrieve their return information. To streamline the evaluation process, we initially execute all APIs for every instance in PCA-EVAL, storing the result for each instance. This approach allows us to directly access the specific result of a given API without the need to run the model each time an evaluation is conducted. We will also make the API results open-source together with the benchmark. The description and implementation details of the APIs are listed in Appendix B.
Footnote 3: [https://github.com/salesforce/LAVIS/tree/main/projects/instructblip](https://github.com/salesforce/LAVIS/tree/main/projects/instructblip)
Footnote 4: [https://huggingface.co/BleachNick/MMICL-Instructblip-T5-xxl](https://huggingface.co/BleachNick/MMICL-Instructblip-T5-xxl)
Footnote 5: [https://huggingface.co/Qwen/Qwen-VL-Chat](https://huggingface.co/Qwen/Qwen-VL-Chat)
Footnote 6: [https://chat.openai.com](https://chat.openai.com)
We compare three LLMs: Vicuna7, ChatGPT-3.5-Turbo, and GPT48. However, we found that Vicuna models lack the capability to call various APIs for information gathering; thus, we only report results for ChatGPT and GPT4. We anticipate supplementing these results as soon as open-source models that can understand API descriptions and call different APIs accordingly become available.
Footnote 7: [https://huggingface.co/lmsys](https://huggingface.co/lmsys)
### Evaluation
PCA-EVAL assesses embodied decision-making through three distinct lenses: perception, cognition, and action. The scores reported in Table 1 rely on the consensus of three human evaluators. We compute the average kappa correlation coefficient for these evaluators, resulting in 0.91 for the Perception Score and 0.88 for the Cognition Score. These figures indicate good consistency in the evaluation process.
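For reference, averaged pairwise Cohen's kappa can be computed as sketched below, assuming scikit-learn and one binary label list per evaluator; averaging pairwise kappas in exactly this way is our reading of "average kappa correlation coefficient", not a detail from the paper.

```python
# Average pairwise Cohen's kappa across evaluators (assumes scikit-learn).
from itertools import combinations
from sklearn.metrics import cohen_kappa_score

def mean_pairwise_kappa(labels_by_evaluator):
    pairs = list(combinations(labels_by_evaluator, 2))
    return sum(cohen_kappa_score(a, b) for a, b in pairs) / len(pairs)

# Toy usage: three evaluators grading five instances.
print(mean_pairwise_kappa([[1, 0, 1, 1, 0],
                           [1, 0, 1, 1, 0],
                           [1, 0, 1, 0, 0]]))
```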
### Main Results
We evaluate various methods and models on the PCA-EVAL benchmark, as shown in Table 1.
In the upper block concerning End2End-VLLMs, the recently unveiled closed-source model, GPT-4V, outperforms existing open-source models by achieving the highest scores of 0.84, 0.74, and 0.74 in the perception, cognition, and action dimensions respectively. This performance represents a 26% action score improvement over its open-source counterpart, MMICL. The impressive performance of GPT-4V is primarily attributed to its exceptional ability to perceive visual information across different domains, particularly in the challenging game domain.
We also assessed the performance of embodied decision making using our HOLMES system.
As shown in the bottom block of the table, the HOLMES system, based on GPT4, achieves an Action Score of 0.71, matching the performance of GPT-4V (0.74). This suggests that the HOLMES system
is proficient in understanding the task goal, breaking down the larger goal into multiple smaller steps, and accurately invoking the relevant APIs to accomplish each step.
Specifically, the GPT4-HOLMES system can identify key concepts in an image through the results returned by APIs such as _list_nearby_mobs_in_minecraft()_. As a result, the system achieves an average Perception Score of 0.88, surpassing GPT-4V's 0.84. However, when compared to End2End methods, HOLMES relies on multi-step reasoning for the final decision. This approach can lead to the accumulation of reasoning errors, resulting in a lower Cognition Score in both the Domestic and Game domains.
## 6 Discussion
### Comparison Between End2End and HOLMES
We conduct an analysis and comparison of the outputs generated by the End2End method with GPT4-Vision, as well as the HOLMES method with GPT4. Our findings indicate that the End2End method effectively mitigates information loss during the modality conversion process. As illustrated in Figure 6(a), an image depicts a road with several nearby cars. GPT4-Vision is capable of discerning that these cars are situated in a safe space, thereby suggesting that the driver can continue driving.
Conversely, GPT4, while aware of the number of cars, lacks information about their spatial relation, leading it to recommend slowing down. This suggests that the End2End method is superior in perceiving certain visual features that are not captured by the APIs. On the other hand, some specialized APIs, such as the traffic sign detection model, outperform GPT4-Vision on their dedicated tasks, as they are specifically trained for them. This could enable the HOLMES method to gather more accurate information than the End2End model.
### Alignment between Agent Decisions and Human Values
We have observed instances where the decisions made by the agent contradict human values. For instance, consider the scenario depicted in Figure 6(b). The image illustrates a crosswalk devoid of pedestrians. The appropriate response in this situation would be to slow down, as caution is paramount when approaching a crosswalk, regardless of the presence or absence of pedestrians. However, upon processing the information that the crosswalk is unoccupied, ChatGPT suggests that maintaining the current speed is the optimal action, arguing that the absence of pedestrians eliminates the need to slow down. The rationale provided by ChatGPT is logical, yet it does not align with human values. We believe it is crucial for embodied agents to make decisions that are in harmony with human values, rather than solely focusing on maximizing their advantage.
### Limitation and Future Work
The current scope of PCA-EVAL is confined to merely three domains, with a cap of 100 instances per domain. Our future work aims to broaden this scope to encompass more domains and em
\begin{table}
\begin{tabular}{c|c|c c c|c c c|c c c|c c c} \hline \hline
\multirow{2}{*}{Method} & \multirow{2}{*}{Model} & \multicolumn{3}{c|}{Traffic} & \multicolumn{3}{c|}{Domestic} & \multicolumn{3}{c|}{Game} & \multicolumn{3}{c}{Average} \\
 & & P & C & A & P & C & A & P & C & A & P & C & A \\ \hline
\multirow{4}{*}{End2End} & InstructBLIP\({}^{\dagger}\) & – & – & 0.42 & – & – & 0.41 & – & – & 0.24 & – & – & 0.36 \\
 & MMICL\({}^{\dagger}\) & – & – & 0.63 & – & – & 0.51 & – & – & 0.29 & – & – & 0.48 \\
 & QwenVL-Chat\({}^{\dagger}\) & – & – & 0.59 & – & – & 0.55 & – & – & 0.24 & – & – & 0.46 \\
 & GPT-4V\({}^{\ddagger}\) & 0.75 & 0.73 & 0.78 & 0.81 & **0.69** & **0.67** & **0.95** & **0.79** & **0.77** & 0.84 & **0.74** & **0.74** \\ \hline
\multirow{2}{*}{HOLMES} & ChatGPT\({}^{\ddagger}\) & 0.75 & 0.68 & 0.66 & **0.88** & 0.52 & 0.50 & 0.78 & 0.40 & 0.36 & 0.80 & 0.53 & 0.51 \\
 & GPT4\({}^{\ddagger}\) & **0.87** & **0.82** & **0.82** & 0.85 & 0.61 & 0.56 & 0.91 & 0.77 & 0.74 & **0.88** & 0.73 & 0.71 \\ \hline \hline
\end{tabular}
\end{table}
Table 1: Main results on PCA-EVAL. Models with \({}^{\dagger}\) are fully open-source. Models with \({}^{\ddagger}\) only provide API access. P, C, and A represent Perception, Cognition, and Action scores, respectively. For the open-source models in the End2End setting, we find it hard to prompt them to output correct cross-modal reasoning information, so their Perception and Cognition scores are not reported.
bodied environments where MLLMs could keep getting feedback. Furthermore, we plan to increase the number of instances for both the existing and newly introduced domains.
## 7 Conclusion
In this study, we present PCA-EVAL, a comprehensive evaluation benchmark for embodied decision-making that gauges performance in perception, cognition, and action, thereby offering an all-encompassing measure for various embodied agents. We conduct a systematic comparison between End2End embodied decision-making and HOLMES, a multi-agent cooperation framework developed by us. Our findings reveal that MLLM, when applied with the end2end method, surpasses the top-performing model in HOLMES, GPT-4, in terms of decision accuracy and cognition score. However, it is crucial to underscore that this superior performance is specific to the GPT4-Vision model, which significantly outperforms the open-source state-of-the-art VLLMs. These results and subsequent analysis underscore the necessity for ongoing exploration in embodied decision-making and the development of open-source MLLMs to ensure wider accessibility and progress in the field.
Figure 6: Case studies. |
2305.09392 | Strömgren photometric metallicity of the Small Magellanic Cloud
stars using Gaia DR3-XP spectra | Observational studies have identified several sub-structures in different
regions of the Magellanic Clouds, the nearest pair of interacting dwarf
satellites of the Milky Way. By studying the metallicity of the sources in
these sub-structures, we aim to shed light on the possible origin of these
sub-structures. Spectroscopic metallicities exist only for a few thousand
sources, mostly giant stars located in specific regions of the galaxies. These
metallicities come from different instruments at various spectral resolutions,
and systematic uncertainties hamper comparisons and draw firm conclusions about
their origin. The third data release of \textit{Gaia} has provided us with
$\sim$ 0.17 million XP spectra of the different stellar populations in the SMC
alone as faint as $\sim$ 18 mags in the G band, which are spread across $\sim$
10$^\circ$ from the SMC centre. We aim to determine the metallicities of these
sources based on synthetic Str\"{o}mgren photometry derived from XP spectra and
produce a high-resolution metallicity map of the SMC. Our metallicity gradient
estimate of the SMC turns out to be --0.062 $\pm$ 0.009 dex/deg. This is
comparable with the previous estimates, which also validate our method of
metallicity estimation. We aim to apply this method to other stellar
populations and to the LMC to create a high-resolution metallicity map of the
Magellanic Clouds. | Abinaya O. Omkumar, Smitha Subramanian, Maria-Rosa L. Cioni, Jos de Bruijne | 2023-05-16T12:27:25Z | http://arxiv.org/abs/2305.09392v1 | [
###### Abstract
Observational studies have identified several sub-structures in different regions of the Magellanic Clouds, the nearest pair of interacting dwarf satellites of the Milky Way. By studying the metallicity of the sources in these sub-structures, we aim to shed light on the possible origin of these sub-structures. Spectroscopic metallicities exist only for a few thousand sources, mostly giant stars located in specific regions of the galaxies. These metallicities come from different instruments at various spectral resolutions, and systematic uncertainties hamper comparisons and draw firm conclusions about their origin. The third data release of _Gaia_ has provided us with \(\sim\) 0.17 million XP spectra of the different stellar populations in the SMC alone as faint as \(\sim\) 18 mags in the G band, which are spread across \(\sim\) 10\({}^{\circ}\) from the SMC centre. We aim to determine the metallicities of these sources based on synthetic Stromgren photometry derived from XP spectra and produce a high-resolution metallicity map of the SMC. Our metallicity gradient estimate of the SMC turns out to be \(-\)0.062 \(\pm\) 0.009 dex/deg. This is comparable with the previous estimates, which also validates our method of metallicity estimation. We aim to apply this method to other stellar populations and to the LMC to create a high-resolution metallicity map of the Magellanic Clouds.
galaxies: Magellanic Clouds, galaxies: abundances, galaxies: evolution

Abinaya O. Omkumar\({}^{1,2,3}\), Smitha Subramanian\({}^{1}\), Maria-Rosa L. Cioni\({}^{2}\), Jose de Bruijne\({}^{4}\)

_Dynamical Masses of Local Group Galaxies: IAU Symposium 379_, P. Bonifacio, M.-R. Cioni & F. Hammer, eds., 2023
## 1 Introduction
The Large Magellanic Cloud (LMC) and the Small Magellanic Cloud (SMC) are both gas-rich interacting dwarf irregulars. Evidence shows that the LMC-SMC pair, also known as the Magellanic Clouds (MCs), interacts with the Milky Way, and their proximity (\(\sim\) 55 kpc) makes them an ideal laboratory in the Local Group to study galaxy interaction and evolution processes in detail. Previous studies (Mackey et al., 2016; Pieres et al., 2017; Mackey et al., 2018; Belokurov & Erkal, 2019; El Youssoufi et al., 2021) have identified multiple signatures of interactions in the form of stellar sub-structures, over-densities and gaseous structures in and around the MCs. Apart from these above-mentioned sub-structures, some dual populations of intermediate-age/old stars (red clump; Omkumar et al., 2021 and red giant branch stars; Dobbie et al., 2014) have also been identified and studied. The results showed that they have different kinematics and are located at different distances. This could suggest that the dual populations formed during the interaction between the MCs and that the kinematics of the stripped population have been altered. If so, we expect similar metallicity ([Fe/H]) among the sources in these populations. Investigating the nature and origin of these sub-structures is essential for a comprehensive understanding of the consequences of dynamical interactions. But to do so, we need a metallicity estimate of a homogeneous sample spread across the entire SMC, including its outskirts, where a plethora of sub-structures have been identified. Until now, we have spectroscopic metallicities of only a few thousand sources in the SMC from various instruments with different spectral resolutions. Comparing these estimates, which have different systematic
uncertainties, is not trivial, and the results obtained will not be statistically significant. Hence, we need to use another standard indirect method to obtain the [Fe/H] of a larger sample.
Grady et al. (2021) presented a machine-learning method to obtain photometric metallicity estimates for the Magellanic Cloud red giants from _Gaia_ Data Release 2 (DR2). Their predicted metallicity estimates of the MCs were comparable with the previous studies, but their method depends on their training sample. Also, the DR2 data was incomplete in the central regions of the MCs, and after applying all their quality filters, their final sample consisted of \(\sim\) 36,000 SMC sources. In the latest _Gaia_ data release (DR3), \(\sim\) 0.17 million XP spectra of different stellar populations, spread across \(\sim\) 10\({}^{\circ}\) of the SMC, are provided as ancillary data. This is the largest spectroscopic dataset of the SMC obtained so far. In this work, we used a subset of this dataset (\(\sim\) 80,000 giant stars) to estimate metallicity using the synthetic Stromgren photometry from the XP spectra. Stromgren photometry has proved to be one of the reliable methods to determine the metallicity of giant and sub-giant stars (Hilker, 2000). Their calibration is based on a homogeneous sample of globular clusters (\(\omega\) Centauri, M22 and M55), which are mostly metal-poor. When we apply this calibration relation to a sample consisting of both metal-poor giants and relatively metal-rich younger giants, the calibration would work poorly on the metal-rich giants, since such calibrators were absent (Lombardo et al., 2021). This is due to age-metallicity degeneracy. Although photometric metallicities have proven to be a great tool for obtaining metallicity estimates of large samples, one must be careful to consider the age-metallicity degeneracy. In the case of our sample, it does not include many younger giants, so the effect of age-metallicity degeneracy is less likely to affect our estimates.
## 2 Gaia sample selection
We selected _Gaia_ DR3 sources within \(\sim\) 10\({}^{\circ}\) of the SMC centre (RA = 12\(\fdg\)80 and Dec = -73\(\fdg\)15) with magnitudes G \(<\) 20.5. This resulted in 4,710,809 sources, which further reduced to 171,060 sources having Gaia DR3 XP spectra. We also applied additional criteria on parallax and proper motions (Gaia Collaboration et al., 2021; Arenou et al., 2018; Omkumar et al., 2021) to select only probable SMC sources. Our final sample contained 151,530 sources. We used this source list to query the XP spectra from the Gaia archive (Datalink products). Currently, we have estimated the [Fe/H] for only a subset (\(\sim\) 80,000 sources) of the final sample in the dereddened colour range 0.5 \(<\) (b - y)\({}_{0}\)\(<\) 1.1 mag, for which the [Fe/H] calibrations were available in the literature. The spatial distribution of this subset of data is shown in Figure 1.
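A hedged sketch of this selection as an ADQL cone search via astroquery is shown below; the parallax and proper-motion criteria from the cited works are applied downstream and omitted here, and the exact column filters are illustrative.

```python
# Cone search for SMC candidates with XP spectra (assumes astroquery).
from astroquery.gaia import Gaia

query = """
SELECT source_id, ra, dec, phot_g_mean_mag
FROM gaiadr3.gaia_source
WHERE 1 = CONTAINS(POINT('ICRS', ra, dec),
                   CIRCLE('ICRS', 12.80, -73.15, 10.0))
  AND phot_g_mean_mag < 20.5
  AND has_xp_continuous = 'True'
"""
job = Gaia.launch_job_async(query)
smc_candidates = job.get_results()
print(len(smc_candidates))  # sources with XP spectra within ~10 deg
```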
## 3 Synthetic Photometry using GaiaXPy
We derived Stromgren magnitudes (v, b and y) using the GaiaXPy tool provided by the _Gaia_ consortium. This tool allows the generation of synthetic photometry in a set of desired systems from the input internally-calibrated continuously-represented mean spectra (see Montegriffo et al. (2022) for more details). We then corrected for the interstellar extinction by translating the extinction coefficient in the visual band A\({}_{V}\) into A\({}_{v}\), A\({}_{b}\) and A\({}_{y}\), where A\({}_{V}\)=A\({}_{0}\)/1.003 (Sale et al., 2014) and A\({}_{0}\) is the extinction at 547.7 nm. Using these extinction-corrected magnitudes, we proceeded to estimate the [Fe/H].
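As an illustration, the synthetic photometry step can look like the following sketch; we assume here that the XP continuous mean spectra have been downloaded from the archive to a local file (the file name is hypothetical) and that GaiaXPy exposes a Stromgren entry in its PhotometricSystem enumeration.

```python
# A minimal sketch, assuming GaiaXPy's generate() API and a local file of
# XP continuous mean spectra retrieved via the archive's Datalink products.
from gaiaxpy import generate, PhotometricSystem

synthetic_phot = generate(
    'xp_continuous_mean_spectrum.csv',              # hypothetical file name
    photometric_system=PhotometricSystem.Stromgren,
    save_file=False,
)
# synthetic_phot holds the synthetic v, b, y magnitudes, which must be
# extinction-corrected before applying the [Fe/H] calibration below.
```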
## 4 Estimation of [Fe/H]
Hilker (2000) provided the calibration equation to calculate the iron abundance [Fe/H], applicable to sources in the dereddened colour range 0.5 \(<\) (b - y)\({}_{0}\)\(<\) 1.1 mag; it is reproduced here as equation 1. We used our extinction-corrected colours in this equation and estimated the [Fe/H] of each source. We plotted a 2D Hess diagram of the estimated [Fe/H] (dex) as a function of radius (degrees) from the SMC centre in the left panel of Figure 2, where increasing stellar density is indicated by the colour scale from blue to yellow. The figure shows a large spread in the [Fe/H] values, but the stellar density indicates that very few sources have very low or very high [Fe/H]. To better characterize these values, we binned the dataset in concentric shells of 0.5\({}^{\circ}\) from the SMC centre and estimated the median abundance and its standard error in each bin. The median [Fe/H] and median radius of each concentric shell are plotted in the right panel of Figure 2, together with the linear fit (performed in Python) used to estimate the metallicity gradient. We obtain a negative metallicity gradient of -0.062 \(\pm\) 0.009 dex/deg from the centre to the SMC outskirts, consistent with previous studies suggesting that the central SMC is metal-rich. We also note that in the outermost bin (\(>8^{\circ}\)) the metallicity increases slightly and reaches a median value similar to that of the inner regions. This could be due to the smaller number of stars in the outskirts, but further investigation is required to confirm it.
\[[Fe/H]=\frac{m_{1,0}+a_{1}\,(b-y)_{0}+a_{2}}{a_{3}\,(b-y)_{0}+a_{4}} \tag{1}\]
Figure 1: Spatial distribution of the selected sources in the dereddened colour range \(0.5<\) (b – y)\({}_{0}<1.1\) mag in the XY plane centred on the SMC. The increasing number density is shown with the colour scale from blue to yellow.
Figure 2: _Left Panel_: Estimated [Fe/H] of the subset of selected (red in Figure 1) Gaia DR3 SMC sources as a function of radius. _Right Panel_: Median metallicities for the sources in the left panel as a function of radius. The linear fit corresponds to a gradient of –0.062 \(\pm\) 0.009 dex/deg.
where
\[m_{1,0}=(v-b)_{0}-(b-y)_{0} \tag{2}\]
and
\(a_{1}\) = -1.277 \(\pm\) 0.050; \(a_{2}\) = 0.331 \(\pm\) 0.035; \(a_{3}\) = 0.324 \(\pm\) 0.035; \(a_{4}\) = -0.032 \(\pm\) 0.025
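For concreteness, equations 1 and 2 with these coefficients can be applied as in the sketch below, where v0, b0 and y0 are assumed to be the extinction-corrected synthetic magnitudes from the previous section.

```python
# A minimal sketch of equations (1)-(2) with the Hilker (2000) coefficients.
# v0, b0, y0 are assumed to be extinction-corrected Stromgren magnitudes.
import numpy as np

A1, A2, A3, A4 = -1.277, 0.331, 0.324, -0.032

def feh_from_stromgren(v0, b0, y0):
    by0 = b0 - y0                          # dereddened (b - y) colour
    m10 = (v0 - b0) - by0                  # dereddened m1 index, equation (2)
    feh = (m10 + A1 * by0 + A2) / (A3 * by0 + A4)   # equation (1)
    in_range = (by0 > 0.5) & (by0 < 1.1)   # calibration validity range
    return np.where(in_range, feh, np.nan)
```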
We are also estimating the uncertainties on our [Fe/H] values using error propagation; these will provide an additional constraint and lead to more statistically significant results.
## 5 Summary and Ongoing work
In this contribution, we presented the method and preliminary results from our investigation of the metallicity distribution across the SMC, where we estimate [Fe/H] from _Gaia_ XP spectra using synthetic Stromgren photometry. The resulting [Fe/H] gradient of -0.062 \(\pm\) 0.009 dex/deg is comparable with the results of previous studies, such as -0.075 \(\pm\) 0.011 dex/deg from Dobbie et al. (2014) and -0.031 \(\pm\) 0.005 dex/deg from Choudhury et al. (2020). Our study also validates the potential of _Gaia_ XP spectra for estimating individual [Fe/H] values, which are needed for different science cases. As mentioned in the previous sections, we have only explored a subset of the data available for the SMC. We also aim to apply this method to different types of stars, not only giants, within the galaxy to increase the size of our metallicity sample and reduce the uncertainties in our measurements. One of our next steps is to produce a high spatial resolution photometric metallicity map from the estimated values. We will then identify the sources in different substructures and compare them with those of the main body of the SMC to shed light on their possible origins.
We are also applying the general method developed in this study to stellar populations of the Large Magellanic Cloud, and it can be extended to other systems as well. This will provide us with the largest and most homogeneous abundance estimates of the MCs, which can be used to analyse different stellar sub-structures, the metallicities of young and old stellar populations, the metallicities in different regions of the MCs, and so on.
## Acknowledgement
AOO is grateful to ESA for support via the Archival Research Visitor Programme during her stay at ESTEC.
|
2303.15435 | The Stable Signature: Rooting Watermarks in Latent Diffusion Models | Generative image modeling enables a wide range of applications but raises
ethical concerns about responsible deployment. This paper introduces an active
strategy combining image watermarking and Latent Diffusion Models. The goal is
for all generated images to conceal an invisible watermark allowing for future
detection and/or identification. The method quickly fine-tunes the latent
decoder of the image generator, conditioned on a binary signature. A
pre-trained watermark extractor recovers the hidden signature from any
generated image and a statistical test then determines whether it comes from
the generative model. We evaluate the invisibility and robustness of the
watermarks on a variety of generation tasks, showing that Stable Signature
works even after the images are modified. For instance, it detects the origin
of an image generated from a text prompt, then cropped to keep $10\%$ of the
content, with $90$+$\%$ accuracy at a false positive rate below 10$^{-6}$. | Pierre Fernandez, Guillaume Couairon, Hervé Jégou, Matthijs Douze, Teddy Furon | 2023-03-27T17:57:33Z | http://arxiv.org/abs/2303.15435v2 | # The Stable Signature: Rooting Watermarks in Latent Diffusion Models
###### Abstract
Generative image modeling enables a wide range of applications but raises ethical concerns about responsible deployment. This paper introduces an active strategy combining image watermarking and Latent Diffusion Models. The goal is for all generated images to conceal an invisible watermark allowing for future detection and/or identification. The method quickly fine-tunes the latent decoder of the image generator, conditioned on a binary signature. A pre-trained watermark extractor recovers the hidden signature from any generated image and a statistical test then determines whether it comes from the generative model. We evaluate the invisibility and robustness of the watermarks on a variety of generation tasks, showing that Stable Signature works even after the images are modified. For instance, it detects the origin of an image generated from a text prompt, then cropped to keep 10% of the content, with 90+% accuracy at a false positive rate below 10\({}^{-6}\).
## 1 Introduction
Recent progress in generative modeling and natural language processing has made it easy to create and manipulate images in a photorealistic manner. For instance, DALL-E 2 [60] or Stable Diffusion [64] generate images from text, which are often indistinguishable from real artworks. They have given birth to many image editing tools like ControlNet [100], Instruct-Pix2Pix [7], and others [13, 27, 67]. They are establishing themselves as creative tools for artists, designers, and the general public.
While this is a great step forward for generative AI, it raises new ethical concerns. Indeed, their sophistication is such that it will soon be impossible to distinguish AI generations from real pictures. For example, a generated picture recently won an art competition [28]. Not being able to identify that images are generated by AI makes it difficult to remove them from certain platforms and to ensure their compliance with ethical standards. The lack of traceability also opens the door to new threats such as deep fakes, impersonation or copyright usurpation [8, 17].
A baseline solution for identifying generated images is forensics: passive methods that detect generated or manipulated images. Alternatively, existing watermarking methods can be added on top of image generation. They are based on the idea of invisibly embedding a secret message into the image, which can then be extracted and used to identify the image. This has several drawbacks: if the model leaks or is open-sourced, the post-generation watermarking is easy to remove. The open-source Stable Diffusion [66] is a case in point, since removing the watermark amounts to commenting out a single line in the source code.
Our Stable Signature method merges watermarking into the generation process itself, without any architectural changes. It adjusts the pre-trained generative model such that all the images it produces conceal a given watermark. There are several advantages to this approach [45, 95]. It protects both the generator and its productions. Besides, it does not require additional processing of the generated image, which makes the watermarking computationally lighter, straightforward, and secure. Model providers
Figure 1: Overview. The latent decoder can be fine-tuned to pre-emptively embed a signature into all generated images.
would then be able to deploy their models to different user groups with a unique watermark, and monitor that they are used in a responsible manner. They could also give art platforms, news outlets and other sharing platforms the ability to detect when an image has been generated by their AI.
We focus on Latent Diffusion Models (LDM) [64] since they can perform a wide range of generative tasks. This work shows that simply fine-tuning a small part of the generative model - the decoder that generates images from the latent vectors - is enough to natively embed a watermark into all generated images. Stable Signature does not require any architectural change and does not modify the diffusion process. Hence it is compatible with most of the LDM-based generative methods [7, 13, 59, 67, 100]. The fine-tuning stage is performed by back-propagating a combination of a perceptual image loss and the hidden message loss from a watermark extractor back to the LDM decoder. We pre-train the extractor with a simplified version of the deep watermarking method HiDDeN [104].
We create an evaluation benchmark close to real-world situations where images may be edited. The tasks are: detection of AI-generated images, and tracing models from their generations. For instance, we detect \(90\%\) of images generated with the generative model, even if they are cropped to \(10\%\) of their original size, while flagging only one false positive every \(10^{6}\) images. To ensure that the model's utility is not weakened, we show that the FID [33] score of the generation is not affected and that the generated images are perceptually indistinguishable from the ones produced by the original model. This is done over several tasks involving LDM (text-to-image, inpainting, editing, etc.).
As a summary, (1) we efficiently merge watermarking into the generation process of LDMs, in a way that is compatible with most of the LDM-based generative methods; (2) we demonstrate how it can be used to detect and trace generated images, through a real-world evaluation benchmark; (3) we compare to post-hoc watermarking methods, showing that our method is competitive while being more secure and efficient; and (4) we evaluate robustness to intentional attacks.
## 2 Problem Statement & Background
Figure 1 shows a model provider _Alice_ who deploys a latent diffusion model to users _Bobs_. Stable Signature embeds a binary signature into the generated images. This section derives how Alice can use this signature for two scenarios:
* _Detection: "Is it generated by my model?"_. Alice detects if an image was generated by her model. As many generations as possible should be flagged, while controlling the probability of flagging a natural image.
* _Identification: "Who generated this image?"_. Alice monitors who created each image, while avoiding mistakenly identifying a Bob who did not generate the image.
### Image watermarking for detection
Alice embeds a \(k\)-bit binary signature into the generated images. The watermark extractor then decodes messages from the images it receives and detects when the message is close to Alice's signature. An example application is to block AI-generated images on a content sharing platform.
Statistical test. Let \(m\in\{0,1\}^{k}\) be Alice's signature. We extract the message \(m^{\prime}\) from an image \(x\) and compare it to \(m\). As done in previous works [45, 94], the detection test relies on the number of matching bits \(M(m,m^{\prime})\): if
\[M\left(m,m^{\prime}\right)\geq\tau\ \ \text{where}\ \ \tau\in\{0,\dots,k\}, \tag{1}\]
then the image is flagged. This provides a level of robustness to imperfections of the watermarking.
Formally, we test the statistical hypothesis \(H_{1}\): "\(x\) was generated by Alice's model" against the null hypothesis \(H_{0}\): "\(x\) was not generated by Alice's model". Under \(H_{0}\) (_i.e_. for vanilla images), we assume that bits \(m^{\prime}_{1},\dots,m^{\prime}_{k}\) are (i.i.d.) Bernoulli random variables with parameter \(0.5\). Then \(M(m,m^{\prime})\) follows a binomial distribution with parameters (\(k\), \(0.5\)). We verify this assumption experimentally in App. B.3. The False Positive Rate (FPR) is the probability that \(M(m,m^{\prime})\) takes a value bigger than the threshold \(\tau\). It is obtained from the CDF of the binomial distribution, and a closed-form can be written with the regularized incomplete beta function \(I_{x}(a;b)\):
\[\text{FPR}(\tau)=\mathbb{P}\left(M(m,m^{\prime})>\tau\,|\,H_{0}\right)=\frac{1}{2^{k}}\sum_{i=\tau+1}^{k}\binom{k}{i}=I_{1/2}(\tau+1,k-\tau). \tag{2}\]
### Image watermarking for identification
Alice now embeds a signature \(m^{(i)}\) drawn randomly from \(\{0,1\}^{k}\) into the model distributed to \(\text{Bob}^{(i)}\) (for \(i=1\cdots N\), with \(N\) the number of Bobs). Alice can trace any misuse of her model: generated images violating her policy (gore content, deepfakes) are linked back to the specific Bob by comparing the extracted message to Bobs' signatures.
Statistical test. We compare the message \(m^{\prime}\) from the watermark extractor to \(\left(m^{(1)},\dots,m^{(N)}\right)\). There are now \(N\) detection hypotheses to test. If all \(N\) hypotheses are rejected, we conclude that the image was not generated by any of the models. Otherwise, we attribute the image to \(\text{argmax}_{i=1\dots N}M\left(m^{\prime},m^{(i)}\right)\). Compared to the detection task, false positives are more likely since there are \(N\) tests. The global FPR at a given threshold \(\tau\) is:
\[\text{FPR}(\tau,N)=1-\left(1-\text{FPR}(\tau)\right)^{N}\approx N.\text{FPR}( \tau). \tag{3}\]
Equation (3) (resp. (2)) is used in reverse: we find the threshold \(\tau\) that achieves a required FPR for identification (resp. detection). Note that these formulae hold only under the assumption of i.i.d. Bernoulli bits extracted from vanilla images. This crucial point is enforced in the next section.
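Both tests reduce to tail probabilities of a binomial distribution and are simple to implement; the sketch below (our own illustration) evaluates the FPR with scipy's regularized incomplete beta function. It follows the \(M\geq\tau\) convention of the test in Eq. (1) rather than the strict inequality written in Eq. (2); under that convention, and with \(k=48\), it reproduces the thresholds quoted in Sec. 4 (\(\tau=41\) for \(N=1\), \(\tau=44\) for \(N=1000\) at FPR \(=10^{-6}\)).

```python
# A minimal sketch of the detection/identification tests. With the M >= tau
# convention of Eq. (1), FPR(tau) = I_{1/2}(tau, k - tau + 1).
from scipy.special import betainc

def fpr_detection(tau: int, k: int = 48) -> float:
    if tau <= 0:
        return 1.0  # every image is flagged
    return betainc(tau, k - tau + 1, 0.5)

def fpr_identification(tau: int, n_users: int, k: int = 48) -> float:
    return 1.0 - (1.0 - fpr_detection(tau, k)) ** n_users  # Eq. (3)

def threshold_for(target_fpr: float, n_users: int = 1, k: int = 48) -> int:
    # Smallest threshold whose (global) FPR stays below the target.
    for tau in range(k + 1):
        if fpr_identification(tau, n_users, k) <= target_fpr:
            return tau
    return k
```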
## 3 Method
Stable Signature modifies the generative network so that the generated images have a given signature through a fixed watermark extractor. It is trained in two phases. First, we create the watermark extractor network \(\mathcal{W}\). Then, we fine-tune the Latent Diffusion Model (LDM) decoder \(\mathcal{D}\), such that all generated images have a given signature through \(\mathcal{W}\).
### Pre-training the watermark extractor
We use HiDDeN [104], a classical method in the deep watermarking literature. It jointly optimizes the parameters of watermark encoder \(\mathcal{W}_{E}\) and extractor network \(\mathcal{W}\) to embed \(k\)-bit messages into images, robustly to transformations that are applied during training. We discard \(\mathcal{W}_{E}\) after training, since only \(\mathcal{W}\) serves our purpose.
Formally, \(\mathcal{W}_{E}\) takes as inputs a cover image \(x_{o}\in\mathbb{R}^{W\times H\times 3}\) and a \(k\)-bit message \(m\in\{0,1\}^{k}\). Similar to ReDMark [2], \(\mathcal{W}_{E}\) outputs a residual image \(\delta\) of the same size as \(x_{o}\), that is multiplied by a factor \(\alpha\) to produce watermarked image \(x_{w}=x_{o}+\alpha\delta\). At each optimization step an image transformation \(T\) is sampled from a set \(\mathcal{T}\) that includes common image processing operations such as cropping and JPEG compression1. A "soft" message is extracted from the transformed image: \(m^{\prime}=\mathcal{W}(T(x_{w}))\) (at inference time, the decoded message is given by the signs of the components of \(m^{\prime}\)). The _message loss_ is the Binary Cross Entropy (BCE) between \(m\) and the sigmoid \(\sigma(m^{\prime})\):
Footnote 1: The transformation needs to be differentiable in pixel space. This is not the case for JPEG compression so we use the forward attack simulation layer introduced by Zhang [97].
\[\mathcal{L}_{m}=-\sum_{i=1}^{k}\left[m_{i}\cdot\log\sigma(m^{\prime}_{i})+(1-m_{i})\cdot\log(1-\sigma(m^{\prime}_{i}))\right].\]
The network architectures are kept simple to ease the LDM fine-tuning in the second phase. They are the same as HiDDeN [104] (see App. A.1) with two changes.
First, since \(\mathcal{W}_{E}\) is discarded, its perceptual quality is not as important, so neither the perceptual loss nor the adversarial network is needed. Instead, the distortion is constrained by a \(\tanh\) function on the output of \(\mathcal{W}_{E}\) and by the scaling factor \(\alpha\). This improves the bit accuracy of the recovered message and makes it possible to increase its size \(k\).
Second, we observed that \(\mathcal{W}\)'s output bits for vanilla images are correlated and highly biased, which violates the assumptions of Sec. 2.1. Therefore we remove the bias and decorrelate the outputs of \(\mathcal{W}\) by applying a PCA whitening transformation (more details in App. A.1).
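The whitening can be fit once, offline, from extractor outputs collected on a set of vanilla images; the numpy sketch below is our own illustration of such a procedure, not necessarily the authors' exact implementation.

```python
# A minimal sketch of fitting a PCA whitening transform to the extractor's
# raw k-dimensional outputs on vanilla (non-watermarked) images.
# Assumes the empirical covariance is full rank.
import numpy as np

def fit_pca_whitening(outputs):
    """outputs: (n_images, k) array of raw extractor outputs."""
    mu = outputs.mean(axis=0)                    # removes the bias
    cov = np.cov(outputs - mu, rowvar=False)
    eigval, eigvec = np.linalg.eigh(cov)
    whiten = np.diag(eigval ** -0.5) @ eigvec.T  # decorrelates and rescales
    return mu, whiten

def apply_whitening(outputs, mu, whiten):
    return (outputs - mu) @ whiten.T             # whitened soft messages
```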
### Fine-tuning the generative model
In LDM, the diffusion happens in the latent space of an auto-encoder. The latent vector \(z\) obtained at the end of the diffusion is input to decoder \(\mathcal{D}\) to produce an image. Here we fine-tune \(\mathcal{D}\) such that the image contains a given message \(m\) that can be extracted by \(\mathcal{W}\). Stable Signature is compatible with many generative tasks, since modifying only \(\mathcal{D}\) does not affect the diffusion process.
First, we fix the signature \(m=(m_{1},\ldots,m_{k})\in\{0,1\}^{k}\). The fine-tuning of \(\mathcal{D}\) into \(\mathcal{D}_{m}\) is inspired by the original training of the auto-encoder in LDM [64].
Training image \(x\in\mathbb{R}^{H\times W\times 3}\) is fed to the LDM encoder \(\mathcal{E}\) that outputs activation map \(z=\mathcal{E}(x)\in\mathbb{R}^{h\times w\times c}\), downsampled by a power-of-two factor \(f=H/h=W/w\). The decoder reconstructs an image \(x^{\prime}=\mathcal{D}_{m}(z)\) and the extractor recovers \(m^{\prime}=\mathcal{W}(x^{\prime})\). The _message loss_ is the BCE between \(m^{\prime}\) and the original \(m\): \(\mathcal{L}_{m}=\mathrm{BCE}(\sigma\left(m^{\prime}\right),m)\).
In addition, the original decoder \(\mathcal{D}\) reconstructs the image without watermark: \(x^{\prime}_{o}=\mathcal{D}(z)\). The _image perceptual loss_\(\mathcal{L}_{\mathrm{i}}\) between \(x^{\prime}\) and \(x^{\prime}_{o}\), controls the distortion. We use the Watson-VGG perceptual loss introduced by Czolbe [15], an improved version of LPIPS [101]. It is essential that the decoder learns luminance and contrast masking to add less perceivable watermarks.
The weights of \(\mathcal{D}_{m}\) are optimized in a few backpropagation steps to minimize
\[\mathcal{L}=\mathcal{L}_{\mathrm{m}}+\lambda_{\mathrm{i}}\;\mathcal{L}_{ \mathrm{i}}. \tag{4}\]
This is done over \(100\) iterations with the AdamW optimizer [48] and batches of size \(4\); the fine-tuning sees _less than 500 images_ and takes _one minute on a single GPU_. The learning rate follows a cosine annealing schedule with \(20\) iterations of linear warmup to \(10^{-4}\) and decays to \(10^{-6}\). The factor \(\lambda_{\mathrm{i}}\) in (4) is set to \(2.0\) by default.
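One optimization step of this procedure can be sketched as follows; `decoder`, `frozen_decoder`, `encoder`, `extractor` and `perceptual_loss` are hypothetical stand-ins for \(\mathcal{D}_{m}\), \(\mathcal{D}\), \(\mathcal{E}\), \(\mathcal{W}\) and the Watson-VGG loss.

```python
# A minimal sketch of one fine-tuning step minimizing Eq. (4). All module
# names are hypothetical stand-ins; m is a float tensor of 0/1 bits with the
# same shape as the extractor output.
import torch
import torch.nn.functional as F

def finetune_step(decoder, frozen_decoder, encoder, extractor,
                  perceptual_loss, x, m, opt, lam_i=2.0):
    with torch.no_grad():
        z = encoder(x)               # latents from the frozen LDM encoder
        x_orig = frozen_decoder(z)   # reconstruction without watermark
    x_w = decoder(z)                 # watermarked reconstruction
    loss_m = F.binary_cross_entropy_with_logits(extractor(x_w), m)
    loss_i = perceptual_loss(x_w, x_orig)
    loss = loss_m + lam_i * loss_i   # Eq. (4)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```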
Figure 2: Steps of the method. (a) We pre-train a watermark encoder \(\mathcal{W}_{E}\) and extractor \(\mathcal{W}\), to extract binary messages. (b) We fine-tune the decoder \(\mathcal{D}\) of the LDM’s auto-encoder with a fixed signature \(m\) such that all the generated images (c) lead to \(m\) through \(\mathcal{W}\).
## 4 Text-to-Image Watermarking Performance
This section shows the potential of our method for detection and identification of images generated by a Stable-Diffusion-like model [64]2. We apply generative models watermarked with \(48\)-bit signatures to prompts from the MS-COCO [46] validation set. We evaluate detection and identification on the outputs, as illustrated in Figure 1.
Footnote 2: Due to potential concerns associated with pre-existing third-party generative models, such as Stable Diffusion or LDMs, we refrain from experimenting with these models and instead use a large diffusion model (2.2B parameters) trained on a proprietary dataset of 330M image-text pairs.
We evaluate their robustness to different transformations applied to generated images: strong cropping (\(10\%\) of the image remaining), brightness shift (strength factor \(2.0\)), as well as a combination of crop \(50\%\), brightness shift \(1.5\) and JPEG \(80\). This covers typical geometric and photometric edits (see Fig. 5 for visual examples).
The performance is partly obtained from experiments and partly by extrapolating small-scale measurements.
### Detection results
For detection, we fine-tune the decoder of the LDM with a random key \(m\), generate \(1000\) images and use the test of Eq. (1). We report the tradeoff between the True Positive Rate (TPR), _i.e_. the probability of flagging a generated image, and the FPR, while varying \(\tau\in\{0,..,48\}\). For instance, for \(\tau=0\), we flag all images, so \(\text{FPR}=1\) and \(\text{TPR}=1\). The TPR is measured directly. In contrast, the FPR is inferred from Eq. (2), because it would otherwise be too small to be measured on reasonably sized problems (this approximation is validated experimentally in App. B.4). The experiment is run on \(10\) random signatures and we report averaged results.
Figure 3 shows the tradeoff under image transformations. For example, when the generated images are not modified, Stable Signature detects \(99\%\) of them, while only \(1\) vanilla image out of \(10^{9}\) is flagged. At the same \(\text{FPR}=10^{-9}\), Stable Signature detects \(84\%\) of generated images for a crop that keeps \(10\%\) of the image, and \(65\%\) for a transformation that combines a crop, a color shift, and a JPEG compression. For comparison, we report results of a state-of-the-art passive method [12], applied on resized and compressed images. As expected, these baseline results have orders of magnitude larger FPRs than Stable Signature, which actively marks the content.
### Identification results
Each Bob has its own copy of the generative model. Given an image, the goal is to find if any of the \(N\) Bobs created it (detection) and if so, which one (identification). There are \(3\) types of error: _false positive_: flag a vanilla image; _false negative_: miss a generated image; _false accusation_: flag a generated image but identify the wrong user.
For evaluation, we fine-tune \(N^{\prime}=1000\) models with random signatures. Each model generates \(100\) images. For each of these \(100\)k watermarked images, we extract the Stable Signature message, compute the matching score with all \(N\) signatures and select the user with the highest score. The image is predicted to be generated by that user if this score is above threshold \(\tau\). We determined \(\tau\) such that \(\text{FPR}=10^{-6}\), see Eq. (3). For example, for \(N=1\), \(\tau=41\) and for \(N=1000\), \(\tau=44\). Accuracy is extrapolated beyond the \(N^{\prime}\) users by adding additional signatures and having \(N>N^{\prime}\) (_e.g_. users that have not generated any images).
Figure 4 reports the per-transformation identification accuracy. For example, we identify a user among \(N\)=\(10^{5}\) with \(98\%\) accuracy when the image is not modified. For the combined edit, this becomes \(40\%\), which may still be dissuasive: if a user generates \(3\) images, they will be identified \(80\%\) of the time. We observe that at this scale, the false accusation rate is zero, _i.e_. we never identify the wrong user. This is because \(\tau\) is set high to avoid false positives, which also makes false accusations unlikely.
Figure 4: **Identification results**. Proportion of well-identified users. Detection with FPR=\(10^{-6}\) is run beforehand, and we consider it an error if the image is not flagged.
Figure 5: Transformations evaluated in Sec. 4 & 5. ‘Combined’ is made of crop \(50\%\), brightness adjustment \(1.5\) and JPEG \(80\) compression.
Figure 3: **Detection results**. TPR/FPR curve of the detection under different transformations. Forensics\({}^{\dagger}\) indicates passive detection (without watermark) [12].
We also observe that the identification accuracy decreases as \(N\) increases, because the threshold \(\tau\) required to avoid false positives grows with \(N\), as pointed out by the approximation in (3). In a nutshell, by distributing more models, Alice trades some detection accuracy against the ability to identify users.
## 5 Experimental Results
We presented in the previous section how to leverage watermarks for detection and identification of images generated from text prompts. We now present more general results on robustness and image quality for different generative tasks. We also compare Stable Signature to other watermarking algorithms applied post-generation.
### Tasks & evaluation metrics
Since our method only involves the LDM decoder, it is compatible with many generative tasks. We evaluate text-to-image generation and image editing on the validation set of MS-COCO [46], and super-resolution and inpainting on the validation set of ImageNet [16] (all evaluation details are available in App. A.3).
We evaluate the image distortion with the Peak Signal-to-Noise Ratio (PSNR), which is defined as \(\mathrm{PSNR}(x,x^{\prime})=-10\cdot\log_{10}(\mathrm{MSE}(x,x^{\prime}))\) for \(x,x^{\prime}\in[0,1]^{c\times h\times w}\), as well as the Structural Similarity score (SSIM) [86]. They compare images generated with and without the watermark. On the other hand, we evaluate the diversity and quality of the generated images with the Fréchet Inception Distance (FID) [33]. The bit accuracy, i.e. the percentage of bits correctly decoded, evaluates the watermarks' robustness.
### Image generation quality
Figure 6 shows qualitative examples of how the image generation is altered by the latent decoder's fine-tuning. The difference is very hard to perceive even for a trained eye. This is surprising for such a low PSNR, especially since the watermark embedding is not constrained by any Human Visual System model as in professional watermarking techniques. Most interestingly, the LDM decoder has indeed learned to add the watermark signal only over textured areas
Figure 6: Images generated from text with or without the watermarked generative models. The PSNR is \(35.4\) dB in the first row and \(28.6\) dB in the second. Images generated with Stable Signature look natural because modified areas are located where the eye is not sensitive. More examples are available in Appendix C.
where the human eye is not sensitive, while the uniform backgrounds are kept intact (see the pixel-wise difference). More visual results are available in App. C.
Table 1 presents a quantitative evaluation of image generation quality on the different tasks. We report the FID, and the average PSNR and SSIM that are computed between the images generated by the fine-tuned LDM and the original one. The results show that, no matter the task, the watermarking has a very small impact on the FID of the generation.
The average PSNR is around \(30\) dB and the SSIM around \(0.9\) between images generated by the original and a watermarked model. These values are a bit low from a watermarking perspective because we do not explicitly optimize for them. Indeed, in a real-world scenario, one would only have the watermarked version of the image: we do not need to stay as close as possible to the original image, but only to generate artifact-free images. Without access to the image generated by the original LDM, it is very hard to tell whether a watermark is present or not.
### Watermark robustness
We evaluate the robustness of the watermark to different image transformations. For each task, we generate \(1\)k images with \(10\) models fine-tuned for different messages, and report the average bit accuracy in Table 1. The evaluated transformations are presented in Fig. 5. We provide results on additional transformations in App. B.2.
We see that the watermark is indeed robust across several tasks and transformations. The bit accuracy is always above \(0.9\), except for inpainting when replacing only the masked region of the image (between \(1\) and \(50\%\) of the image, with an average of \(27\%\) across masks). Besides, the bit accuracy is not perfect even without editing, mainly because some images are harder to watermark (_e.g_. very uniform ones, like the background in Fig. 6), and for those the accuracy is lower.
Note that this robustness is obtained even though no transformation is applied during the LDM fine-tuning phase: it comes from the watermark extractor. If the watermark encoder/extractor pipeline is trained to be robust against an augmentation, then the LDM will learn to produce watermarks that are robust against it during fine-tuning.
### Comparison to post-hoc watermarking
An alternative way to watermark generated images is to process them after the generation (post-hoc). This may be simpler, but it is less secure and efficient than Stable Signature. We compare our method to a frequency-based method, DCT-DWT [14], iterative approaches (SSL Watermark [24] and FNNS [41]), and an encoder/decoder one, HiDDeN [104]. We choose DCT-DWT since it is employed by the original open-source release of Stable Diffusion [66], and the other methods because of their performance and their ability to handle arbitrary image sizes and numbers of bits. We use our own implementations for each method; see details in App. A.4.

Table 1 compares the generation quality and the robustness over \(5\)k generated images. Overall, Stable Signature achieves comparable results in terms of robustness. HiDDeN's performance is a bit higher, but its output bits are not i.i.d., meaning that it cannot be used with the same guarantees as the other methods. We also observe that post-hoc watermarking gives worse qualitative results: images tend to present artifacts (see Fig. 13 in the supplement). One explanation is that Stable Signature is merged into the high-quality generation process of the LDM auto-encoder model, which is able to modify images in a more subtle way.
### Can we trade image quality for robustness?
We can choose to maximize the image quality or the robustness of the watermark thanks to the weight \(\lambda_{i}\) of the perceptual loss in (4). We report the average PSNR of \(1\)k generated images, as well as the bit accuracy obtained on
| Task / Method | Time (s/img) | PSNR / SSIM \(\uparrow\) | FID \(\downarrow\) | None | Crop | Bright. | Comb. |
|---|---|---|---|---|---|---|---|
| _Generation tasks_ | | | | | | | |
| Text-to-Image, LDM [64] | | 30.0 / 0.89 | 19.6 (\(-0.3\)) | 0.99 | 0.95 | 0.97 | 0.92 |
| Image Editing, DiffEdit [13] | | 31.2 / 0.92 | 15.0 (\(-0.3\)) | 0.99 | 0.95 | 0.98 | 0.94 |
| Inpainting, Full | | | | | | | |
| Inpainting, Mask only | | | | | | | |
| Super-Resolution, LDM [64] | | 34.0 / 0.94 | 11.6 (\(+0.0\)) | 0.98 | 0.93 | 0.96 | 0.92 |
| _Post generation_ | | | | | | | |
| DCT-DWT [14] | 0.14 | 39.5 / 0.97 | 19.5 (\(-0.4\)) | 0.86 | 0.52 | 0.51 | 0.51 |
| SSL Watermark [24] | 0.45 | 31.1 / 0.86 | 20.6 (\(+0.7\)) | 1.00 | 0.73 | 0.93 | 0.66 |
| FNNS [41] | 0.28 | 32.1 / 0.90 | 19.0 (\(-0.9\)) | 0.93 | 0.93 | 0.91 | 0.93 |
| HiDDeN [104] | 0.11 | 32.0 / 0.88 | 19.7 (\(-0.2\)) | 0.99 | 0.97 | 0.99 | 0.98 |
| _Merged in generation_ | | | | | | | |
| Stable Signature | 0.00 | 30.0 / 0.89 | 19.6 (\(-0.3\)) | 0.99 | 0.95 | 0.97 | 0.92 |

Table 1: Generation quality and comparison to post-hoc watermarking on 512\(\times\)512 images and \(48\)-bit signatures. The last four columns give the bit accuracy \(\uparrow\) under the corresponding transformation. PSNR and SSIM are computed between generations of the original and watermarked generators. For FID, we show in parentheses the difference with respect to the original. Post-hoc watermarking is evaluated on text-generated images. (App. B.2 gives results on more transformations, and App. A gives more details on the evaluations.) Overall, Stable Signature has minimal impact on generation quality. It has comparable robustness to post-hoc methods while being rooted in the generation itself.
the extracted message for the 'Combined' editing applied before detection (qualitative results are given in App. B.1). A higher \(\lambda_{i}\) leads to an image closer to the original one, but to lower bit accuracies on the extracted message:
| \(\lambda_{i}\) for fine-tuning | 0.8 | 0.4 | 0.2 | 0.1 | 0.05 | 0.025 |
|---|---|---|---|---|---|---|
| PSNR \(\uparrow\) | 31.4 | 30.6 | 29.7 | 28.5 | 26.8 | 24.6 |
| Bit acc. \(\uparrow\) on 'comb.' | 0.85 | 0.88 | 0.90 | 0.92 | 0.94 | 0.95 |
### What makes a good watermark extractor?
In the following experiments, we analyze the watermark pre-training. Ablations are conducted on a shorter schedule of \(50\) epochs, on \(128\times 128\) images and \(16\)-bit messages.
Attack simulation layer. Watermark robustness against image transformations depends solely on the watermark extractor. We pre-train watermark extractors with or without specific transformations in the simulation layer and plug them into the LDM fine-tuning stage. From there, we generate \(1\)k images from text prompts and report the bit accuracy of the extracted watermarks in Table 2. The extractor is naturally robust to some transformations, such as crops or brightness changes, without being trained on them, while others, like rotations or JPEG compression, require simulation during training for the watermark to be recovered at test time. Empirically, we observed that adding a transformation improves results for the latter, but makes training more challenging.
Scaling factor at pre-training. The watermark encoder does not need to be perceptually good, and it is beneficial to degrade image quality during pre-training. In Table 3, we train watermark encoders/extractors for different scaling factors \(\alpha\) (see Sec. 3.1), and observe that \(\alpha\) strongly affects the bit accuracy of the method. When it is too high, the LDM needs to generate low-quality images to reach the same performance, because the distortions seen by the extractor at pre-training are too strong. When it is too low, the distortions are not strong enough for the watermarks to be robust: the LDM will learn how to generate watermarked images, but the extractor will not be able to extract the messages from edited images.
## 6 Attacks on Stable Signature's Watermarks
We examine the watermark's resistance to intentional tampering, as opposed to distortions that happen without bad intentions like crops or compression (discussed in Sec. 4). We consider two threat models: one is typical for many image watermarking methods [14] and operates at the image level, and another targets the generative model level. For image-level attacks, we evaluate on \(5\)k images generated from COCO prompts. Full details on the following experiments can be found in Appendix A.5.
### Image-level attacks
Watermark removal. Bob alters the image to remove the watermark with deep learning techniques, like the methods used for adversarial purification [74, 90] or neural auto-encoders [1, 47]. Note that this kind of attack has not been explored in the image watermarking literature, to our knowledge. Figure 7 evaluates the robustness of the watermark against neural auto-encoders [4, 11, 20, 64] at different compression rates. To reduce the bit accuracy close to random (50%), the image distortion needs to be strong (PSNR\(<\)26). However, if the attack is _informed on the generative model_, i.e. the auto-encoder is the same as the one used to generate the images, the attack becomes much more effective. It erases the watermark while achieving high quality (PSNR\(>\)29). This is because the image is modified precisely in the bandwidth where the watermark
| Simulation layer at training | Crop 0.1 | Rot. 90 | JPEG 50 | Bright. 2.0 | Res. 0.7 |
|---|---|---|---|---|---|
| ✗ | 1.00 | 0.56 | 0.50 | 0.99 | 0.48 |
| ✔ | 1.00 | 0.99 | 0.90 | 0.99 | 0.91 |

Table 2: Role of the simulation layer. The entries give the bit accuracy \(\uparrow\) at test time under each transformation.
| Scaling factor \(\alpha\) | 0.8 | 0.4 | 0.2 | 0.1 | 0.05 |
|---|---|---|---|---|---|
| (P\(_{1}\)) PSNR \(\uparrow\) | 16.1 | 21.8 | 27.2 | 33.5 | 39.3 |
| (P\(_{2}\)) PSNR \(\uparrow\) | 27.9 | 30.5 | **30.8** | 28.8 | 27.8 |
| (P\(_{1}\)) Bit acc. \(\uparrow\) on 'none' | 1.00 | 1.00 | 0.86 | 0.72 | 0.62 |
| (P\(_{2}\)) Bit acc. \(\uparrow\) on 'none' | 0.98 | **0.98** | 0.91 | 0.90 | 0.96 |
| (P\(_{2}\)) Bit acc. \(\uparrow\) on 'comb.' | **0.86** | 0.73 | 0.82 | 0.81 | 0.69 |

Table 3: Influence of the (discarded) watermark encoder's perceptual quality. P\(_{1,2}\) stands for Phase 1 or 2.
Figure 7: **Removal attacks.** \(x_{o}\) is the image produced by the original generator; \(x_{r}\) is the version produced by the watermarked generator and then attacked. Bit accuracy is measured on the watermark extracted from \(x_{r}\). Neural auto-encoders follow the same trend, except the one used by the LDM ('KL-f8' for our LDM). When access to the watermark extractor is granted, adversarial attacks also remove the watermark at a lower PSNR budget.
is embedded. Note that this assumption is strong, because Alice does not need to distribute the original generator.
Watermark removal & embedding (white-box). To go further, we assume that the attack is _informed on the watermark extractor_, because it has leaked. Bob can use an adversarial attack to remove the watermark by optimizing the image under a PSNR constraint. The objective is to minimize the \(\ell_{2}\) distance between a random binary message sampled beforehand and the extractor's output, effectively replacing the original signature with a random one. This makes it possible to erase the watermark with a lower distortion budget, as seen in Fig. 7.
Instead of removing the watermark, an attacker could embed a signature into vanilla images (unauthorized embedding [14]) to impersonate another Bob of whom they have a generated image. This highlights the importance of keeping the watermark extractor private.
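For illustration, the white-box removal attack can be sketched as below, assuming gradient access to the extractor; the clamped perturbation budget is our own simplification of the paper's PSNR constraint.

```python
# A minimal sketch of the white-box removal attack, assuming gradient access
# to the extractor W. The clamp is a crude stand-in for a PSNR constraint.
import torch

def whitebox_removal(x_w, extractor, steps=100, lr=1e-2, budget=0.03):
    with torch.no_grad():
        target = torch.randint_like(extractor(x_w), 0, 2)  # random message
    delta = torch.zeros_like(x_w, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        soft = torch.sigmoid(extractor(x_w + delta))
        loss = ((soft - target) ** 2).mean()   # l2 to the random message
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-budget, budget)      # distortion budget
    return (x_w + delta).clamp(0, 1).detach()
```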
### Network-level attacks
Model purification. Bob gets Alice's generative model and uses a fine-tuning process akin to Sec. 3.2 to eliminate the watermark embedding, which we coin _model purification_. This involves removing the message loss \(\mathcal{L}_{m}\) and shifting the focus to the perceptual loss \(\mathcal{L}_{i}\) between the original image and the one reconstructed by the LDM auto-encoder.
Figure 8 shows the results of this attack for the MSE loss. The PSNR between the watermarked and purified images is plotted at various stages of fine-tuning. Empirically, it is difficult to significantly reduce the bit accuracy without compromising the image quality: artifacts start to appear during the purification.
Model collusion. Users may collude by aggregating their models. For instance, Bob\({}^{(i)}\) and Bob\({}^{(j)}\) can average the weights of their models (like Model soups [87]), creating a new model to deceive identification. We found that the bit at position \(\ell\) output by the extractor will be \(0\) (resp. \(1\)) when the \(\ell\)-th bits of Bob\({}^{(i)}\) and Bob\({}^{(j)}\) are both \(0\) (resp. \(1\)), and that the extracted bit is random when their bits disagree. We examined the distributions of the soft bits (before thresholding) output by the watermark extractor on images generated by the averaged model, with the \(\ell\)-th output labeled by the bits of Bob\({}^{(i)}\) and Bob\({}^{(j)}\) (00 meaning both have 0 at position \(\ell\)).
This so-called _marking assumption_ plays a crucial role in the traitor tracing literature [26, 54, 79]. Surprisingly, it holds even though our watermarking process is not explicitly designed for it. The study has room for improvement, such as creating user identifiers with more powerful traitor tracing codes [79] and using more powerful traitor accusation algorithms [26, 54]. Importantly, we found that the preceding remarks also hold if the colluders operate at the image level.
## 7 Related Work
Image generation has long been dominated by GANs, which are still state-of-the-art on many datasets [36, 37, 38, 72, 83]. Transformers have also been used successfully to model image [62, 19] or video [75] distributions, allowing higher diversity at the cost of increased inference time. Images are typically converted to a list of tokens with vector-quantized architectures [20, 63, 92]. These generative models rely on an image decoder at the end, making Stable Signature directly applicable to such methods.
Diffusion models [18, 35, 57, 76] have brought huge improvements in text-conditional image generation, being now able to synthesize high-resolution photo-realistic images for a wide variety of text prompts [3, 34, 61, 65, 70]. They can also perform conditional image generation tasks - like inpainting or text-guided image editing - by fine-tuning the diffusion model with additional conditioning, masked input image, segmentation map, etc. [49, 69]. Because of their iterative denoising algorithm, diffusion models can also be adapted for image editing in a zero-shot fashion by guiding the generative process [13, 32, 39, 55, 81, 88]. All these methods, when applied on top of Stable Diffusion, operate in the latent space of images, requiring a latent decoder to produce an RGB image.
Detection of AI-generated/manipulated images is notably active in the context of deep-fakes [31, 103]. Many works focus on the detection of GAN-generated images [10, 30, 85, 102]. One way is to detect inconsistencies in the generated images, via lighting, perspective or physical objects [21, 22, 44, 51, 84]. These approaches are restricted to photo-realistic images or faces, and do not cover artworks where objects are not necessarily physically correct.
Figure 8: **Robustness to model purification**, fine-tuning the model to remove watermarks. \(x_{w}\) is the watermarked image, \(x_{r}\) is generated with the purified model at different steps of the process.
Other approaches use traces left by the generators in the spatial [53, 93] or frequency [25, 102] domains. They have been extended to diffusion models in recent works [12, 73], with encouraging results. However, relying purely on forensics and passive detection is limiting. As an example, the best performing method to our knowledge [12] is able to detect \(50\%\) of generated images at an FPR around \(1/100\). Put differently, if a user-generated content platform were to receive \(1\) billion images every day, it would need to wrongly flag \(10\) million images to detect only half of the generated ones. Besides, passive techniques cannot trace images from different versions of the same model, in contrast to active ones like watermarking.
Image watermarking has long been studied in the context of tracing and intellectual property protection [14]. More recently, deep learning encoder/extractor alternatives like HiDDeN [2, 43, 50, 96, 104] or iterative methods by Vukotic _et al_. [24, 41, 82] showed competitive results in terms of robustness to a wide range of transformations, notably geometric ones.
In the specific case of **generative models**, some works deal with watermarking the training set on which the generative model is learned [94]. This is highly inefficient since every new message to be embedded requires a new training pipeline. Merging the watermarking and the generative process is a recent idea [45, 89, 95, 98] that is closer to the model watermarking literature [80]. These methods suffer from two strong limitations. First, they only apply to GANs, while LDMs are beginning to replace them in most applications. Second, watermarking is incorporated into the training process of the GAN from the start. This strategy is very risky because generative model training is more and more costly3. Our work shows that a quick fine-tuning of the latent decoder part of the generative model is enough to achieve good watermarking performance, provided that the watermark extractor is well chosen.
Footnote 3: Stable Diffusion training costs \(\sim\)$600k of cloud compute (Wikipedia).
## 8 Conclusion & Discussion
By a quick fine-tuning of the decoder of Latent Diffusion Models, we can embed watermarks in all the images they generate. This does not alter the diffusion process, making it compatible with most of LDM-based generative models. These watermarks are robust, invisible to the human eye and can be employed to _detect_ generated images and _identify_ the user that generated it, with very high performance.
The public release of image generative models already has an important societal impact. With this work, we put to light the usefulness of using watermarking instead of relying on passive detection methods. We hope it will encourage researchers and practitioners to employ similar approaches before making their models publicly available.
## Ethical Statement
### Societal Impact
The public release of image generative models already has an important societal impact. There is a risk of misuse of these models, when employed to (1) generate fake content presented as real (2) generate small variations of copyrighted images and sell them as originals. However, this work makes it possible for researchers and practitioners to distribute their models and at the same time to trace back images to the model that generated them. Therefore, it can be seen as a mitigation for the societal risks posed by image generation.
Watermarking in general improves the traceability of content. This traceability can have negative consequences, for example when it is used to trace political opponents in authoritarian regimes or whistleblowers in secretive companies. We believe that this risk is mitigated in our use case since generated images do not have much political content in general.
### Environmental Impact
We do not expect any environmental impact specific to this work. The cost of the experiments and the method is high, though orders of magnitude less than in other computer vision fields. We roughly estimate the total GPU-days used for running all our experiments at \(2000\), or \(\approx 50000\) GPU-hours. This amounts to total emissions on the order of 10 tons of CO\({}_{2}\)eq, excluding the training of the generative model itself, since we did not perform that training. Estimations are conducted using the Machine Learning Impact calculator presented by Lacoste _et al_. [42]. We do not consider in this approximation: memory storage, CPU-hours, production cost of GPUs/CPUs, etc.
### Reproducibility Statement
The code to reproduce the experiments will be open-sourced. Although the diffusion-based generative model has been trained on proprietary data, we use the KL auto-encoder from LDM [64] with compression factor \(f=8\). This is the same one that is used in open-source alternatives.
|
2310.01550 | Spontaneously interacting qubits from Gauss-Bonnet | Building on previous constructions examining how a collection of small,
locally interacting quantum systems might emerge via spontaneous symmetry
breaking from a single-particle system of high dimension, we consider a larger
family of geometric loss functionals and explicitly construct several classes
of critical metrics which "know about qubits" (KAQ). The loss functional
consists of the Ricci scalar with the addition of the Gauss-Bonnet term, which
introduces an order parameter that allows for spontaneous symmetry breaking.
The appeal of this method is two-fold: (i) the Ricci scalar has already been
shown to have KAQ critical metrics and (ii) exact equations of motion are
known for loss functionals with generic curvature terms up to two derivatives.
We show that KAQ critical metrics, which are solutions to the equations of
motion in the space of left-invariant metrics with fixed determinant, exist for
loss functionals that include the Gauss-Bonnet term. We find that exploiting
the subalgebra structure leads us to natural classes of KAQ metrics which
contain the familiar distributions (GUE, GOE, GSE) for random Hamiltonians. We
introduce tools for this analysis that will allow for straightforward, although
numerically intensive, extension to other loss functionals and higher-dimension
systems. | Sean Prudhoe, Rishabh Kumar, Sarah Shandera | 2023-10-02T18:45:12Z | http://arxiv.org/abs/2310.01550v2 | # Spontaneously interacting qubits from Gauss-Bonnet
###### Abstract
Building on previous constructions examining how a collection of small, locally interacting quantum systems might emerge via spontaneous symmetry breaking from a single-particle system of high dimension, we consider a larger family of geometric loss functionals and explicitly construct several classes of critical metrics which "know about qubits" (KAQ). The loss functional consists of the Ricci scalar with the addition of the Gauss-Bonnet term, which introduces an order parameter that allows for spontaneous symmetry breaking. The appeal of this method is two-fold: (i) the Ricci scalar has already been shown to have KAQ critical metrics and (ii) exact equations of motion are known for loss functionals with generic curvature terms up to two derivatives. We show that KAQ critical metrics, which are solutions to the equations of motion in the space of left-invariant metrics with fixed determinant, exist for loss functionals that include the Gauss-Bonnet term. We find that exploiting the subalgebra structure leads us to natural classes of KAQ metrics which contain the familiar distributions (GUE, GOE, GSE) for random Hamiltonians. We introduce tools for this analysis that will allow for straightforward, although numerically intensive, extension to other loss functionals and higher-dimension systems.
## 1 Introduction
Although the notion of emergent gravity has been studied from a variety of perspectives1, a simpler but still illuminating question to consider is the emergence of locality. A formal structure to pose that question, at least for toy systems described by large, finite-dimensional quantum systems, was recently suggested by Freedman and Zini [7; 8; 9]. They introduced functionals of geometry on the evolutionary operators of high-dimensional quantum systems and asked whether the geometries that minimize those functionals correspond to dynamics of a many-body quantum system with a notion of local interactions. The tools
to answer this question are inner products on the Lie algebra \(\mathfrak{su}(N)\), corresponding to left-invariant metrics on the associated manifold of the Lie group \(SU(N)\)[10; 11; 12]. Those inner products, or metrics, can be used to construct probability distributions over Hamiltonians that give preference to different classes of dynamics on a Hilbert space of dimension \(N\).
Here, we expand in two technical ways on the examples considered in the original statement of this program [7]. First, we consider a new set of geometrically motivated loss functionals which have critical points corresponding to a qubit structure decomposition. Reference [7] labeled critical points of that type as KAQ since they 'know about qubits'. Motivated by the critical point structure of the Ricci scalar, we consider loss functions built from higher-order curvature terms. The exact equations of motion for such actions already exist in the literature [13; 14; 15; 16]. Secondly, we provide a construction for classes of KAQ metrics that generalize those recovered in [7] and originally found in [17]. We use this construction as an ansatz for critical points of our loss functionals. In this way we are able to determine potential KAQ critical points in the space of our ansatz metrics, which may then be checked against the equations of motion. We do not need to search in the full space of left-invariant metrics; we only need to search in the exponentially reduced space of our ansatz KAQ metrics. Although still numerically intensive, this is a promising approach to apply to systems larger than we treat in this article.
In the rest of this section, we lay out in more detail the statement of the problem and the tools to be used, including the new loss functionals. Section 2 then describes in detail the parametrizations of KAQ metrics we find useful. In Section 3 we present the equations of motion that must be solved to find critical points and apply the parametrizations to find new critical points, presented in Section 4. We conclude with implications and further directions in Section 5.
### Distributions over Hamiltonians
Rather than searching for some dynamics that picks out a specific Hamiltonian, it is natural to ask for a distribution over Hamiltonians that assigns a higher likelihood to a particular class with interesting behavior. A simple and familiar choice for a distribution over dynamics of a quantum system of dimension \(N\) is the Gaussian Unitary Ensemble [18; 19], where each independent real number of the \(N\times N\) Hermitian matrix that defines the Hamiltonian, \(H\), is independently drawn from a Gaussian distribution. Equivalently,
\[\rho_{\text{GUE}}\big{(}H\big{)}\propto e^{-\text{tr}\big{(}H^{2}\big{)}}\,. \tag{1}\]
This distribution is invariant under unitary transformations, \(H\to UHU^{\dagger}\) (as is clear from the cyclic property of the trace). It does not give preference to Hamiltonians that have the many-body local structure typical of spin systems frequently observed in nature, where the many entries in the large matrix \(H\) that correspond to interactions among nearly all the degrees of freedom would be suppressed. A different distribution that would support a many-body local structure when \(N=2^{d}\) would, for example, have a basis constructed from Pauli words for qubits, \(\otimes_{i=1}^{d}\sigma_{i}^{(J)}\), where each \(\sigma^{(J)}\) is any of the Pauli matrices or the \(2\times 2\) identity. A choice of weights assigned to operators in this basis could define a distribution
favoring some subset of operators, for example words of shorter length (measured by the number of non-identity Pauli matrices).
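As a concrete illustration (our own sketch, with a hypothetical exponential weighting), one can enumerate the Pauli-word basis and suppress longer words:

```python
# A minimal sketch: enumerate the d-qubit Pauli-word basis and assign each
# word a weight that decays with its length (number of non-identity factors).
# The exponential weighting is a hypothetical choice for illustration.
import itertools
import numpy as np

PAULI = {
    'I': np.eye(2, dtype=complex),
    'X': np.array([[0, 1], [1, 0]], dtype=complex),
    'Y': np.array([[0, -1j], [1j, 0]], dtype=complex),
    'Z': np.array([[1, 0], [0, -1]], dtype=complex),
}

def pauli_words(d):
    for letters in itertools.product('IXYZ', repeat=d):
        if all(s == 'I' for s in letters):
            continue                          # skip the identity word
        word = PAULI[letters[0]]
        for s in letters[1:]:
            word = np.kron(word, PAULI[s])
        yield ''.join(letters), word

weights = {name: np.exp(-sum(c != 'I' for c in name))
           for name, _ in pauli_words(2)}     # favours short (local) words
```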
To formalize distributions that are different from GUE, consider the set of inner products on the space of operators, which for quantum systems with Hilbert space of dimension \(N\) is the algebra \(\mathfrak{g}=\mathfrak{su}(N)\). The Lie algebra comes equipped with the Lie bracket, but the bracket itself does not assign an inner product. The GUE arises from the most symmetric choice of metric, given by the Killing form \(K:\mathfrak{g}\times\mathfrak{g}\to\mathbb{C}\), which defines a map that is invariant under a basis change. Using \(K\) defines the Killing-Cartan metric for \(X\), \(Y\) elements of the algebra
\[\delta(X,Y)\equiv\operatorname{tr}\bigl{(}\mathrm{ad}X\,\mathrm{ad}Y\bigr{)}\,, \tag{1.2}\]
where \(\mathrm{ad}X\) is the map defined by \(\mathrm{ad}X(Y)=[X,Y]\) for all \(Y\). General inner products on the algebra correspond to left-invariant metrics on the group manifold for \(G=SU(N)\), while the Killing-Cartan metric is special in that it is both left and right invariant. Although from the manifold point of view one can imagine more complex metrics that depend on some choice of coordinates, the set of left-invariant metrics is sufficient for constructing a larger class of distributions over Hamiltonians.
The more general distributions over Hamiltonians can be written
\[\rho_{\mathrm{FZ}}(\mathcal{O})\propto e^{-g(\mathcal{O},\mathcal{O})} \tag{1.3}\]
where \(g\) is a left-invariant but not bi-invariant metric over the algebra \(\mathfrak{su}(N)\). It defines a preferred subsystem decomposition by its principal axes. Each inner product on the algebra has a corresponding orthonormal basis \((X_{a})\) such that \(\langle X_{a},X_{b}\rangle=\delta_{ab}\), where \(\delta_{ab}\) is the Kronecker delta and the indices take values between \(1\) and \(4^{d}-1\). The orthonormal basis of an inner product is generically a non-holonomic or non-coordinate basis, as the structure constants \(C^{c}_{\phantom{c}ab}\), where
\[[X_{a},X_{b}]=\sum_{c}C^{c}_{\phantom{c}ab}X_{c}\,, \tag{1.4}\]
are non-zero.
### 1.2 Many-body dynamics via preferred geometries
Although one can construct any distribution over Hamiltonians by hand, it is interesting to ask if there may be a geometrical means of picking out interesting classes, which perhaps could be dynamically realized as a spontaneous symmetry breaking process that fragments a large "single-particle" quantum system into an ensemble of small quantum systems interacting in a way that resembles local, many-body physics. Freedman and Zini [7; 8; 9] considered a family of functionals of the group geometry, parameterized by the metric and the structure constants, and explored whether the minima of these functionals select out KAQ metrics.
Here, we explore this idea a bit further by looking at functionals \(\mathcal{L}[g]\) that are natural on the group manifold, defined by the set of two-derivative curvature tensors
\[\begin{split}\mathcal{L}[g]&=R+\alpha R^{2}+\beta R_{ab}R^{ab}+\gamma R_{abcd}R^{abcd}\\ &=R+\alpha\mathcal{R}_{0}+\beta\mathcal{R}_{2}+\gamma\mathcal{R}_{4}\,.\end{split} \tag{1.5}\]
The minima of just the Ricci scalar, \(R\), were first studied by [17], and the same results were recovered by [7]. The larger class of functionals in Eq.(1.5) is especially tractable to study since the conditions for a metric to be a critical point are already known [16; 20]. We return to this in Section 3. While [7] used numerical techniques to find all critical points of some loss functionals and then determine if they corresponded to KAQ or non-KAQ metrics, we take a different approach and instead explore whether or not KAQ metrics occur as critical points (and ideally minima) of an expanded set of loss functions. To do so, we next introduce parametrizations of KAQ metrics, including generalizations of those that correspond to the KAQ critical points found in [17] for \(\mathcal{L}[g]=R\). This allows us to explore the structure and properties of the KAQ metrics in more detail, although with the drawback that we cannot determine the relative frequency of KAQ vs non-KAQ minima.
## 2 KAQ parametrization schemes
Among the \(\binom{N+1}{2}\) distinct metrics on \(SU(N)\), only some will have a structure that is compatible with a tensor product decomposition into \(d\) qubit operators when \(N=2^{d}\). This notion can be expanded to apply to tensor decompositions of more general large \(N\) spaces that can include factors of dimension other than two (qudits) [8; 9]. The KAQ property of a metric \(g\) is decoded from its (possibly non-unique) principal axes. An observable \(E_{a}\) is said to be a principal axis of the metric if
\[g^{a}_{\ b}E_{a}=\lambda_{b}E_{b} \tag{2.1}\]
that is, \(E_{a}\) is an eigenvector of the tensor \(g^{a}_{\ b}=g_{cb}\delta^{ca}\), where \(\delta_{ab}\) is the Killing-Cartan metric. The set \(\{E_{a}\}\) is only required to be a linearly independent set, hence the use of a notation different from \(\{X_{a}\}\). The metric is said to be KAQ if \(\mathbf{a}\) basis of principal axes exists such that
\[\begin{split}&\Phi[E_{a}]=\sigma_{1}^{(a_{1})}\otimes...\otimes \sigma_{d}^{(a_{d})}\\ &[E_{a},E_{b}]=\mathcal{P}^{c}_{\ ab}E_{c}\end{split} \tag{2.2}\]
where \(\Phi\) is a Lie algebra isomorphism, i.e. a bijective linear map which preserves commutation relations, and the tensors \(\mathcal{P}^{c}_{\ ab}\) are the structure constants of \(\mathfrak{s}\big{(}\mathfrak{u}(2)^{\otimes d}\big{)}\), the Lie algebra \(\mathfrak{u}(2)^{\otimes d}\) with the generator \(\mathbb{1}^{\otimes d}\) removed.
The structure of any degenerate eigenspaces of a metric determines how much freedom there is to find a KAQ basis. In the case of no degenerate eigenvalues, the metric can only be KAQ if all principal axes are already aligned with the Pauli word basis. This is clearly a special case. More generally, a degenerate eigenspace may be decomposed into a basis that aligns with Pauli words, although some "decoding" may be required. Decoding here means that the degenerate axes in question need to be mixed by means other than unitary conjugation. For example, an obvious structure to make use of in searching for KAQ metrics is the \(\mathfrak{so}(N)\) subalgebra of \(\mathfrak{su}(N)\), which one might expect contains a set of axes decodable to the length-one Pauli words, the single-qubit operators [21]. In the case of \(\mathfrak{su}(4)\), the construction is simple: if one starts with the natural basis of Gell-Mann matrices, the single-qubit operators can be recovered by identifying linear combinations of matrices in the sub-algebra that have a tensor product form and then making a rotation to align those linear combinations with length-one Pauli words. On the other hand, the other obvious subalgebra decomposition, \(\mathfrak{sp}(N/2)\), contains degenerate subspaces that must align with the Pauli word basis, and others which may require decoding. We consider this case in detail below to illuminate the relationship of the known critical points of the Ricci scalar, which carry this subgroup structure, to critical points with KAQ structure.
It is important to stress that the metric will typically have many possible bases of principle axes, and only one needs to satisfy the KAQ condition. Furthermore, among that restricted set of KAQ metrics, not all will generate many-body local dynamics by suppressing the contributions from Pauli words with length close to \(d\).
In order to construct parametrizations of KAQ metrics, which can then be used as ansatze for critical metrics, we use the fact that a left-invariant metric over a Lie group is entirely specified by an inner product over the corresponding Lie algebra. In this way instead of referring to the metric, one may just as well refer to an orthonormal basis of the associated inner product. For excellent reviews of the Lie algebra and geometric background needed for these constructions, see [10; 11; 12].
### 2.1 Riemannian geometry of compact symmetric Lie algebras
Consider first the Killing-Cartan geometry naturally available to SU(4). It is the unique (up to scale) bi-invariant geometry over the special unitary group, and describes color dynamics (in 0+1 dimensions) mediated by 15 gluons. The generalized Gell-Mann matrices, \(G_{a}\)[22] provide an orthonormal basis. We take a non-standard normalization
\[\mathrm{tr}\big{(}G_{a}^{\dagger}G_{b}\big{)}=\frac{1}{2}\delta_{ab}\,, \tag{2.3}\]
which will enter the structure constants
\[[G_{a},G_{b}]=-iK^{c}_{\ ab}\,G_{c}\,. \tag{2.4}\]
Denoting \(K_{a}\) as the matrix of structure constants with entries \(K^{c}_{\ ab}\) in the Gell-Mann basis we then have
\[\mathrm{tr}K_{a}K_{b}=-4\delta_{ab} \tag{2.5}\]
and bi-invariance implies that \(K^{\mathrm{T}}_{a}=-K_{a}\), i.e. the structure constants are totally antisymmetric. The left-hand side of Eq.(2.5) is negative definite due to the compactness of the special unitary group. The normalization choice here leads to a convenient normalization later on in the loss functionals, where the Ricci scalar takes the value \(R=15=N^{2}-1\) when evaluated over the Killing-Cartan geometry.
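The normalization and antisymmetry claims are easy to verify numerically. The sketch below is our own hedged check, not code from the paper: it builds a generalized Gell-Mann basis for \(\mathfrak{su}(4)\), extracts structure constants via \(K_{cab}=2i\,\mathrm{tr}\big(G_{c}[G_{a},G_{b}]\big)\) (which follows from Eq.(2.4) and the normalization Eq.(2.3)), and confirms \(\mathrm{tr}K_{a}K_{b}=-4\delta_{ab}\).

```python
import numpy as np

def su_basis(N):
    """Hermitian generalized Gell-Mann matrices with tr(G_a G_b) = delta_ab / 2."""
    E = lambda j, k: np.eye(N)[:, [j]] @ np.eye(N)[[k], :]
    G = [(E(j, k) + E(k, j)) / 2 for j in range(N) for k in range(j + 1, N)]        # S_l
    G += [-1j * (E(j, k) - E(k, j)) / 2 for j in range(N) for k in range(j + 1, N)]  # A_l
    G += [np.diag([1.] * l + [-l] + [0.] * (N - l - 1)) / np.sqrt(2 * l * (l + 1))
          for l in range(1, N)]                                                      # D_p
    return G

G = su_basis(4)
n = len(G)   # 15
# [G_a, G_b] = -i K_cab G_c with dual <G_c| = 2 tr(G_c .)  =>  K_cab = 2i tr(G_c [G_a, G_b])
K = np.real(np.array([[[2j * np.trace(G[c] @ (G[a] @ G[b] - G[b] @ G[a]))
                        for b in range(n)] for c in range(n)] for a in range(n)]))
# tr(K_a K_b) = -4 delta_ab, and K_a^T = -K_a (total antisymmetry)
assert np.allclose(np.einsum('aij,bji->ab', K, K), -4 * np.eye(n))
assert all(np.allclose(K[a], -K[a].T) for a in range(n))
```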
The Killing-Cartan geometry serves the role of a simple fiducial metric, as well as the assumed unstable starting point for spontaneous symmetry breaking. Instead of studying a more complicated left-invariant metric directly, one may look at the linear transformation (\(\omega\)) relating one of its orthonormal bases to that of the Killing-Cartan geometry [17]. The transformation \(\omega\) is required to have unit determinant, a condition we impose in order to use the equations of motion referenced in Section 3. The linear transformation is related to the metric when expressed in the Gell-Mann basis as \(g=\omega^{-2}\). One may check that such a transformation indeed maps the Gell-Mann letters to an orthonormal basis of \(g\).
Assume we have related a new orthonormal basis to the Gell-Mann matrices by a special linear transformation \(\tilde{G}_{a}=\sum_{b}\omega_{ab}G_{b}\). Then the structure constants are related by
\[\tilde{K}_{a}=\sum_{b}\omega_{ab}[\omega^{-1}K_{b}\omega]\,, \tag{2.6}\]
where \(K_{b}\) is the matrix with \(kj\)th entry \(K^{k}_{\ bj}\). For these calculations the location of an index (up or down) is unimportant, as indices are raised and lowered with the identity matrix in an orthonormal basis. The preceding equation makes explicit the matrix multiplication that must be performed to determine the new structure constants. That is, the transformation law may be written equivalently as
\[\tilde{K}^{c}_{\,\,ad}=\tilde{K}_{cad}=\sum_{bef}\omega_{ab}\omega_{ce}^{-1}K_{ebf}\omega_{fd}\,. \tag{2.7}\]
The utility is that now curvature functions may be expressed in terms of Killing-Cartan tensor networks, where the total anti-symmetry can be leveraged.
We shall end with a final bit of notation, which simplifies the construction of KAQ metrics. As the metrics are considered over the Lie algebra, it is helpful to use the adjoint representation (\(\mathrm{ad}\mathfrak{g}\)). Here observables themselves become operators acting over \(\mathfrak{g}\). For example, we can define kets from the Gell-Mann matrices \(G_{a}\in\mathfrak{su}(2^{d})\), denoted \(|G_{a}\rangle\). We then define dual vectors with respect to the Killing-Cartan metric i.e.
\[\langle G_{b}|\equiv 2\text{tr}\left[G_{b}^{\dagger}(\cdot)\right]\,. \tag{2.8}\]
Defined in this way \(\{|G_{a}\rangle\}\) is an orthonormal basis of \(\mathfrak{g}\), with inner product \(\langle G_{a}|G_{b}\rangle=\delta_{ab}\). Projectors are defined in the standard way
\[\Pi_{a}=|G_{a}\rangle\langle G_{a}|\,, \tag{2.9}\]
and the action of the observables becomes
\[\text{ad}G_{a}|G_{b}\rangle=K_{a}|G_{b}\rangle=|[G_{a},G_{b}]\rangle=\sum_{c}K_{cab}|G_{c}\rangle\,. \tag{2.10}\]
We also obtain a matrix representation for the action of any metric over \(\mathfrak{g}\). For example the linear transformation \(\omega\) takes the form
\[\omega=\sum_{ab}\omega_{ab}|G_{a}\rangle\langle G_{b}| \tag{2.11}\]
where \(\omega_{ab}=\omega_{ba}\) and \(\omega_{ab}\in\mathbb{R}\).
### 2.2 Jensen geometries over SU(4)
In [17] the author searched for the critical points of the Ricci scalar curvature (\(R\)) in the space of left-invariant metrics with fixed determinant. It was shown that for unimodular
Lie algebras (\(\operatorname{tr}\left[C_{a}\right]=0\)), Einstein metrics are precisely the critical points of \(R\). While the proof was not constructive for general unimodular Lie groups, by specializing the author was able to construct several classes of Einstein metrics, now called Jensen metrics. These were found over symmetric Lie algebras, those Lie algebras that may be Cartan decomposed in at least one way. The Cartan decomposition plays a crucial role in forming the ansatz metric used to find these Einstein metrics. Thinking about decompositions that leverage algebraic structures is a point we pursue further in this work, so it is useful to perform the detailed construction of the Jensen metrics as a starting point.
If one further assumes the manifold is compact, then a non-trivial left-invariant Einstein metric exists. The Lie algebra \(\mathfrak{su}(4)\) is compact and symmetric. Thus it may be Cartan decomposed into a subalgebra and its orthogonal complement with respect to the Killing-Cartan form, allowing for non-trivial Ricci critical metrics. There are two inequivalent ways to do this, corresponding to the two non-isomorphic subalgebras \(\mathfrak{so}(4)\), and \(\mathfrak{sp}(2)\) the compact symplectic Lie algebra.
The heart of the Cartan decomposition is to break the Lie algebra into the subalgebra and the orthogonal complement with respect to the Killing-Cartan form. That is, given a subalgebra \(\mathfrak{R}\subset\mathfrak{g}\) we decompose \(\mathfrak{g}\) as
\[\mathfrak{g}=\mathfrak{R}\oplus\mathfrak{M} \tag{2.12}\]
where \(\delta(A,B)=0\) for all \(A\in\mathfrak{R}\) and \(B\in\mathfrak{M}\). We say the pair \((\mathfrak{g},\mathfrak{R})\) forms a Cartan decomposition if the following are satisfied
\[[\mathfrak{R},\mathfrak{R}]\subseteq\mathfrak{R}\quad[\mathfrak{R},\mathfrak{M}]\subseteq\mathfrak{M}\quad[\mathfrak{M},\mathfrak{M}]\subseteq\mathfrak{R}\,. \tag{2.13}\]
The commutation relations above are equivalent to the existence of a certain kind of Lie algebra isomorphism, known as the Cartan involution \((\theta)\). Explicitly, a Cartan involution is a Lie algebra automorphism \(\theta:\mathfrak{g}\to\mathfrak{g}\) such that [23]
\[\theta=\begin{bmatrix}\mathbb{1}_{r}&0\\ 0&-\mathbb{1}_{m}\end{bmatrix} \tag{2.14}\]
where \(r=\dim\mathfrak{R}\) and \(m=\dim\mathfrak{M}\). In what follows we construct the two types of Jensen metrics that exist for \(\mathfrak{su}(4)\). To do so, we construct the transformation \(\omega\), which takes an orthonormal basis of the Killing-Cartan metric to an orthonormal basis of the given critical metric. These transformations are generated by the symmetric traceless operator
\[B=\begin{bmatrix}\frac{1}{r}\mathbb{1}_{r}&\mathbf{0}\\ \mathbf{0}&-\frac{1}{m}\mathbb{1}_{m}\end{bmatrix} \tag{2.15}\]
yielding the linear transformation
\[\omega=\exp\tau B\,. \tag{2.16}\]
The metric corresponding to \(\omega(\tau)\) is a non-trivial critical point of the Ricci scalar iff
\[\tau=\frac{rm}{2(r+m)}\ln\frac{2r+m}{2r-m}\,. \tag{2.17}\]
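For \(\mathfrak{su}(4)\) the two decompositions give \((r,m)=(6,9)\) for \(\mathfrak{so}(4)\) and \((10,5)\) for \(\mathfrak{sp}(2)\); a one-line evaluation of Eq.(2.17) (our own quick check, assuming those dimensions) gives the critical parameters and the corresponding block rescalings.

```python
import numpy as np

def jensen_tau(r, m):
    """Critical parameter of Eq. (2.17) for a Cartan pair with dim R = r, dim M = m."""
    return r * m / (2 * (r + m)) * np.log((2 * r + m) / (2 * r - m))

# The two Cartan decompositions of su(4): so(4) (r=6, m=9) and sp(2) (r=10, m=5)
for name, r, m in [("so(4)", 6, 9), ("sp(2)", 10, 5)]:
    tau = jensen_tau(r, m)
    # omega = exp(tau B) rescales the two blocks by exp(tau/r) and exp(-tau/m)
    print(name, tau, np.exp(tau / r), np.exp(-tau / m))
```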
These metrics are KAQ for the following reasons. First, the subalgebras which are pulled out for these two Cartan decompositions are generated by known sets of Pauli words. This, in conjunction with the large degeneracy in the Cartan blocks, ensures a KAQ basis may be defined. We do this explicitly in the next section.
Before moving to our more general sets of KAQ parametrizations, it is worth explaining the algebraic structures we leverage. To begin we simply note that the Jensen metrics have a reduced symmetry structure compared to the Killing-Cartan metric. Using the form of Jensen metrics, defined by Eq.(2.15)-Eq.(2.17), and the commutation relations, one may show that the Jensen metrics are only bi-invariant with respect to \(\mathfrak{R}\). Since Jensen metrics are not ad\(\mathfrak{g}\) invariant, they cannot support GUE dynamics. However the ad\(\mathfrak{R}\) invariance allows for the creation of Gaussian ensembles over smaller sets of matrices. For example the \(\mathfrak{so}(4)\) Jensen metric gives dynamics via the Ricci scalar where the GUE spontaneously breaks down to a model which approximates a lower dimensional Gaussian orthogonal ensemble (GOE),
\[\rho=\rho_{\text{GUE}}(H^{(N^{2}-1)})\rightarrow\rho\approx\rho_{\text{GOE}}(H^{(N(N-1)/2)})\,. \tag{2.18}\]
It does not exactly yield the GOE as there is a non-negligible probability for observables outside of the \(\mathfrak{so}(4)\) subalgebra to contribute to the Hamiltonian. Similarly, the \(\mathfrak{sp}(2)\) Jensen metrics generate an approximate Gaussian Symplectic Ensemble (GSE).
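A sketch of how Eq.(2.18) can be realized in practice (ours, not from the paper): sample \(H\) with axis variances set by a Jensen-like metric that enhances the \(\mathfrak{so}(4)\) directions and suppresses the complement. The particular weights below are illustrative, not the critical Jensen values.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 4
E = lambda j, k: np.eye(N)[:, [j]] @ np.eye(N)[[k], :]
# Hermitian generators spanning the so(4) block (imaginary antisymmetric) and its
# Killing-orthogonal complement (real symmetric off-diagonal plus diagonal traceless)
so4  = [-1j * (E(j, k) - E(k, j)) / 2 for j in range(N) for k in range(j + 1, N)]
comp = [(E(j, k) + E(k, j)) / 2 for j in range(N) for k in range(j + 1, N)]
comp += [np.diag([1.] * l + [-l] + [0.] * (N - l - 1)) / np.sqrt(2 * l * (l + 1))
         for l in range(1, N)]

g_R, g_M = 0.1, 10.0   # illustrative metric weights: so(4) axes enhanced, complement suppressed
def sample_H():
    """Draw H ~ exp(-g(H,H)); each orthonormal axis has variance 1/(2 g_axis)."""
    c_R = rng.normal(scale=1 / np.sqrt(2 * g_R), size=len(so4))
    c_M = rng.normal(scale=1 / np.sqrt(2 * g_M), size=len(comp))
    return sum(c * X for c, X in zip(c_R, so4)) + sum(c * X for c, X in zip(c_M, comp))

# As g_M grows, the draws concentrate on the so(4) subalgebra: the 6-dimensional,
# approximately GOE-like regime on the right-hand side of Eq.(2.18).
```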
But there are other known examples of Einstein metrics over SU(N), which have more varied structure than Jensen metrics. Most of these metrics are naturally reductive, like the Jensen metrics. Naturally reductive metrics are more general than Jensen metrics, as they need not be ad\(\mathfrak{R}\) invariant. Instead they generate a further breakdown of the Lie algebra, which isolates certain algebraic structures of \(\mathfrak{R}\). Given a Lie algebra with an orthogonal (but not necessarily Cartan) decomposition, one further decomposes it as
\[\mathfrak{g}=\mathfrak{R}\oplus\mathfrak{M}=\mathfrak{C}\oplus\mathfrak{I}_{1}\oplus...\oplus\mathfrak{I}_{q}\oplus\mathfrak{M} \tag{2.19}\]
where \(\mathfrak{C}\) is the center of \(\mathfrak{R}\) (\([\mathfrak{C},\mathfrak{R}]\)=0) and \(\mathfrak{I}_{i}(i>0)\) are simple ideals satisfying
\[[\mathfrak{R},\mathfrak{I}_{i}]\subset\mathfrak{I}_{i}\,. \tag{2.20}\]
Using this decomposition, a general form of naturally reductive metrics is given in [24],
\[\langle\,|\,\rangle=\langle\,|\,\rangle_{\mathfrak{C}}+\lambda_{1}\mathbb{1}_{\mathfrak{I}_{1}}+...+\lambda_{q}\mathbb{1}_{\mathfrak{I}_{q}}+\mu\mathbb{1}_{\mathfrak{M}} \tag{2.21}\]
where \(\langle\,|\,\rangle_{\mathfrak{C}}\) is a general inner product over \(\mathfrak{C}\), and \(\lambda_{i}\) and \(\mu\) are non-negative. The authors used these parametrizations to find examples of naturally reductive (non-Jensen) Einstein metrics, which are critical points of \(R\). The critical points of \(R\) found in [7; 8; 9] were only of Jensen type. We find evidence later for why the non-Jensen type critical points were not found during the evolutionary search done by Freedman and Zini.
It has recently been shown that non-naturally reductive Einstein metrics exist over \(SU(N)\)[25], but we do not yet know how large the overlap is with KAQ metrics. With that said, we shall consider metric parametrizations which are non-naturally reductive. We
motivate our construction by considering metrics with a reduced bi-invariance compared with the Jensen metrics. That is, we consider metrics which are bi-invariant with respect to a Cartan subalgebra of \(\mathfrak{R}\). Explicitly, we decompose the Lie algebra as
\[\mathfrak{g}=\mathfrak{R}\oplus\mathfrak{M}=\mathfrak{R}_{0}\oplus\mathfrak{R}_ {1}\oplus...\oplus\mathfrak{R}_{q}\oplus\mathfrak{M} \tag{2.22}\]
where \(\mathfrak{R}_{0}\) is a Cartan subalgebra of \(\mathfrak{R}\), and \(\mathfrak{R}_{i}\) are subspaces satisfying
\[[\mathfrak{R}_{0},\mathfrak{R}_{i}]\subset\mathfrak{R}_{i}\,. \tag{2.23}\]
Using this decomposition we define the following metrics
\[\langle\,|\,\rangle=\langle\,|\,\rangle_{\mathfrak{R}_{0}}+\lambda_{1} \mathbb{1}_{\mathfrak{R}_{1}}+...+\lambda_{q}\mathbb{1}_{\mathfrak{R}_{q}}+ \mu\mathbb{1}_{\mathfrak{M}} \tag{2.24}\]
which are seen to be \(\mathrm{ad}\mathfrak{R}_{0}\) invariant by the previous assumptions.
### 2.3 Cipher classes
In this section we introduce our KAQ parametrizations, which we break into cipher classes. These classes are distinguished by the existence of non-isomorphic KAQ bases. A given cipher class will have a basis of principal axes that is mixed between Gell-Mann letters and Pauli words. The smallest of these classes are the untranslated KAQ (UKAQ) metrics. These metrics have a KAQ basis consisting entirely of Gell-Mann letters. For such metrics, the KAQ property can be the hardest to decode. The next biggest class is the partially translated KAQ metrics (PKAQ), followed by the fully translated KAQ metrics (FKAQ). The FKAQ metrics only have KAQ bases consisting entirely of Pauli words, thus they require no decoding when checking the KAQ property. We have the inclusion relation
\[\mathbf{U}\mathrm{KAQ}\subset\mathbf{P}\mathrm{KAQ}\subset\mathbf{F}\mathrm{ KAQ} \tag{2.25}\]
For what follows we assume the basis of principal axes maps to pure tensor Pauli words, i.e.
\[\Phi\left[\tilde{E}_{a}\right]=\sigma_{a_{1}}\otimes...\otimes\sigma_{a_{d}} \tag{2.26}\]
where \(\tilde{E}_{a}\) represents the basis of principal axes after any decoding process. It is useful to introduce some additional notation for the generalized Gell-Mann matrices, splitting them into three groups
\[\{G_{a}\}=\{\{A_{l}\},\{S_{l}\},\{D_{p}\}\} \tag{2.27}\]
which are the anti-symmetric matrices, the off-diagonal symmetric matrices, and the diagonal matrices respectively. For a generic \(N\) these indices take values in
\[\begin{split} 1&\leq l\leq\binom{N}{2}\\ 1&\leq p\leq N-1\end{split} \tag{2.28}\]
How we label the Gell-Mann matrices follows the notation from [22], although we use a different overall normalization.
#### 2.3.1 Untranslated KAQ metrics
First, we consider the class of untranslated KAQ metrics (UKAQ), which are generalizations of the \(\mathfrak{so}(4)\) Jensen metrics and considerably less degenerate. We begin by choosing our orthonormal basis of the Killing-Cartan metric. We define generalized Jensen metrics as those where the orthonormal basis is compatible with the involution \(\theta[X]=-X^{\mathrm{T}}\), i.e. we want an orthonormal basis of Killing-Cartan constructed from eigenvectors of \(\theta\). Notice that
\[\begin{split}&\theta[A_{l}]=A_{l}\\ &\theta[S_{l}]=-S_{l}\\ &\theta[D_{p}]=-D_{p}\end{split} \tag{2.29}\]
thus we may simply use Gell-Mann letters for our fiducial basis. But importantly the eigenspectra of Gell-Mann letters do not match those of the Pauli words. Therefore these bases are not related by unitary conjugation. To serve as a basis of principal axes, they must be mapped to Pauli words through a Lie algebra homomorphism. See also [21; 26]. Having ruled out the existence of an inner automorphism (unitary conjugation) that accomplishes the translation to Pauli words, the only remaining isomorphisms are the outer automorphisms. But the only non-trivial outer automorphism of \(\mathfrak{su}(n)\) is complex conjugation. Complex conjugation in combination with unitary conjugation will never be able to match the eigenspectra of Gell-Mann letters and Pauli words.
Therefore we need to be able to decode the Gell-Mann matrices for the constructed metrics to be KAQ. It is straightforward to show that they may serve as \(\mathbf{a}\) basis of principal axes, so long as we assume certain degeneracy patterns exist in the metric. To elucidate this concept, we construct the following dictionary that translates Gell-Mann letters to Pauli words
\[\begin{split}&(A_{1}+A_{6})=\frac{1}{2}\big{(}\mathbb{1}_{1}\otimes Y_{2}\big{)},\quad(A_{1}-A_{6})=\frac{1}{2}\big{(}Z_{1}\otimes Y_{2}\big{)}\\ &(A_{2}+A_{5})=\frac{1}{2}\big{(}Y_{1}\otimes\mathbb{1}_{2}\big{)},\quad(A_{2}-A_{5})=\frac{1}{2}\Big{(}Y_{1}\otimes Z_{2}\Big{)}\\ &(A_{3}+A_{4})=\frac{1}{2}\Big{(}Y_{1}\otimes X_{2}\Big{)},\quad(A_{3}-A_{4})=\frac{1}{2}\Big{(}X_{1}\otimes Y_{2}\Big{)}\,.\end{split} \tag{2.30}\]
We may further construct Pauli words from \(\mathfrak{M}\)
\[(S_{1}+S_{6}) =\frac{1}{2}\Big{(}\mathbb{1}_{1}\otimes X_{2}\Big{)},\quad(S_{1}-S_{6})=\frac{1}{2}\Big{(}Z_{1}\otimes X_{2}\Big{)} \tag{2.31}\] \[(S_{2}+S_{5}) =\frac{1}{2}\Big{(}X_{1}\otimes\mathbb{1}_{2}\Big{)},\quad(S_{2}-S_{5})=\frac{1}{2}\Big{(}X_{1}\otimes Z_{2}\Big{)}\] \[(S_{3}+S_{4}) =\frac{1}{2}\Big{(}X_{1}\otimes X_{2}\Big{)},\quad(S_{4}-S_{3})=\frac{1}{2}\Big{(}Y_{1}\otimes Y_{2}\Big{)}\] \[\Bigg{(}D_{1}-\sqrt{\frac{1}{3}}D_{2}+\sqrt{\frac{2}{3}}D_{3}\Bigg{)} =\frac{1}{2}\Big{(}\mathbb{1}_{1}\otimes Z_{2}\Big{)}\] \[\Bigg{(}D_{1}+\sqrt{\frac{1}{3}}D_{2}-\sqrt{\frac{2}{3}}D_{3}\Bigg{)} =\frac{1}{2}\Big{(}Z_{1}\otimes Z_{2}\Big{)}\] \[\Bigg{(}0D_{1}+\sqrt{\frac{2}{3}}D_{2}+\sqrt{\frac{1}{3}}D_{3}\Bigg{)} =\frac{1}{2}\Big{(}Z_{1}\otimes\mathbb{1}_{2}\Big{)}\,.\]
Therefore it is possible for Gell-Mann letters to form a KAQ basis, so long as the metric has appropriate degeneracy patterns. These degeneracies are required to be able to translate Gell-Mann letters to Pauli words using more general maps than inner automorphisms. As an example, consider \(A_{1}\) and \(A_{6}\): their weights must be the same to construct the words \(E_{1}\) and \(E_{2}\). It is these considerations that lead us to the following UKAQ metrics
\[\begin{split}\omega_{\text{UKAQ}}(\vec{r},\vec{m})&=e^{r_{1}}\bigg{(}|A_{1}\rangle\langle A_{1}|+|A_{6}\rangle\langle A_{6}|\bigg{)}+e^{r_{2}}\bigg{(}|A_{2}\rangle\langle A_{2}|+|A_{5}\rangle\langle A_{5}|\bigg{)}\\ &+e^{r_{3}}\bigg{(}|A_{3}\rangle\langle A_{3}|+|A_{4}\rangle\langle A_{4}|\bigg{)}+e^{m_{1}}\bigg{(}|S_{1}\rangle\langle S_{1}|+|S_{6}\rangle\langle S_{6}|\bigg{)}\\ &+e^{m_{2}}\bigg{(}|S_{2}\rangle\langle S_{2}|+|S_{5}\rangle\langle S_{5}|\bigg{)}+e^{m_{3}}\bigg{(}|S_{3}\rangle\langle S_{3}|+|S_{4}\rangle\langle S_{4}|\bigg{)}\\ &+e^{-\Delta}\bigg{(}|D_{1}\rangle\langle D_{1}|+|D_{2}\rangle\langle D_{2}|+|D_{3}\rangle\langle D_{3}|\bigg{)}\,,\end{split} \tag{2.32}\]
where we fix the determinant by setting \(\Delta=\frac{2}{3}\sum_{i}(r_{i}+m_{i})\). We can determine whether the UKAQ parametrization is compatible with a many-body structure. A physically interesting structure arises when
\[\begin{split}& r_{i}>0\\ & m_{i}<0\\ &\Delta\geq 0\,,\end{split} \tag{2.33}\]
where a many-body structure exists if an inner automorphism \(\Phi_{\text{MBP}}\) exists mapping \(\mathfrak{R}\) to the set of length-one Pauli words. The inner automorphism for this example is constructed by the unitary
\[U_{\text{MBP}}=\exp\left[\frac{i\pi}{4}Y_{1}\otimes Y_{2}\right] \tag{2.34}\]
which maps computational states to Bell states. Therefore, with a few assumptions about the singular values, we have shown that many-body KAQ metrics exist which generalize
the \(\mathfrak{so}(4)\) Jensen metrics. For further exploration in the more complex loss functionals, we make the simplifying assumption that \(m_{i}=-\Delta\) and \(r_{2}=r_{3}\). We then obtain a class of \(\mathrm{ad}\mathfrak{R}_{0}\) invariant metrics
\[\Omega_{\mathrm{UKAQ}}(t,s)=\omega_{\mathrm{UKAQ}}\left[\frac{t}{6},\frac{s}{6},\frac{s}{6},-\frac{t+2s}{27},-\frac{t+2s}{27},-\frac{t+2s}{27}\right] \tag{2.35}\]
where \(\mathfrak{R}_{0}\)=span\((A_{1},A_{6})\).
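Two quick numerical checks of this subsection's claims (our sketch, not code from the paper): the determinant condition on \(\omega_{\mathrm{UKAQ}}\) from Eq.(2.32), and that \(U_{\mathrm{MBP}}\) of Eq.(2.34) indeed sends a computational basis state to a maximally entangled (Bell-type) state.

```python
import numpy as np
from scipy.linalg import expm

# Determinant check for omega_UKAQ, Eq.(2.32): each r_i and m_i weight carries
# multiplicity 2, and the diagonal block carries exp(-Delta) with multiplicity 3.
rng = np.random.default_rng(2)
r, m = rng.normal(size=3), rng.normal(size=3)
Delta = (2 / 3) * (r.sum() + m.sum())
assert np.isclose(2 * (r.sum() + m.sum()) - 3 * Delta, 0.0)   # unit determinant

# Eq.(2.34): U_MBP maps |00> to a maximally entangled state
Y = np.array([[0, -1j], [1j, 0]])
U = expm(1j * np.pi / 4 * np.kron(Y, Y))
psi = U @ np.array([1, 0, 0, 0])
M = psi.reshape(2, 2)
rho_A = np.einsum('ij,kj->ik', M, M.conj())   # reduced state of qubit 1
assert np.allclose(rho_A, np.eye(2) / 2)      # maximally mixed, i.e. Bell-type
```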
#### 2.3.2 Partially translated KAQ metrics
The partially translated or PKAQ metrics have a KAQ basis which is mixed between Gell-Mann letters and Pauli words. They will be simpler to decode than UKAQ metrics, but still require some work. The PKAQ metric structure appears naturally as a generalization of the compact symplectic Jensen metrics. The KAQ basis should be compatible with the involution \(\theta[X]=\mathcal{J}X^{\mathrm{T}}\mathcal{J}\) where the matrix \(\mathcal{J}\) is defined as
\[\mathcal{J}=\begin{bmatrix}0&\mathbb{1}_{2^{d-1}}\\ -\mathbb{1}_{2^{d-1}}&0\end{bmatrix} \tag{2.36}\]
A critical difference appearing for this choice of \(\theta\) is that most Gell-Mann matrices are not eigenvectors. For example
\[\begin{split}\theta[A_{1}]&=-A_{6}\\ \theta[A_{6}]&=-A_{1}\end{split} \tag{2.37}\]
thus \(\big{(}A_{1}-A_{6}\big{)}\) is an eigenvector of \(\theta\) that lives in \(\mathfrak{R}\) and \(\big{(}A_{1}+A_{6}\big{)}\) is an eigenvector of \(\theta\) that lives in \(\mathfrak{M}\). Thus we have to use an orthonormal basis of the Killing-Cartan metric that is mixed between Gell-Mann letters and Pauli words. Using the same dictionary, but a different Cartan decomposition, the Pauli words in \(\mathfrak{R}\) are
\[\begin{split}&(S_{1}-S_{6})=\frac{1}{2}\Big{(}Z_{1}\otimes X_{2}\Big{)},\quad(A_{1}-A_{6})=\frac{1}{2}\Big{(}Z_{1}\otimes Y_{2}\Big{)}\\ &(S_{2}+S_{5})=\frac{1}{2}\Big{(}X_{1}\otimes\mathbb{1}_{2}\Big{)},\quad(A_{2}+A_{5})=\frac{1}{2}\Big{(}Y_{1}\otimes\mathbb{1}_{2}\Big{)}\\ &(S_{3}+S_{4})=\frac{1}{2}\Big{(}X_{1}\otimes X_{2}\Big{)},\quad(S_{4}-S_{3})=\frac{1}{2}\Big{(}Y_{1}\otimes Y_{2}\Big{)}\\ &(A_{3}+A_{4})=\frac{1}{2}\Big{(}Y_{1}\otimes X_{2}\Big{)},\quad(A_{3}-A_{4})=\frac{1}{2}\Big{(}X_{1}\otimes Y_{2}\Big{)}\\ & D_{1}-\sqrt{\frac{1}{3}}D_{2}+\sqrt{\frac{2}{3}}D_{3}=\frac{1}{2}\Big{(}\mathbb{1}_{1}\otimes Z_{2}\Big{)}\\ & 0D_{1}+\sqrt{\frac{2}{3}}D_{2}+\sqrt{\frac{1}{3}}D_{3}=\frac{1}{2}\Big{(}Z_{1}\otimes\mathbb{1}_{2}\Big{)}\,,\end{split} \tag{2.38}\]
and the Pauli words constructed from \(\mathfrak{M}\) are
\[\begin{split}&(S_{1}+S_{6})=\frac{1}{2}\Big{(}\mathbb{1}_{1}\otimes X_{2}\Big{)},\quad(A_{1}+A_{6})=\frac{1}{2}\Big{(}\mathbb{1}_{1}\otimes Y_{2}\Big{)}\\ &(S_{2}-S_{5})=\frac{1}{2}\Big{(}X_{1}\otimes Z_{2}\Big{)},\quad(A_{2}-A_{5})=\frac{1}{2}\Big{(}Y_{1}\otimes Z_{2}\Big{)}\\ & D_{1}+\sqrt{\frac{1}{3}}D_{2}-\sqrt{\frac{2}{3}}D_{3}=\frac{1}{2}\Big{(}Z_{1}\otimes Z_{2}\Big{)}\,.\end{split} \tag{2.39}\]
Each word constructed from the same pair of Gell-Mann letters living in different Cartan blocks must be pre-translated into the Killing-Cartan basis to be compatible with \(\theta\). For each of these pre-translated words, we may assign an independent weight in the metric. Only the pairs \((S_{3},S_{4})\) and \((A_{3},A_{4})\) have the same weights, as we do not require them to be translated to achieve the Cartan decomposition. We are thus led to the following parametrization for generalized \(\mathfrak{sp}(2)\) metrics
\[\begin{split}\omega_{\text{PKAQ}}(r_{i},m_{\alpha})&=e^{r_{1}}|Z_{1}X_{2}\rangle\langle Z_{1}X_{2}|+e^{r_{2}}|Z_{1}Y_{2}\rangle\langle Z_{1}Y_{2}|+e^{r_{3}}|X_{1}\rangle\langle X_{1}|+e^{r_{4}}|Y_{1}\rangle\langle Y_{1}|\\ &+e^{r_{5}}\big{(}|S_{3}\rangle\langle S_{3}|+|S_{4}\rangle\langle S_{4}|\big{)}+e^{r_{6}}\big{(}|A_{3}\rangle\langle A_{3}|+|A_{4}\rangle\langle A_{4}|\big{)}\\ &+e^{r_{7}}|Z_{2}\rangle\langle Z_{2}|+e^{r_{8}}|Z_{1}\rangle\langle Z_{1}|+e^{m_{1}}|X_{2}\rangle\langle X_{2}|+e^{m_{2}}|Y_{2}\rangle\langle Y_{2}|\\ &+e^{m_{3}}|X_{1}Z_{2}\rangle\langle X_{1}Z_{2}|+e^{m_{4}}|Y_{1}Z_{2}\rangle\langle Y_{1}Z_{2}|+e^{-\Delta}|Z_{1}Z_{2}\rangle\langle Z_{1}Z_{2}|\end{split} \tag{2.40}\]
where again \(\Delta\) is chosen such that \(\det\bigl{(}\omega_{\text{PKAQ}}\bigr{)}=1\). Here we clearly see that a few principal axes are still untranslated, but there is far less translation to be done than in the UKAQ example. There are of course other ways to construct a PKAQ parametrization, which are not compatible with the Cartan involution.
Again we simplify our PKAQ parametrization for further investigation. We reduce to two parameters, as it is the minimal number we need to search for non-Jensen type critical points. We take the following non-naturally reductive \(\text{ad}\mathfrak{R}_{0}\) invariant parametrization
\[\Omega_{\text{PKAQ}}(t,s)=\omega_{\text{PKAQ}}\left[\frac{t}{10},\frac{t}{10},\frac{t}{10},\frac{t}{10},\frac{s}{10},\frac{s}{10},\frac{t}{10},\frac{t}{10},-\frac{3t+2s}{25},-\frac{3t+2s}{25},-\frac{3t+2s}{25},-\frac{3t+2s}{25}\right] \tag{2.41}\]
where \(\mathfrak{R}_{0}\)=span\((Z_{1},Z_{2})\). In the next section we introduce the final cipher class used in our search.
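A symbolic check (ours) that the weight assignment in Eq.(2.41), including the repeated \(t/10\) for \(r_{7}\) and \(r_{8}\), satisfies the unit-determinant condition once the double degeneracy of the \(r_{5}\), \(r_{6}\) pairs is counted.

```python
import sympy as sp

t, s = sp.symbols('t s')
Delta = (3 * t + 2 * s) / 25
# Multiplicities in Eq.(2.41): r_1..r_4, r_7, r_8 give one axis each at t/10; the
# r_5, r_6 pairs give two axes each at s/10; the four m_alpha and the |Z1 Z2> axis
# sit at -Delta.
log_det = 6 * (t / 10) + 4 * (s / 10) - 5 * Delta
assert sp.simplify(log_det) == 0
```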
#### 2.3.3 Fully translated KAQ metrics
The final cipher class we consider is the class of fully translated KAQ (FKAQ) metrics. These metrics have no degeneracy, therefore to know about qubits the principal axes must already agree with a set of Pauli words. We use the following parametrization for FKAQ metrics
\[\begin{split}\omega_{\text{FKAQ}}(w_{i},W_{\alpha})&=e^{w_{1}}|X_{1}\rangle\langle X_{1}|+e^{w_{2}}|Y_{1}\rangle\langle Y_{1}|+e^{w_{3}}|Z_{1}\rangle\langle Z_{1}|\\ &+e^{w_{4}}|X_{2}\rangle\langle X_{2}|+e^{w_{5}}|Y_{2}\rangle\langle Y_{2}|+e^{w_{6}}|Z_{2}\rangle\langle Z_{2}|\\ &+e^{W_{1}}|X_{1}X_{2}\rangle\langle X_{1}X_{2}|+e^{W_{2}}|X_{1}Y_{2}\rangle\langle X_{1}Y_{2}|+e^{W_{3}}|X_{1}Z_{2}\rangle\langle X_{1}Z_{2}|\\ &+e^{W_{4}}|Y_{1}X_{2}\rangle\langle Y_{1}X_{2}|+e^{W_{5}}|Y_{1}Y_{2}\rangle\langle Y_{1}Y_{2}|+e^{W_{6}}|Y_{1}Z_{2}\rangle\langle Y_{1}Z_{2}|\\ &+e^{W_{7}}|Z_{1}X_{2}\rangle\langle Z_{1}X_{2}|+e^{W_{8}}|Z_{1}Y_{2}\rangle\langle Z_{1}Y_{2}|+e^{-\Delta}|Z_{1}Z_{2}\rangle\langle Z_{1}Z_{2}|\end{split} \tag{2.42}\]
where we have assumed that all the principal axes are pure tensor. We have reduced the number of parameters by choosing a set of lab frames for the qubits, i.e. a choice of \(xyz\)-axes. The parameter \(\Delta\) is chosen such that \(\omega_{\text{FKAQ}}\) has unit determinant. In this class no decoding needs to be done; only the commutation relations need to be checked to confirm the KAQ property.
As with the previous sections, we reduce the number of independent weights. The first parametrization is naturally reductive and based upon a penalty metric [27; 28; 29; 30]. We define the biased penalty metric
\[\Omega_{\rm BP}(t,s)=e^{\frac{t}{6}}\Big{(}|X_{1}\rangle\langle X_{1}|+|Y_{1}\rangle\langle Y_{1}|+|Z_{1}\rangle\langle Z_{1}|\Big{)}+e^{\frac{s}{6}}\Big{(}|X_{2}\rangle\langle X_{2}|+|Y_{2}\rangle\langle Y_{2}|+|Z_{2}\rangle\langle Z_{2}|\Big{)}\] \[+e^{-\frac{(t+s)}{18}}\Big{(}|X_{1}X_{2}\rangle\langle X_{1}X_{2}|+|X_{1}Y_{2}\rangle\langle X_{1}Y_{2}|+|X_{1}Z_{2}\rangle\langle X_{1}Z_{2}|+|Y_{1}X_{2}\rangle\langle Y_{1}X_{2}|+|Y_{1}Y_{2}\rangle\langle Y_{1}Y_{2}|\] \[+|Y_{1}Z_{2}\rangle\langle Y_{1}Z_{2}|+|Z_{1}X_{2}\rangle\langle Z_{1}X_{2}|+|Z_{1}Y_{2}\rangle\langle Z_{1}Y_{2}|+|Z_{1}Z_{2}\rangle\langle Z_{1}Z_{2}|\Big{)}\,. \tag{2.43}\]
Motivated by the construction in [31] we define a non-naturally reductive class of metrics, which we refer to as Abelian breakdown (AB)
\[\omega_{\rm AB}(t_{i}) =e^{-\Delta}\big{(}|Z_{1}\rangle\langle Z_{1}|+|Z_{2}\rangle\langle Z_{2}|+|Z_{1}Z_{2}\rangle\langle Z_{1}Z_{2}|\big{)}\] \[+e^{t_{1}}\Big{(}|X_{1}\rangle\langle X_{1}|+|X_{1}Z_{2}\rangle\langle X_{1}Z_{2}|\Big{)}+e^{t_{2}}\Big{(}|Y_{1}\rangle\langle Y_{1}|+|Y_{1}Z_{2}\rangle\langle Y_{1}Z_{2}|\Big{)}\] \[+e^{t_{3}}\Big{(}|X_{2}\rangle\langle X_{2}|+|Z_{1}X_{2}\rangle\langle Z_{1}X_{2}|\Big{)}+e^{t_{4}}\Big{(}|Y_{2}\rangle\langle Y_{2}|+|Z_{1}Y_{2}\rangle\langle Z_{1}Y_{2}|\Big{)}\] \[+e^{t_{5}}\Big{(}|X_{1}X_{2}\rangle\langle X_{1}X_{2}|+|Y_{1}Y_{2}\rangle\langle Y_{1}Y_{2}|\Big{)}+e^{t_{6}}\Big{(}|X_{1}Y_{2}\rangle\langle X_{1}Y_{2}|+|Y_{1}X_{2}\rangle\langle Y_{1}X_{2}|\Big{)}\,. \tag{2.44}\]
The name is chosen as all degenerate eigenspaces form Abelian subalgebras. We may reduce to two parameters for further investigation
\[\Omega_{\rm AB}(t,s) =e^{-\frac{8t+4s}{30}}\big{(}|Z_{1}\rangle\langle Z_{1}|+|Z_{2}\rangle\langle Z_{2}|+|Z_{1}Z_{2}\rangle\langle Z_{1}Z_{2}|\big{)}\] \[+e^{\frac{t}{10}}\Big{(}|X_{1}\rangle\langle X_{1}|+|Y_{1}\rangle\langle Y_{1}|+|X_{2}\rangle\langle X_{2}|+|Y_{2}\rangle\langle Y_{2}|\] \[\qquad\qquad+|X_{1}Z_{2}\rangle\langle X_{1}Z_{2}|+|Y_{1}Z_{2}\rangle\langle Y_{1}Z_{2}|+|Z_{1}X_{2}\rangle\langle Z_{1}X_{2}|+|Z_{1}Y_{2}\rangle\langle Z_{1}Y_{2}|\Big{)}\] \[+e^{\frac{s}{10}}\Big{(}|X_{1}X_{2}\rangle\langle X_{1}X_{2}|+|Y_{1}X_{2}\rangle\langle Y_{1}X_{2}|+|X_{1}Y_{2}\rangle\langle X_{1}Y_{2}|+|Y_{1}Y_{2}\rangle\langle Y_{1}Y_{2}|\Big{)} \tag{2.45}\]
which becomes naturally reductive, but not Jensen, when \(t=s\).
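As a sanity check (ours, not from the paper), the eigenvalue pattern of Eq.(2.45) can be verified symbolically: the determinant condition holds for all \(t,s\), and at \(t=s\) the spectrum collapses to two distinct weights.

```python
import sympy as sp

t, s = sp.symbols('t s')
weights = [(-(8 * t + 4 * s) / 30, 3), (t / 10, 8), (s / 10, 4)]   # (weight, multiplicity)
assert sp.simplify(sum(w * k for w, k in weights)) == 0            # unit determinant
print({sp.simplify(w.subs(t, s)) for w, _ in weights})             # {-2*s/5, s/10} at t = s
```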
## 3 Equations of motion
We turn our attention now to the loss functionals we consider in this work, which may only depend on the metric and structure constants. We consider loss functionals derived from curvature functionals, which are essentially tensor networks of the Christoffel connection. These classes contain structure constant networks which do not appear in [7; 8; 9] at the same order in "perturbation" (defined by the number of structure constants in the contraction). We find it appealing to construct the loss functional from curvature functionals as
1. \(R[g]\) is the lowest order term in many natural classes of such actions.
2. \(R[g]\) always has two distinct classes of KAQ critical metrics.
A larger, natural class of loss functionals is
\[\begin{split}\mathcal{L}[g]&=R+\alpha R^{2}+\beta R_{ ab}R^{ab}+\gamma R_{abcd}R^{abcd}\\ &=R+\alpha\mathcal{R}_{0}+\beta\mathcal{R}_{2}+\gamma\mathcal{R}_ {4}\,.\end{split} \tag{3.1}\]
An additional improvement on the pure Ricci theory is that this loss functional contains order parameters, allowing for the appearance of quantum subsystems through spontaneous symmetry breaking. Unlike the loss functional considered in the FZ program, our model does not impose the relationships satisfied amongst the coefficients of the FZ loss functional. But, as a function of the structure constants, it should in principle allow for many-body structures. In Appendix A we give explicit formulas for the functionals \(\mathcal{R}_{2}\) and \(\mathcal{R}_{4}\), and in Appendix B we provide a graphical method for collecting various tensor networks, and use them for comparison to FZ loss functionals. The results of [16; 20] provide the equations of motion that must be satisfied in order for \(g\) to be a critical metric of the loss functional \(\mathcal{L}[g;\alpha,\beta,\gamma]\) over a compact manifold while enforcing a fixed volume element. The last point is why the determinant of the metric must be fixed for our purposes.
The functionals that we examine explicitly in this class all have the Killing-Cartan geometry as an unstable critical point. We suspect that in general they may all be concave about the Killing-Cartan geometry for the same reason as found in [7], who examined the behavior of individual diagrams contributing to the functionals (see their Appendix C and our Appendix B).
Performing the variation of the various functionals (in an orthonormal basis of the metric) yields the following "stress-energy" tensors
\[\begin{split}& T_{ij}=R_{ij}-\frac{1}{2}R\delta_{ij}\\ & T_{ij}^{(0)}=2RR_{ij}+2\nabla^{k}\nabla_{k}(R\delta_{ij})-2 \nabla_{i}\nabla_{j}R-\frac{1}{2}R^{2}\delta_{ij}\\ & T_{ij}^{(2)}=2R_{ikjl}R^{kl}+\nabla^{k}\nabla_{k}R_{ij}+\frac{ 1}{2}\nabla^{k}\nabla_{k}(R\delta_{ij})-\nabla_{i}\nabla_{j}R-\frac{1}{2}R_{kl }R^{kl}\delta_{ij}\\ & T_{ij}^{(4)}=2R_{iklm}R_{j}^{\;\;klm}+4R_{ikjl}R^{kl}+4\nabla^{k }\nabla_{k}R_{ij}-2\nabla_{i}\nabla_{j}R-4R_{ik}R_{j}^{k}-\frac{1}{2}R_{klmn} R^{klmn}\delta_{ij}\end{split}\]
It is important to note that using our definition of \(R_{ijkl}\) (see Appendix A) leads to different indices being contracted in the Riemann-Ricci terms than those found in [16]. Muto [15] proved that a given metric is a critical point iff
\[\mathcal{T}_{ij}=T_{ij}+\alpha T_{ij}^{(0)}+\beta T_{ij}^{(2)}+\gamma T_{ij}^{ (4)}=\Lambda\delta_{ij} \tag{3.2}\]
where \(\Lambda\) is a real constant depending on the parameters of the problem, namely \(m\), \(r\), and the coupling constants. Note that, to reduce the calculation, we may move all the terms already proportional to the metric to the RHS. Further, as we are considering homogeneous spaces
the covariant derivatives of the Ricci scalar vanish. The "stress-energy" tensors simplify to
\[T_{ij}=R_{ij}\] \[T^{(0)}_{ij}=2RR_{ij}+2R\nabla_{k}\nabla_{k}(\delta_{ij})\] \[T^{(2)}_{ij}=2R_{ikjl}R_{kl}+\nabla_{k}\nabla_{k}(R_{ij})+\frac{ 1}{2}R\nabla_{k}\nabla_{k}(\delta_{ij})\] \[T^{(4)}_{ij}=2R_{iklm}R_{jklm}+4R_{ikjl}R_{kl}+4\nabla_{k}\nabla _{k}(R_{ij})-4R_{ik}R_{kj}\]
as we are working in an orthonormal basis, we are free to lower all the indices. Summation over repeated indices is still implied. The last simplification we can do is to compute the covariant Laplacians. The Laplacian of the metric vanishes
\[\nabla_{k}\nabla_{k}(\delta_{ij})=-\nabla_{k}(\Gamma_{lki}\delta_{lj}+\Gamma_{lkj}\delta_{il})=\nabla_{k}(\Gamma_{ikj}+\Gamma_{jki})=0 \tag{3.3}\]
which follows from metric compatibility of the connection.
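To make the curvature computations concrete, here is a minimal numerical sketch (ours) of the frame machinery: given the structure constants of an orthonormal left-invariant frame, it assembles the connection via the frame Koszul formula and returns the Riemann and Ricci tensors. The sign and index conventions follow a standard textbook choice and may differ from the paper's Appendix A; at the Killing-Cartan point of \(\mathfrak{su}(4)\) it reproduces the stated value \(R=15\).

```python
import numpy as np

def su_basis(N):
    """Hermitian generalized Gell-Mann basis with tr(G_a G_b) = delta_ab / 2."""
    E = lambda j, k: np.eye(N)[:, [j]] @ np.eye(N)[[k], :]
    G = [(E(j, k) + E(k, j)) / 2 for j in range(N) for k in range(j + 1, N)]
    G += [-1j * (E(j, k) - E(k, j)) / 2 for j in range(N) for k in range(j + 1, N)]
    G += [np.diag([1.] * l + [-l] + [0.] * (N - l - 1)) / np.sqrt(2 * l * (l + 1))
          for l in range(1, N)]
    return G

def frame_curvature(C):
    """Curvature of a left-invariant metric in an orthonormal frame with constant
    structure constants C[a,b,c], where [e_a, e_b] = C_abc e_c."""
    # Koszul formula in the frame: Gamma_abc = (C_abc - C_bca + C_cab) / 2
    Gam = 0.5 * (C - np.einsum('bca->abc', C) + np.einsum('cab->abc', C))
    # R(e_a, e_b) e_c = nabla_a nabla_b e_c - nabla_b nabla_a e_c - nabla_[e_a,e_b] e_c
    Riem = (np.einsum('bcd,ade->abce', Gam, Gam)
            - np.einsum('acd,bde->abce', Gam, Gam)
            - np.einsum('abd,dce->abce', C, Gam))
    Ric = np.einsum('abca->bc', Riem)
    return Riem, Ric, np.trace(Ric)

G = su_basis(4)
n = len(G)
C = np.real(np.array([[[2j * np.trace(G[c] @ (G[a] @ G[b] - G[b] @ G[a]))
                        for c in range(n)] for b in range(n)] for a in range(n)]))
print(frame_curvature(C)[2])   # 15.0 = N^2 - 1 at the Killing-Cartan point, as stated
```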
The works of Freedman and Zini mainly considered loss functionals generated from tri-variate perturbed Gaussians, although they also studied the Ricci scalar. An example of the non-Gaussian functionals they considered is the Euclidean integral
\[F[G;\kappa]=\int_{\mathbb{R}^{3(4^{d}-1)}}dy_{1}dy_{2}dy_{3}e^{\left[-\kappa\sum_{i=1}^{3}g_{ab}y_{i}^{a}y_{i}^{b}+C_{abc}y_{1}^{a}y_{2}^{b}y_{3}^{c}\right]}\,. \tag{3.4}\]
The use of three integration variables is necessary to construct a non-vanishing scalar from the structure constants. Considering perturbed Gaussian integrals allows for a systematic approach to the perturbative calculation. In this way a series of trivalent tensor networks is obtained that determines the loss functional at a given order in perturbation parameter \(\kappa\).
As a simple, but illustrative choice, we choose to analyze in detail the loss functional with the addition of the Gauss-Bonnet sum of curvature scalars:
\[\mathcal{L}_{\text{GB}}(\gamma)=R+\gamma(\mathcal{R}_{0}-4\mathcal{R}_{2}+\mathcal{R}_{4})\,. \tag{3.5}\]
Note that the Gauss-Bonnet term is not topological here, since the dimension of the space is \(4^{d}-1\geq 15\), never 4. This loss functional should demonstrate general properties of the larger family, as well as contrast with the choice in Eq.(3.4), since it depends on a generic combination of all possible contractions of up to four structure constants. A graphical representation of the types of terms is given in Appendix B. The diagrams help illuminate a few important points of contrast between the loss functionals in Eq.(3.1) compared to Eq.(3.4). First, the family of curvature terms depends on diagrams with only at most four structure constants in the contraction. In contrast, the perturbed Gaussian of [32] contains a series out to infinite order, which was computed up to terms of order six for the analysis. At the level of fourth order terms, the Gauss-Bonnet combination does not induce any special cancellation between diagrams appearing in \(\mathcal{R}_{0}\), \(\mathcal{R}_{2}\), and \(\mathcal{R}_{4}\). The individual curvature terms contain somewhat symmetric combinations of diagrams with a different relationship from that imposed by Eq.(3.4). This family of loss functionals is well-suited
to a geometrically illuminating study that may be able to connect KAQ structure to other known and interesting classes of metrics, including the naturally reductive metrics.
The equations of motion we need with the inclusion of the Gauss-Bonnet term are
\[\mathcal{T}_{ij}=R_{ij}+\gamma\Big{(}2RR_{ij}+2R_{iklm}R_{jklm}-4R_{lijk}R_{lk}- 4R_{ik}R_{kj}\Big{)}=\Lambda_{\rm GB}\delta_{ij}\,. \tag{3.6}\]
Although the Laplacian of the Ricci tensor will in general be non-vanishing, we do not need to compute it to determine the equations of motion for the Gauss-Bonnet term. Next, we apply the parameterizations from the earlier section to this equation of motion.
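Given curvature tensors in an orthonormal frame (for example from the sketch above), the Gauss-Bonnet stress tensor of Eq.(3.6) reduces to a few contractions; the helper below is our sketch, with index placement matching that equation and with the Laplacian terms dropped, since per the text they cancel in the Gauss-Bonnet combination. A metric is then critical iff the returned \(T\) is proportional to the identity.

```python
import numpy as np

def gb_stress(Riem, Ric, R, gamma):
    """T_ij of Eq.(3.6) in an orthonormal frame; Riem and Ric are numpy arrays of
    shape (n,n,n,n) and (n,n), R is the scalar curvature."""
    return (Ric
            + gamma * (2 * R * Ric
                       + 2 * np.einsum('iklm,jklm->ij', Riem, Riem)
                       - 4 * np.einsum('lijk,lk->ij', Riem, Ric)
                       - 4 * np.einsum('ik,kj->ij', Ric, Ric)))

# Criticality check: T should equal Lambda_GB times the identity. By bi-invariance,
# the Killing-Cartan point of su(4) passes this test for any gamma.
```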
## 4 Results for SU(4)
The equations of motion allow us to search for solutions in the spaces of 2-parameter KAQ metrics defined in Section 2. These parametrizations include both naturally and non-naturally reductive metrics. To yield a solution, a given \(\omega\) must generate a diagonal stress energy tensor satisfying
\[\mathcal{T}_{i}\equiv\mathcal{T}_{ii}=\Lambda_{\rm GB} \tag{4.1}\]
where \(1\leq i\leq 15\) and \(\Lambda_{\rm GB}\) is real. Thus for a given parametrization, we first determine the number of independent \(\mathcal{T}_{i}\). By setting \(s=at\), we simultaneously plot the independent \(\mathcal{T}_{i}\), allowing us to vary \(a\) and check if any critical points appear for non-zero values of \(t\).
By making contour plots of the loss function, we systematically find potential critical points and check if they are indeed true critical points. Further, while we do not have access to the second variation of the loss functional, we still obtain information about the second derivative by comparing contour plots which agree along the line \(t=s\). Doing so affords us a glimpse at the stability of the loss functional around certain Jensen type critical points.
### 4.1 FKAQ critical points
Here we collect our results for the two FKAQ parametrizations given in Section 2. These include the naturally reductive parametrization \(\Omega_{\rm BP}\) defined in Eq.(2.43) and the non-naturally reductive parametrization \(\Omega_{\rm AB}\) defined in Eq.(2.45). Using two parametrizations allows us to compare the type of critical metrics that appear in the naturally reductive vs. non-naturally reductive cases.
We show results with the help of two types of figures. First, to demonstrate the technique, we plot the equations of motion for an example, as a function of \(t\) at a fixed ratio \(a=s/t\) of the weighting parameters in the metric. Then, to visualize the space of solutions in the weighting parameters, we show contour plots of the loss functionals for varying \(s,t\). It is important to stress that not all critical points that appear in these plots are critical in the space of left-invariant metrics. All that can be learned from these plots is whether they are critical in the considered parametrization space. In order to determine true criticality we always appeal to the equations of motion.
#### 4.1.1 Biased penalty metric
There are only three independent \(\mathcal{T}_{i}\) for the biased penalty metric ansatz, \(\Omega_{\rm BP}\) from Eq.(2.43). The equations that must be satisfied are
\[\mathcal{T}_{1}=e^{\frac{1}{9}(-7a-10)t}\Big{(}-3\gamma e^{\frac{at}{3}}+18.75\gamma e^{\frac{4}{9}(a+1)t}+9\gamma e^{\frac{2}{3}(a+2)t}-1.125\gamma e^{\frac{1}{9}(2a+11)t}+1.125\gamma e^{\frac{1}{9}(8a+5)t} \tag{4.2}\] \[+0.375\gamma e^{\frac{1}{9}(10a+13)t}+0.75e^{\frac{5}{9}(a+1)t}+0.25e^{\frac{1}{9}(7a+13)t}-1.125\gamma e^{t/3}\Big{)}\] \[=\Lambda_{\rm GB},\]
\[\mathcal{T}_{4}=e^{\frac{1}{9}(-10a-7)t}\Big{(}-1.125\gamma e^{\frac{at}{3}}+18.75\gamma e^{\frac{4}{9}(a+1)t}+1.125\gamma e^{\frac{1}{9}(5a+8)t}-1.125\gamma e^{\frac{1}{9}(11a+2)t}\] \[+0.375\gamma e^{\frac{1}{9}(13a+10)t}+9\gamma e^{\frac{2}{3}(2at+t)}+0.75e^{\frac{5}{9}(a+1)t}+0.25e^{\frac{1}{9}(13a+7)t}\] \[-3\gamma e^{t/3}\Big{)}=\Lambda_{\rm GB},\]
\[\mathcal{T}_{7}=e^{-\frac{10}{9}(a+1)t}\Big{(}2\gamma e^{\frac{2at}{3}}+1.5\gamma e^{\frac{1}{3}(a+1)t}+51.5\gamma e^{\frac{8}{9}(a+1)t}+3\gamma e^{\left(a+\frac{4}{3}\right)t}-18.75\gamma e^{\frac{1}{9}(4a+7)t} \tag{4.3}\] \[-0.75\gamma e^{\frac{1}{9}(5a+11)t}-18.75\gamma e^{\frac{1}{9}(7a+4)t}-0.75\gamma e^{\frac{1}{9}(11a+5)t}+3\gamma e^{\frac{4at}{3}+t}\] \[-0.5e^{\frac{1}{9}(5a+8)t}-0.5e^{\frac{1}{9}(8a+5)t}+2e^{at+t}+2\gamma e^{2t/3}\Big{)}\] \[=\Lambda_{\rm GB}.\]
With only three independent diagonal elements, this parametrization is only slightly more complicated than the Jensen type. For Jensen metrics there are only 2 independent \(\mathcal{T}_{ii}\). Recall that we want to find non-Jensen type critical points, as for larger numbers of qubits Jensen metrics are highly non-local and do not give a typical many-body structure.
In Figure 1 we plot the set of \(\mathcal{T}_{i}\) for two different values of \(a\). The left plot shows that for most values of \(a\) (here 0.1), the only solution that appears is the trivial non-KAQ solution \(s=at=0\). But for special values of \(a\), non-trivial solutions do appear. We unsurprisingly find Jensen type critical points for \(a=1\), which is expected due to the simplicity of the equations of motion. Figure 2, however, shows an example of a non-Jensen type critical point, of which several exist.
In Fig 3 we present a holistic view of the critical points that appear through the use of contour plots of the loss functional. We see that even for the Ricci scalar, non-Jensen naturally reductive critical points are present. The shapes of the contours near the origin show why evolutionary searches can miss out on these more structured critical points, as appears to have happened in [32]. Evolutionary searches begin with the choice of an initial parent point. The natural choice here is the Killing-Cartan metric, or a randomly selected nearby point. The search then casts a small net around the parent point, and the loss function is computed for each of these points. The point with the lowest value serves as the new parent point in the following step. The topography of these plots demonstrates that such a method can fail to find the saddle points, instead settling on the Jensen-type critical point(s).
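For concreteness, here is a minimal sketch (ours) of the kind of evolutionary hill-descent just described, together with a one-dimensional toy landscape showing how a small search radius keeps the walker in the basin of its starting point even when a deeper minimum exists elsewhere. The toy loss function and all parameters are illustrative, not taken from [32].

```python
import numpy as np

def evolutionary_descent(loss, x0, steps=500, radius=0.05, pop=32, seed=0):
    """Cast a small net of candidate points around the parent; keep the lowest-loss
    candidate as the new parent. A purely local hill-descender, as described above."""
    rng = np.random.default_rng(seed)
    parent = np.asarray(x0, dtype=float)
    best = loss(parent)
    for _ in range(steps):
        cands = parent + rng.normal(scale=radius, size=(pop, parent.size))
        vals = np.array([loss(c) for c in cands])
        if vals.min() < best:
            parent, best = cands[vals.argmin()], vals.min()
    return parent, best

# Toy double well: the minimum near x = -1 is deeper, but a walker started at
# x = 0.9 with a small net radius settles in the shallower basin near x = +1.
toy = lambda x: (x[0]**2 - 1)**2 + 0.3 * x[0]
print(evolutionary_descent(toy, [0.9]))
```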
Figure 1: Here the equations of motion, Eq.(4.2), are plotted for the biased penalty metric ansatz \(\Omega_{\rm BP}\), with \(\gamma=1\) in the Gauss-Bonnet loss functional, Eq.(3.5). The left plot shows that Jensen-type critical points exist, and in fact a new solution appears for \(t<0\). The second plot demonstrates that for most \(a\), non-trivial solutions do not exist.
Figure 2: Here the equations of motion for \(\Omega_{\rm BP}\) are plotted for \(a\approx-2.06\). We see that there is a non-Jensen type solution for \(t\approx-1.67\).
#### 4.1.2 Abelian breakdown metric
Moving to our other reduced FKAQ parametrization, \(\Omega_{\rm AB}\) from Eq.(2.45), we plot the set of independent \(\mathcal{T}_{i}\) in Fig 4, where there are 4 such elements. We find solutions only for \(a=1\); the potential meeting point in the left plot never actually becomes a crossing. In this case the solution is not of Jensen type, even though the corresponding critical metric only has two distinct weights. The solution does not separately scale the components of a Cartan decomposition; instead, \(\mathfrak{M}\) will have two separate scales contained within.
We have plotted contour plots in Fig 5, where we find only one non-trivial solution. Comparing to the biased penalty example we may make two comments. First, there are far fewer critical points along these directions in the space of left-invariant metrics. Second,
Figure 4: Plotted are the equations of motion for the Abelian breakdown metric ansatz, \(\Omega_{\rm AB}\) of Eq.(2.45), which is non-naturally reductive. Again, we have taken \(\gamma=1\) for the amplitude of the Gauss-Bonnet term. The first plot shows the behavior for a typical \(a\); no non-trivial solutions exist. The second plot shows the existence of a non-Jensen type critical point for \(a=1\) and \(t\approx-1.45\).
Figure 3: Contours of the loss functional are plotted for \(\Omega_{\rm BP}\) for \(\gamma=0\) (the Ricci scalar only) and \(\gamma=1\). All red points marked on the plots are critical in the space of left-invariant metrics with fixed determinant. Critical points residing on the line \(t=s\) are Jensen type.
the critical point we obtained is rather interesting. This metric differentiates the Cartan subalgebra from the other observables. While only a small step, this gives some evidence that curvature-based loss functionals may have KAQ critical points with many-body local structure.
### 4.2 UKAQ critical points
Turning to our UKAQ parametrization, \(\Omega_{\rm UKAQ}\) from Eq.(2.35), we may perform the same search for critical points. The equations of motion are much more complex for this example, where there are 6 independent \(\mathcal{T}_{i}\). While certain choices of \(a\) reduce the number of independent \(\mathcal{T}_{ii}\), only for \(a=1\) do we obtain non-trivial solutions to the equations of motion.
From Figure 6 we see that no critical points appear besides the Jensen type along the line \(t=s\). We take this as evidence that naturally reductive metrics are somewhat favored over non-naturally reductive ones. Notice that these critical Jensen metrics are exactly the same as those found in Figure 3. Comparing these plots lets us understand the stability of these solutions, at least in the space of naturally reductive metrics. There is evidence that the Jensen type critical point with \(s,t<0\) is a stable critical point, whereas the solution for positive \(t\) and \(s\) is clearly a saddle point.
Figure 5: Contours of the loss functional \(\mathcal{L}_{\rm GB}\) are plotted for \(\gamma=0\) (the Ricci scalar only) and \(\gamma=1\). The red points marked on the plots are critical points in the space of left-invariant metrics with fixed determinant. It is important to stress that the metrics along the line \(t=s\) are not of Jensen type for the \(\Omega_{\rm AB}\) parametrization.
We do not fully understand the propensity towards Jensen metrics in our examples. It could simply be a preference for naturally reductive metrics. There is however another possibility. At such a low dimension (two qubits) the Jensen metrics and penalty metrics are essentially the same. So the loss functionals investigated here may simply be favoring penalty structures; it would be enlightening to study \(\mathfrak{su}(8)\).
### 4.3 PKAQ critical points
For the final parametrization we investigate \(\Omega_{\rm PKAQ}\), defined in Eq.(2.41). We only find Jensen-type critical points, and note that this direction seems the least fruitful in the search for critical metrics.
Figure 6: Contours of the loss functionals are plotted for the \(\Omega_{\rm UKAQ}\) parametrization, Eq.(2.35). The red points located along the line \(t=s\) are critical points in the space of left-invariant metrics. All other points (green) are only critical in the considered parametrization space.
Figure 7: Contours are plotted for the \(\Omega_{\rm PKAQ}\) parametrization, Eq.(2.41). The red points indicate true critical points, which are all Jensen type. The green point, in the second plot, is only critical in the parametrization space.
## 5 Conclusions
In this work we have found critical points of curvature-based loss functionals that are KAQ and include metrics which are compatible with many-body physics. All of the critical points we have found are naturally reductive, although we cannot rule out the existence of non-naturally reductive KAQ metrics. We could not find any \(\mathrm{ad}\mathfrak{R}_{0}\) critical metrics which were not Jensen. We also have found that most critical points are saddle points, with the potential for a few stable Jensen critical points.
While we have only analyzed detailed examples for \(\mathfrak{su}(4)\), it is straightforward, although numerically intensive, to generalize these constructions to larger \(N\). What we have presented is evidence that the KAQ ansatz is a useful tool both for finding critical metrics among the large set of left-invariant metrics and for better understanding the structure of KAQ critical points. Even going to \(\mathfrak{su}(8)\) would be helpful in determining more about the structure of the critical points of curvature-based loss functionals. For example, with three qubits, penalty metrics are not Jensen, allowing for more exploration of what exactly drives the value of the loss function: the algebraic properties of the principal axes, or the structure of the weights?
These constructions for KAQ metrics we provide may also be used to explore a much broader family of functionals depending on curvature tensors. However, as this construction cannot find non-KAQ minima, we cannot study the relative frequency of KAQ vs non-KAQ. Ideally, one would like functionals with only KAQ minima, or related structure for \(N\neq 2^{d}\).
## 6 Acknowledgements
We thank Sarang Gopalakrishnan for bringing the Freedman and Zini papers to our attention and for initial discussions. The work of S. Prudhoe and S. Shandera was supported by NSF PHY-2310662. The work of R. Kumar was supported by NSF PHY-2207851.
|
2305.05580 | Fashion CUT: Unsupervised domain adaptation for visual pattern
classification in clothes using synthetic data and pseudo-labels | Accurate product information is critical for e-commerce stores to allow
customers to browse, filter, and search for products. Product data quality is
affected by missing or incorrect information resulting in poor customer
experience. While machine learning can be used to correct inaccurate or missing
information, achieving high performance on fashion image classification tasks
requires large amounts of annotated data, but it is expensive to generate due
to labeling costs. One solution can be to generate synthetic data which
requires no manual labeling. However, training a model with a dataset of solely
synthetic images can lead to poor generalization when performing inference on
real-world data because of the domain shift. We introduce a new unsupervised
domain adaptation technique that converts images from the synthetic domain into
the real-world domain. Our approach combines a generative neural network and a
classifier that are jointly trained to produce realistic images while
preserving the synthetic label information. We found that using real-world
pseudo-labels during training helps the classifier to generalize in the
real-world domain, reducing the synthetic bias. We successfully train a visual
pattern classification model in the fashion domain without real-world
annotations. Experiments show that our method outperforms other unsupervised
domain adaptation algorithms. | Enric Moreu, Alex Martinelli, Martina Naughton, Philip Kelly, Noel E. O'Connor | 2023-05-09T16:14:57Z | http://arxiv.org/abs/2305.05580v1 | Fashion CUT: Unsupervised domain adaptation for visual pattern classification in clothes using synthetic data and pseudo-labels
###### Abstract
Accurate product information is critical for e-commerce stores to allow customers to browse, filter, and search for products. Product data quality is affected by missing or incorrect information resulting in poor customer experience. While machine learning can be used to correct inaccurate or missing information, achieving high performance on fashion image classification tasks requires large amounts of annotated data, but it is expensive to generate due to labeling costs. One solution can be to generate synthetic data which requires no manual labeling. However, training a model with a dataset of solely synthetic images can lead to poor generalization when performing inference on real-world data because of the domain shift. We introduce a new unsupervised domain adaptation technique that converts images from the synthetic domain into the real-world domain. Our approach combines a generative neural network and a classifier that are jointly trained to produce realistic images while preserving the synthetic label information. We found that using real-world pseudo-labels during training helps the classifier to generalize in the real-world domain, reducing the synthetic bias. We successfully train a visual pattern classification model in the fashion domain without real-world annotations. Experiments show that our method outperforms other unsupervised domain adaptation algorithms.
Keywords: Domain adaptation, Synthetic data, Pattern classification.
## 1 Introduction
In 2021, 75% of EU internet users bought goods or services online [1]. One of the main drivers of increased e-commerce engagement has been convenience, allowing customers to browse and purchase a wide variety of categories and brands in a single site. If important product metadata is either missing or incorrect, it becomes difficult for customers to find products as the number of available products on e-commerce sites grows. Online stores typically offer a set of filters (e.g. pattern, color, size, or sleeve length) that make use of such metadata and help customers to find specific products. If such critical information is missing or incorrect then the product cannot be effectively merchandised. Machine learning
has been used for fashion e-commerce in recent works to analyze product images, e.g. clothes retrieval [2], outline detection [3], or finding clothes that match an outfit [4]. In this paper, our prime interest is a visual classification task which consists of classifying patterns in catalog images of clothing. Patterns describe the decorative design of clothes, and they are important because they are widely used by customers to find products online. Figure 1 shows fashion visual pattern examples in the synthetic and real-world domains.
Fashion pattern classification is challenging. Fashion images often include models in different poses with complex backgrounds. Achieving high performance requires large annotated datasets [5][6][7]. However, public datasets are only available for non-commercial use or do not cover the specific attributes or diversity we require, while generating private datasets with fine-grained and balanced annotations is expensive. In addition, publicly available fashion datasets typically have underrepresented classes with only a few samples. For example, in the Deep Fashion dataset [5] there are 6633 images with the "solid" pattern while only 242 images contain the "lattice" pattern. Classes that are underrepresented during training achieve lower per-class accuracy, which in turn reduces the overall performance.
We address these problems by generating artificial samples using Synthetic Data Generation (SDG) techniques. Synthetic data has shown promising results in domains where few images are available for training [8][9]. The main advantage of synthetic data is that an unlimited number of artificial images can be generated, with labels produced automatically by the 3D engine when rendering the images.
Figure 1: Synthetic samples from our Zalando SDG dataset (first row) and real samples from the DeepFashion dataset [5] (second row) representing the striped, floral, and plain categories.
However, synthetic images are not a precise reflection of the real-world domain in which the model will operate. Computer vision models are easily biased by the underlying distribution on which they are trained [10]. Even if the synthetic images use lighting and textures that look realistic to humans, the model will tend to over-optimize against the traits of the synthetic domain and won't generalize well to real data.
We consider this problem in the context of unsupervised domain adaptation [11] by using the knowledge from another related domain where annotations are available. We assume that we have abundant annotated data on the source domain (synthetic images) and a target domain (real images) where no labels are available.
Unsupervised domain adaptation has shown excellent results when translating images to other domains [11]; nevertheless, translated images can't be readily used to train classification models because image features, such as patterns, are distorted during the translation step, since the translation model has no information about those features. Specifically, when complex patterns are shifted to a different domain, they can be distorted to the point that they no longer adhere to the original pattern label of the synthetic image. For example, when an image with the "camouflage" pattern is translated from the synthetic to the real domain, the pattern could be accidentally distorted into "floral".
In this paper, we introduce a new unsupervised domain adaptation approach that doesn't require groundtruth labels. First, we produce a synthetic dataset for fashion pattern classification using SDG that equally represents all the classes. Second, we jointly train a generative model and a classifier that will make synthetic images look realistic while preserving the class patterns. In the final stage of the training, real-world pseudo-labeled images are used to improve the model generalization toward real images. The contributions of this paper are as follows:
* We propose a novel architecture that performs the image translation task while jointly training a classification model.
* We outperform other state-of-the-art unsupervised domain adaptation algorithms in the visual fashion pattern classification task.
The remainder of the paper is organized as follows: Section 2 reviews relevant work; Section 3 explains our method; Section 4 presents our synthetic dataset and experiments, and Section 5 concludes the paper.
## 2 Related work
Synthetic data has been used extensively in the computer vision field. Techniques to generate synthetic datasets range from simple methods generating primitive shapes [12] to photorealistic rendering using game engines [13]. Although high-quality synthetic images can appear realistic to humans, they don't necessarily help computer vision models generalize to real-world images. Convolutional neural networks easily overfit on synthetic traits that are not present in the real world. This is addressed by using domain adaptation techniques that reduce the disparity between the synthetic and real domains. Some works approach domain adaptation by simply improving the realism aspect [14], or by pushing the randomization and distribution coverage at the source [15]. These approaches imply additional modeling effort and longer generation times per image, for example by relying on physically based renderers for higher photorealism, to make the synthetic data better match the real data distribution.
In the context of unsupervised domain adaptation, non-adversarial approaches consist of matching feature distributions in the source and the target domain by transforming the feature space to map the target distribution [16]. Gong et al. [17] found that gradually shifting the domains during training improved the method's stability. Recent methods are based on generative adversarial networks [18] because of their unsupervised and unpaired nature. Generative domain adaptation approaches rely on a domain discriminator that distinguishes the source and target domains [19] and updates the generator to produce better images. Our approach improves on existing adversarial approaches by optimizing a classifier alongside the generator, producing realistic data that retain the source category.
## 3 Fashion CUT
Our approach has two components: 1) An image translation network that generates realistic images. 2) A classifier that enforces the generated images to keep the class patterns. The overall architecture is shown in Figure 2.
Acquiring paired images from both domains is difficult in the fashion setting and incurs high costs. As such, we use Contrastive Unpaired
Figure 2: The proposed architecture includes a translation model (CUT) and a classifier model (ResNet50), which are optimized together via a common loss that ensures realistic images with reliable annotations. Pseudo-labeled real images are included in each mini-batch to improve the classifier generalization.
Translation (CUT) [20] for the image translation module. Synthetic images don't have to match the exact position or texture of real images in the dataset because we use an unpaired translation method. CUT learns a mapping that translates unpaired images from the source domain to the target domain. It operates on patches that are mapped to a similar point in learned feature space using an infoNCE [21] contrastive loss. In addition, CUT uses less GPU memory than other two-sided image translation models (e.g. CycleGAN) because it only requires one generator and one discriminator. By reducing memory usage, the joint training of an additional classifier becomes tractable on low cost GPU setups with less than 16GB of memory.
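For illustration, a minimal sketch of the patchwise InfoNCE idea underlying CUT is given below. This is not the authors' implementation; the function and variable names are ours, and the temperature value is an assumption.

```python
import torch
import torch.nn.functional as F

def patch_nce_loss(feat_src, feat_out, tau=0.07):
    """Patchwise InfoNCE in the spirit of CUT [20].

    feat_src: (N, C) encoder features of N patches from the input image.
    feat_out: (N, C) features of the same patch locations in the translated
              image. Patch i of the output should match patch i of the input
              (positive pair); all other input patches act as negatives.
    tau is an assumed temperature hyperparameter.
    """
    feat_src = F.normalize(feat_src, dim=1)
    feat_out = F.normalize(feat_out, dim=1)
    logits = feat_out @ feat_src.t() / tau       # (N, N) similarity matrix
    labels = torch.arange(logits.size(0), device=logits.device)
    return F.cross_entropy(logits, labels)       # diagonal = positive pairs
```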
While CUT produces realistic images, the class patterns can be lost or mixed with other classes since CUT doesn't enforce that these category features are consistent across the image translation. The generator's only objective is to produce realistic images that resemble the real-world domain, but it ignores the nature of each pattern. Any pattern distorted during the translation will impact the performance of a classifier trained on this synthetic data. Figure 3 showcases unsuccessful examples of mixed patterns by the generator, and Figure 4 shows successful translations using Fashion CUT.
In order to enforce stability in the generated patterns, we add a ResNet50 model that predicts the category of the images generated by CUT. The classifier is optimized alongside the CUT generator to fulfill both classification and translation tasks. Figure 5 shows how the classifier preserves the pattern features in comparison to vanilla CUT. Training both models simultaneously is faster
Figure 3: Synthetic images (first row) and unsuccessfully adapted images using CUT (second row) due to shifted patterns by the generator when not imposing class constraints.
and provides better results than training them separately. The generator loss function is given by:
\[\begin{split}\mathcal{L}_{G}=\lambda_{g}\,\mathcal{L}_{GAN}(G,D,X,Y)+\lambda_{c}\,\mathcal{L}_{classifier}(C)\\ +\lambda_{NCEX}\,\mathcal{L}_{NCEX}(G,D,X)+\lambda_{NCEY}\,\mathcal{L}_{NCEY}(G,D,Y)\end{split} \tag{1}\]

where \(\mathcal{L}_{GAN}(G,D,X,Y)\) is the adversarial generator loss and \(\mathcal{L}_{classifier}(C)\) is the cross-entropy loss of the classifier evaluated on the images produced by the generator. \(\mathcal{L}_{NCEX}(G,D,X)\) and \(\mathcal{L}_{NCEY}(G,D,Y)\) are the contrastive losses that encourage spatial consistency between each image and its translation in the two domains \(X\) and \(Y\), respectively. \(G\) is the generator model, \(D\) the discriminator model, \(X\) the real image, \(Y\) the synthetic image, and \(C\) the classification model. \(\lambda_{g}\), \(\lambda_{c}\), \(\lambda_{NCEX}\), and \(\lambda_{NCEY}\) are hyperparameters that control the weights of the adversarial, classification, and the two contrastive losses, respectively.
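A schematic combination of the four terms in Eq. (1) could look as follows. This is an illustrative sketch, not the original code: the adversarial term is written in a standard non-saturating GAN form (an assumption), the contrastive terms are passed in precomputed (e.g. from a patchwise InfoNCE as sketched above), and the default NCE weights are assumptions; the values \(\lambda_{g}=\lambda_{c}=0.1\) follow those reported in Section 4.2.

```python
import torch
import torch.nn.functional as F

def generator_objective(adv_logits, cls_logits, labels, nce_x, nce_y,
                        lam_g=0.1, lam_c=0.1, lam_ncex=1.0, lam_ncey=1.0):
    """Combine the four terms of Eq. (1) for one mini-batch.

    adv_logits: discriminator outputs on the translated (fake) images.
    cls_logits: classifier outputs on the same translated images.
    labels:     pattern labels inherited from the synthetic inputs.
    nce_x/nce_y: precomputed contrastive losses for the two domains.
    """
    # Adversarial term: push translated images to be judged as real.
    l_gan = F.binary_cross_entropy_with_logits(
        adv_logits, torch.ones_like(adv_logits))
    # Classification term: the translation must preserve the pattern class.
    l_cls = F.cross_entropy(cls_logits, labels)
    return lam_g * l_gan + lam_c * l_cls + lam_ncex * nce_x + lam_ncey * nce_y
```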
In our experiments we empirically choose to replace half of the synthetic mini-batch with images from the target domain. As real-world annotations are not available for generated images, we use pseudo-labels predicted by the classifier. The model suffers from the cold start problem when introducing pseudo-labels in the early epochs because the classifier struggles to converge. We found that the classifier requires at least 1 epoch of synthetic samples in order to generate reliable pseudo-labels for real-world images. We obtained the best results when enabling pseudo-labels at the end of epoch 2.
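The pseudo-label schedule described above might be implemented along the following lines; this is an illustrative sketch with assumed names, where the two-epoch warm-up matches the value reported in the text.

```python
import torch

def build_mixed_batch(syn_imgs, syn_labels, real_imgs, classifier,
                      epoch, warmup_epochs=2):
    """Replace half of the synthetic mini-batch with pseudo-labeled real
    images once the classifier has warmed up (after epoch 2, as reported)."""
    if epoch < warmup_epochs:          # avoid the cold-start problem
        return syn_imgs, syn_labels
    half = syn_imgs.size(0) // 2
    with torch.no_grad():
        pseudo = classifier(real_imgs[:half]).argmax(dim=1)
    imgs = torch.cat([syn_imgs[half:], real_imgs[:half]])
    labels = torch.cat([syn_labels[half:], pseudo])
    return imgs, labels
```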
Figure 4: Synthetic images (first row) and adapted domain images using Fashion CUT (second row).
## 4 Experiments
This section describes the synthetic dataset we generated and the two experiment setups used to evaluate Fashion CUT.
### Zalando SDG dataset
The Zalando SDG dataset is composed of 31,840 images of 7 classes: plain, floral, striped, dotted, camouflage, gradient, and herringbone. The dataset has been generated using Blender, an open-source 3D computer-graphic software [22]. We relied on a basic set of professionally modeled 3D objects from CGTrader representing a variety of fashion silhouettes (e.g. shirt, dress, trousers) and implemented a procedural material for each of the 7 target classes. Each procedural material is implemented as a Blender shader node, where multiple properties can
Figure 5: Comparison of CUT and Fashion CUT image translation. Note that the annotations (gradient, striped, dotted) are preserved when using Fashion CUT.
be exposed and controlled via the Blender Python API. Examples of such properties include pattern scale, color or color pairing, orientation, and image texture. This setup allows an arbitrary number of different images to be generated programmatically for each 3D object and class pair. We randomized background, lighting, and camera position, as seen in Figure 6. We didn't use physically based renderers, as those are more resource-intensive; instead, we traded off rendering accuracy for speed and adopted the real-time Blender Eevee render engine [23].
The procedural materials can be applied to any new 3D objects. As such they provide a powerful generalized approach to data creation, and the generated images do not require any manual human validation as long as the procedural randomization guarantees that each possible output belongs to the expected target domain class.
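A minimal sketch of this kind of property randomization through the Blender Python API is shown below. The node and socket names ("Pattern", "Scale", "Color") are hypothetical, since the actual shader graphs are not published; only the bpy calls themselves are standard.

```python
import random
import bpy  # Blender's Python API; run inside Blender

def randomize_pattern(material_name):
    """Randomize exposed inputs of a procedural pattern material.
    "Pattern", "Scale" and "Color" are hypothetical node/socket names."""
    mat = bpy.data.materials[material_name]
    group = mat.node_tree.nodes["Pattern"]
    group.inputs["Scale"].default_value = random.uniform(0.5, 2.0)
    group.inputs["Color"].default_value = (
        random.random(), random.random(), random.random(), 1.0)

def render_sample(filepath):
    """Render one image with the real-time Eevee engine."""
    scene = bpy.context.scene
    scene.render.engine = 'BLENDER_EEVEE'
    scene.render.filepath = filepath
    bpy.ops.render.render(write_still=True)
```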
### Evaluation on Zalando SDG dataset
In our experiments we train end-user pattern classification models using both 31,840 synthetic fashion images (the source domain, which includes groundtruth labels) and 334,165 real-world fashion images (the target domain, which has no groundtruth labels and is used solely to train our domain adaptation transformation). We evaluate the performance of the algorithms using a validation set and a test set composed of 41,667 annotated real images each. The metric used is accuracy, and all algorithms use a ResNet50 [24] as the classifier. Fashion CUT is optimized using Adam with learning rate \(10^{-5}\), \(\lambda_{g}=0.1\), and \(\lambda_{c}=0.1\) for \(N=5\) epochs.
In Table 1, we compare the performance of domain adaptation algorithms trained only on our 31,840 synthetically generated dataset and evaluated on the 41,667 real fashion images.
First, we measured the performance of training without domain adaptation, i.e., with the classifier trained only on synthetic images. The performance was poor because the model had no information about real-world images.
Second, we evaluate other domain adaptation algorithms on Zalando SDG in the fashion domain. All experiments were performed in the environment provided
Figure 6: For each render we start with a provided 3D object, add environment and spot lights, apply a procedural material and then randomize its properties (e.g. colors, scale).
by Jiang et al. [27]. Our approach outperforms the other algorithms for the pattern classification task. Finally, we found that using pseudo-labels improves the results with minor changes in the training.
### Synthetic dataset size
We explore the number of synthetic images required to successfully train our unsupervised domain adaptation algorithm. For this experiment we train our model using 10,000 unlabeled real images while varying the number of synthetic images. Figure 7 shows that Fashion CUT's performance benefits from large synthetic datasets. We found that at least 5,000 synthetic images are required to outperform the other algorithms in visual pattern classification.
Table 1: Comparison of unsupervised domain adaptation algorithms on our Zalando SDG dataset. The metric used is accuracy.

| Method | Accuracy |
| --- | --- |
| No adaptation | 0.441 |
| BSP [25] | 0.499 |
| MDD [26] | 0.540 |
| AFN [16] | 0.578 |
| Fashion CUT (ours) | 0.613 |
| Fashion CUT with pseudo-labels (ours) | **0.628** |
Figure 7: Evaluation of Fashion CUT with varying amounts of the Zalando SDG dataset and 10,000 unlabeled real images. Performance is measured as accuracy.
## 5 Conclusions
Combining synthetic data generation with unsupervised domain adaptation can successfully classify patterns in clothes without real-world annotations. Furthermore, we found that attaching a classifier to an image translation model can enforce label stability, thus improving performance. Our experiments confirm that Fashion CUT outperforms other domain adaptation algorithms in the fashion domain. In addition, pseudo-labels proved to be beneficial for domain adaptation in the advanced stages of training. As future work, we will explore the impact of synthetic fashion data in a semi-supervised setup. We hope this study will help establish 3D rendering as a replacement for human annotations.
|
2301.09895 | The emergence of low-frequency dual Fano resonances in chiral twisting
metamaterials | In the current work, through a finite element analysis, we demonstrate that a
configuration of chiral cells having syndiotactic symmetry provides dual Fano
resonances at low frequency. From the phononic dispersion and transmission
response, we compare the signature provided by a composite made of chiral cells
to the ones of homogeneous medium, isotactic nonchiral, and isotactic chiral
beams. The study results in an innovative design of a mechanical metamaterial
that induces the Fano resonance at low frequency with a relatively high quality
factor. This might be a significant step forward for mechanical wave filtering
and detection. Performances have been evaluated using a sensor that will be
implemented as a thermometer. | Brahim Lemkalli, Muamer Kadic, Youssef El Badri, Sébastien Guenneau, Abdellah Mir, Younes Achaoui | 2023-01-24T10:11:47Z | http://arxiv.org/abs/2301.09895v1 | # The emergence of low-frequency dual Fano resonances in chiral twisting metamaterials
###### Abstract
In the current work, through a finite element analysis, we demonstrate that a configuration of chiral cells having syndiotactic symmetry provides dual Fano resonances at low frequency. From the phononic dispersion and transmission response, we compare the signature provided by a composite made of chiral cells to the ones of homogeneous medium, isotactic nonchiral, and isotactic chiral beams. The study results in an innovative design of a mechanical metamaterial that induces the Fano resonance at low frequency with a relatively high quality factor. This might be a significant step forward for mechanical wave filtering and detection. Performances have been evaluated using a sensor that will be implemented as a thermometer.
Twisting metamaterials, Fano resonance, Temperature sensor
## I Introduction
In recent years, the emergence of composite-structured materials has heralded significant advancements in mechanical engineering [1]. As a result, new generations of man-made materials, known as "metamaterials," have been created, allowing mechanical behaviors to be tailored with characteristics beyond the intrinsically known ones [2]. From an elastodynamic viewpoint, these allow the manipulation and control of acoustic wave propagation by two mechanisms, namely local resonance and Bragg scattering [3; 4]. Besides, mechanical metamaterials are well known for their various exotic properties in the static regime, including negative Poisson's ratio [5], flexibility [6], and twist conversion [7; 8], which leads to a "dynamic paradigm" used today in a wide range of applications. For instance, auxetic metamaterials were proposed in order to enhance seismic shielding against surface waves [9]. Moreover, metamaterials with a twist can exhibit a distinct feature called acoustic activity, which converts the linear polarization of a transverse wave into circular polarization [10]. Recently, twisting metamaterials demonstrated the conversion of longitudinal waves into twist waves [11; 12].
In general, local resonance is caused by the coupling between a discrete resonance and a continuum of states, which produces a peak at the resonance frequency followed or preceded by an anti-resonance dip. This mechanism is a consequence of constructive and destructive interference, respectively, previously reported in the field of optics [13]. Since its discovery more than 60 years ago [13], the prominent Fano resonance has piqued the interest of scientists due to its asymmetric nature, which is exploited in relevant applications [14] such as filtering [15] and detection [16].
As a mechanical counterpart, this sort of resonance has gained prominence [17]. Several devices based on mechanical Fano resonance have been developed in recent years [18], including concentrated pipes [19], Helmholtz resonators [20], and phononic crystals [21; 22; 23]. However, the dimensions of these structures, notably phononic crystals, are comparable to or even larger than the operating wavelengths; moreover, the Fano resonance occurs in only one operational frequency range. Multi-band systems with sub-wavelength dimensions and a high quality factor at low frequencies remain a major challenge for the development of multi-band and multi-functional devices [21]. Dual Fano resonators for low frequencies have recently been developed, employing an array made up of two types of unit cells containing multiple cavities, each with its own specific set of characteristics [24]. These are based on the emergence of acoustic metamaterials [25] with dimensions smaller than the wavelength, leading to exceptional elastic wave manipulation abilities.
In this study, we leverage the design of a metamaterial with a twist to generate dual Fano resonances at low frequency, inspired by chiral tacticity in metamaterials [26]. In Section II, we demonstrate numerically that a chiral syndiotactic cell generates local resonance. By connecting two cells in such a way that the contact plane between the two cells forms a mirror, the Fano resonance fingerprint is the direct consequence of
2302.01741 | Disorder in interacting quasi-one-dimensional systems: flat and
dispersive bands | We investigate the superconductor-insulator transition (SIT) in disordered
quasi-one dimensional systems using the density-matrix renormalization group
method. Focusing on the case of an interacting spinful Hamiltonian at
quarter-filling, we contrast the differences arising in the SIT when the parent
non-interacting model features either flat or dispersive bands. Furthermore, by
comparing disorder distributions that preserve or not SU(2)-symmetry, we unveil
the critical disorder amplitude that triggers insulating behavior. While
scaling analysis suggests the transition to be of a
Berezinskii-Kosterlitz-Thouless type for all models (two lattices and two
disorder types), only in the flat-band model with Zeeman-like disorder the
critical disorder is nonvanishing. In this sense, the flat-band structure does
strengthen superconductivity. For both flat and dispersive band models, i) in
the presence of SU(2)-symmetric random chemical potentials, the
disorder-induced transition is from superconductor to insulator of singlet
pairs; ii) for the Zeeman-type disorder, the transition is from superconductor
to insulator of unpaired fermions. In all cases, our numerical results suggest
no intermediate disorder-driven metallic phase. | Mi-Ji Liang, Yong-Feng Yang, Chen Cheng, Rubem Mondaini | 2023-02-03T14:04:11Z | http://arxiv.org/abs/2302.01741v2 | # Disorder in interacting quasi-one-dimensional systems: flat and dispersive bands
###### Abstract
We investigate the superconductor-insulator transition (SIT) in disordered quasi-one dimensional systems using the density-matrix renormalization group method. Focusing on the case of an interacting spinful Hamiltonian at quarter-filling, we contrast the differences arising in the SIT when the parent non-interacting model features either flat or dispersive bands. Furthermore, by comparing disorder distributions that preserve or not SU(2)-symmetry, we unveil the critical disorder amplitude that triggers insulating behavior. While scaling analysis suggests the transition to be of a Berezinskii-Kosterlitz-Thouless type for all models (two lattices and two disorder types), only in the flat-band model with Zeeman-like disorder the critical disorder is nonvanishing. In this sense, the flat-band structure does strengthen superconductivity. For both flat and dispersive band models, i) in the presence of SU(2)-symmetric random chemical potentials, the disorder-induced transition is from superconductor to insulator of singlet pairs; ii) for the Zeeman-type disorder, the transition is from superconductor to insulator of unpaired fermions. In all cases, our numerical results suggest no intermediate disorder-driven metallic phase.
Footnote †: These authors contributed equally to this work.
## I Introduction
Band dispersion naturally affects the physics of quantum systems. Compared to systems with regular dispersive bands, those exhibiting flat bands support abundant phenomena such as topological insulating/superconducting physics, various edge states, and exotic superfluid phases. In the noninteracting case, a purely flat band has constant energy as a function of quasimomentum. For a particle loaded in a flat band, the high degeneracy causes it to localize in a compact form within a few sites, whose geometry depends on the details of the Hamiltonian [1; 2; 3; 4]. Any finite interaction will be much larger than the bandwidth, leading to rich strongly-correlated physics at any value of the interaction strength.
Among the many interesting aspects of such systems, one central interest lies in the interplay of the flat band structure and superconductivity [5; 6; 7; 8; 9; 10; 11; 12]. Studies on topological models suggest that isolated flat bands have a much higher superconducting transition temperature [5; 6; 7]. In lattice models with flat bands, preformed pairs dominate transport even above the critical temperature of the transition to a superfluid state [8]. Compared to a standard two-leg fermionic ladder, recent work argues that the Creutz lattice [13], exhibiting a flat dispersion in the non-interacting regime, has a longer-ranged pairing correlation function, suggesting more robust pairing and superconductivity [9].
One way to further probe whether pairing and superconductivity are enhanced in systems with flat dispersion is to estimate their robustness against disorder. In the absence of interactions, disorder generically localizes all single-particle eigenstates and induces Anderson localization [14]; in flat-band systems, such localization phenomenon still occurs but with characteristic critical exponents that depend on specific details [15; 16], including the existence of coupling to dispersive bands [17].
At high energies, the interplay between disorder and interactions can lead to disorder-free flat-band localization at weak disorder, and to conventional disorder-induced many-body localization in the strong-disorder regime [18; 19; 20; 21; 22]. Turning to the low-energy but still interacting scenario, sufficient disorder destroys the phase coherence associated with superfluid/superconducting order, leading to a superconductor-insulator transition (SIT) [23]. However, whether the route to insulating behavior proceeds through the direct localization of Cooper pairs [24] or via the destruction of Cooper pairing followed by the standard localization of single electrons is still unsettled [25; 26]. Although disorder-induced ground-state transitions have been extensively studied for either spin or bosonic lattices [27; 28; 29; 30], the transition type and the universality class of the disorder-driven SIT in the presence of both charge and spin degrees of freedom remain elusive.
In this work, we aim to systematically investigate the disorder-induced SIT in a fully interacting setting from the perspective of how robust the pairing and superconductivity are against disorder in systems with either flat or dispersive bands. Additionally, we are also interested in the details of the SIT, including its universality class, via proper finite-size scaling from numerically exact calculations of systems with different sizes. Specifically, we focus on the attractive Fermi-Hubbard model on the Creutz lattice, with flat dispersion in the noninteracting regime, and carefully examine the pairing and superconductivity via energies, superfluid densities, and correlation functions. We benchmark our results against a regular two-leg ladder with dispersive bands to study how band dispersion affects pairing and superconductivity under the influence of disorder.
The rest of the paper is organized as follows. In Sec. II, we introduce the Hamiltonian on two lattice types with different dispersions, two types of disorders, and also our numerical method. Section III is devoted to the finite-size scaling of the superfluid weight, where we discuss the universality class of disorder-induced SIT. In Sec. IV, we further analyze the ground-state phase transition and corresponding phases via correlation functions. The summary of the results is presented in Sec. V.
## II Model and method
We first consider the attractive Hubbard model on the Creutz lattice described by the Hamiltonian:
\[\hat{\mathcal{H}}_{C}= -\mathrm{i}t\sum_{j,\sigma}\left(\hat{c}^{A\dagger}_{j,\sigma} \hat{c}^{A}_{j+1,\sigma}-\hat{c}^{B\dagger}_{j,\sigma}\hat{c}^{B}_{j+1,\sigma} +\mathrm{H.c.}\right)\] \[-t\sum_{j,\sigma}\left(\hat{c}^{A\dagger}_{j,\sigma}\hat{c}^{B}_{ j+1,\sigma}+\hat{c}^{B\dagger}_{j,\sigma}\hat{c}^{A}_{j+1,\sigma}+\mathrm{H.c.}\right)\] \[+U\sum_{j,\alpha}\hat{n}^{\alpha}_{j,\uparrow}\hat{n}^{\alpha}_ {j,\downarrow}, \tag{1}\]
where \(\hat{c}^{\alpha\dagger}_{j,\sigma}\) (\(\hat{c}^{\alpha}_{j,\sigma}\)) creates (annihilates) a fermion with spin \(\sigma=\uparrow,\downarrow\) on the \(j\)-th unit cell with chain index \(\alpha=A,B\) [see cartoon in Fig. 1(a)]; \(\hat{n}^{\alpha}_{j,\sigma}=\hat{c}^{\alpha\dagger}_{j,\sigma}\hat{c}^{\alpha}_{j,\sigma}\) is the corresponding number-density operator. For comparison, we also examine the attractive Hubbard model on the regular two-leg ladder:
\[\hat{\mathcal{H}}_{L}= -t\sum_{j,\sigma}\left(\hat{c}^{A\dagger}_{j,\sigma}\hat{c}^{A}_{ j+1,\sigma}+\hat{c}^{B\dagger}_{j,\sigma}\hat{c}^{B}_{j+1,\sigma}+\mathrm{H.c.}\right)\] \[-t\sum_{j,\sigma}\left(\hat{c}^{A\dagger}_{j,\sigma}\hat{c}^{B}_{ j,\sigma}+\hat{c}^{B\dagger}_{j,\sigma}\hat{c}^{A}_{j,\sigma}+\mathrm{H.c.}\right)\] \[+U\sum_{j,\alpha}\hat{n}^{\alpha}_{j,\uparrow}\hat{n}^{\alpha}_{ j,\downarrow}\;, \tag{2}\]
where a schematic representation is shown in Fig. 1(b). For both models, the linear lattice size is \(L\), and the hopping amplitudes are proportional to \(t\) (in the Creutz lattice, intrachain hoppings gain a phase, being purely imaginary). The interactions are attractive, \(U<0\), wherein fermions with opposite spins form pairs to lower the total energy, further condensing to form a superfluid state, provided the parent state is metallic.
The Creutz lattice features a flat dispersion in the non-interacting case (\(U=0\)), while the two-leg ladder has dispersive bands. An immediate question that arises is how the interaction affects the band structure, which can be inferred from the pair-excitation spectrum obtained from many-body numerical calculations, as shown in Fig. 1. For the Creutz lattice, there are two highly degenerate bands at \(\pm 4t\), each with zero bandwidth at \(U=0\). Their bandwidth grows in the presence of finite interactions, but the two bands remain relatively narrow. On the contrary, the bandwidth of the two-leg ladder is much larger. Therefore, we expect that the difference in the band structure in the noninteracting case would also affect the pairing and superconductivity in the presence of interactions. Notice that the model on both lattices at half-filling displays a vanishing superfluid weight \(\mathcal{D}_{s}\); we focus on quarter-filling, \(\langle\hat{n}\rangle=\sum_{j,\alpha,\sigma}\langle\hat{n}^{\alpha}_{j,\sigma}\rangle/2L=1/2\), to ensure we start from a robust superfluid state. Besides that, in what follows, we fix the interaction strength at \(U=-8\) [\(t=1\) sets the energy scale], at which \(\mathcal{D}_{s}\) for the Creutz lattice is close to its maximum in clean cases [9].
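The single-particle flatness can be checked with a few lines of code. The sketch below diagonalizes the Bloch Hamiltonians implied by Eqs. (1) and (2) at \(U=0\) (assuming a standard Fourier convention, so overall signs may differ); the Creutz bands come out flat at \(\pm 2t\), consistent with the noninteracting pair bands sitting at \(\pm 4t\).

```python
import numpy as np

t = 1.0
ks = np.linspace(-np.pi, np.pi, 201)

def bands(hk):
    return np.array([np.linalg.eigvalsh(hk(k)) for k in ks])

# Bloch Hamiltonian of the Creutz lattice, Eq. (1) with U = 0.
creutz = lambda k: np.array([[ 2*t*np.sin(k), -2*t*np.cos(k)],
                             [-2*t*np.cos(k), -2*t*np.sin(k)]])
# Bloch Hamiltonian of the regular two-leg ladder, Eq. (2) with U = 0.
ladder = lambda k: np.array([[-2*t*np.cos(k), -t],
                             [-t, -2*t*np.cos(k)]])

print(np.ptp(bands(creutz), axis=0))  # ~[0, 0]: flat bands at +/- 2t
print(np.ptp(bands(ladder), axis=0))  # ~[4, 4]: dispersive bands
```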
The robustness of the superconductivity is estimated by examining the critical disorder to break the pairing coherence and corresponding superconducting state. We first consider the spin-independent random chemical potentials which do not break local singlet pairs:
\[\hat{\mathcal{H}}_{\mu}=\sum_{i}\mu_{i}\hat{n}_{i}, \tag{3}\]
where \(i\) labels a single site and \(\mu_{i}\in[-W,W]\) is taken from an uncorrelated, uniform distribution with disorder strength \(W\). Alternatively, random Zeeman-like fields introduce another kind of disorder
\[\hat{\mathcal{H}}_{h}=\sum_{i}h_{i}\hat{S}^{z}_{i} \tag{4}\]
with \(h_{i}\in[-W,W]\) and \(\hat{S}^{z}_{i}=\hat{n}_{i,\uparrow}-\hat{n}_{i,\downarrow}\). The latter
Figure 1: Pair excitation spectrum, \(E_{0}(N_{\uparrow}+1,N_{\downarrow}+1)-E_{0}(N_{\uparrow},N_{\downarrow})\), in the presence of the negative \(U\) for (a) the Creutz lattice and (b) a regular two-leg ladder. Here \(E_{0}(N_{\uparrow},N_{\downarrow})\) stands for the groundstate energy in a system with \(N_{\uparrow}\) spin-up and \(N_{\downarrow}\) spin-down fermions. Insets show the lattice geometry: the red ellipse denotes a unit cell labeled by \(j\), \(A\) and \(B\) label two legs. For the Creutz lattice, the arrows depict the sign of the hopping in the intrachain bonds. For the regular two-leg ladder, all hoppings are real without extra phases. Here the results are obtained from numerical calculations of systems with \(L=64\) with open boundary conditions.
breaks SU(2)-symmetry and tends to disassemble pairs (local or not).
To solve Eqs. (1) and (2) in the presence of disorder, we numerically employ the density matrix renormalization group (DMRG) [31; 32] method, which is extremely powerful in (quasi-)one-dimensional systems, to obtain the ground state of the different lattices, including in disordered settings. The two Hamiltonians have \(U(1)\) symmetry with conserved total particle number \(N_{\sigma}=\sum_{i}\langle\hat{n}_{i,\sigma}\rangle\) for each spin species \(\sigma\), even in the presence of disorder; thus we perform DMRG calculations in the sector with fixed good quantum numbers \(N_{\sigma}\). Observables such as the superfluid weight, pair binding energy, and correlation functions are computed to characterize the ground-state properties. In calculations aiming to obtain the superfluid weight, twisted boundary conditions are used; in the rest of the simulations, we implement open boundary conditions to reduce the computational cost. Up to 2000 DMRG kept states are used in all calculations, and the largest truncation error is about \(10^{-6}\). For disordered cases, all observables are obtained from the average over calculations of many disorder samples, as indicated in what follows [see Appendix A for a benchmark against exact results in small systems].
## III BKT scaling of Drude weight
The pairing of electrons is one of the necessary preconditions of superconductivity, which is a macroscopically coherent state of pairs. We estimate the pair formation via the singlet-pair binding energy
\[E_{b}\equiv E_{0}(N_{\uparrow}+1,N_{\downarrow}+1)+E_{0}(N_{\uparrow},N_{\downarrow})-2E_{0}(N_{\uparrow}+1,N_{\downarrow}). \tag{5}\]
A negative \(E_{b}\) in the thermodynamic limit denotes that the energy cost for adding two interacting particles (or holes, depending on the filling) with opposite spins is lower than that of two noninteracting ones. As a result, the system exhibits a tendency toward the singlet-pair formation to lower the total energy. We first display the pairing binding energy in the presence of random chemical potentials in Fig. 2. In this case, the binding energy \(E_{b}\) is negative at zero disorder, and remains so with \(W>0\), for both the Creutz lattice and regular two-leg ladder. Therefore, from the energetic consideration, fermions in the presence of random chemicals still tend to form pairs, despite the inclusion of disorder.
While useful for characterizing singlet-pair formation, the binding energy cannot quantify the coherence of such pairs, and is thus unable to discern the superconducting state. For that, we examine the superfluid weight, which in one dimension (1D) is equivalent to the Drude weight [33; 34; 35; 36; 37; 38; 39]:
\[\mathcal{D}_{s}=\pi L\frac{\partial^{2}E_{0}(\Phi)}{\partial\Phi^{2}}\bigg{|}_ {\Phi=0}, \tag{6}\]
where \(E_{0}(\Phi)\) is the ground-state energy in the presence of a threaded magnetic flux \(\frac{\hbar c}{e}\Phi\)[40; 41]. Such flux is equivalent to the introduction of twisted boundary conditions [42] via the replacement \(\hat{c}_{j,\sigma}\to e^{\mathrm{i}\phi j}\hat{c}_{j,\sigma}\), where \(\phi=\Phi/L\) is the phase gradient per unit cell. In actual calculations, we use the approximant \(\mathcal{D}_{s}\approx 2\pi L[E_{0}(\delta\Phi)-E_{0}(0)]/(\delta\Phi)^{2}\)[28], choosing \(\delta\Phi=\pi/2\) to minimize the numerical error. Thus the superfluid weight is obtained by
\[\mathcal{D}_{s}\approx\frac{8L}{\pi}\left[E_{0}(\pi/2)-E_{0}(0)\right]. \tag{7}\]
The approximation in Eq. 7, while seemingly crude for such a large \(\delta\Phi\), has been numerically confirmed in the clean case, resulting in an absolute error of the order of \(10^{-3}\) [see Appendix B].
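As a trivial but explicit illustration, the approximant of Eq. (7) reduces to a one-line function once the two ground-state energies (placeholders here for DMRG outputs) are known.

```python
import numpy as np

def superfluid_weight(e0_twisted, e0_zero, L, dphi=np.pi/2):
    """Finite-difference approximant of Eq. (7):
    D_s ~ 2*pi*L*[E0(dPhi) - E0(0)] / dPhi**2; with dPhi = pi/2
    this equals (8L/pi) * [E0(pi/2) - E0(0)]."""
    return 2.0 * np.pi * L * (e0_twisted - e0_zero) / dphi**2
```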
In either 1D or quasi-1D lattices, the quantum phase transition between the Mott insulating phase and the superfluid phase at fixed commensurate lattice filling is known to be of the Berezinskii-Kosterlitz-Thouless (BKT) type [27; 43; 44; 45]. In the case of disordered systems, such scaling form persists for bosonic systems when transitioning from a superfluid to a Bose glass [45; 46]. While no direct evidence corroborates that this should also be valid for fermionic systems, we notice that the attractive Hubbard model displays the formation of increasingly local Cooper pairs when \(|U|\gg 1\), which are mimicked by hardcore bosons in this limit [47]. Moreover, we recall that in the clean case, the results of the superfluid weight at the strong attractive interactions we use, \(U=-8\), steadily approach the ones for the corresponding bosonic model [9], lending further support for the same type of transition similarly occurring here [48]. Thus assuming a BKT scaling form for the superconductor to the disorder-induced insulator transition, the disorder-dependent correlation length scales as [49; 50]
\[\xi=\exp\frac{b_{\pm}}{\sqrt{|W-W_{c}|}}. \tag{8}\]
Figure 2: Pairing binding energy \(E_{b}\) as a function of disorder \(W\) for (a) the Creutz lattice and (b) regular two-leg ladder with different \(L\). Here the disorder is introduced by the random chemical potentials.
Here \(W_{c}\) is the critical disorder in the thermodynamic limit, and \(b_{+}\) (\(b_{-}\)) is a nonuniversal parameter for \(W>W_{c}(W<W_{c})\). For numerical convenience, we make the approximation that \(b_{+}=b_{-}\equiv b\). Then the critical disorder and the parameter \(b\) can be determined by the best data collapse of \(\mathcal{D}_{s}(L,W)\) as a function of \(L/\xi\).
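In code, the rescaling behind the data collapse amounts to evaluating Eq. (8) and plotting \(\mathcal{D}_{s}\) against \(\pm L/\xi\); the sketch below (with assumed array inputs) produces the horizontal axis used in Figs. 3 and 5.

```python
import numpy as np

def bkt_xi(W, Wc, b):
    """Correlation length of Eq. (8), with b_+ = b_- = b."""
    return np.exp(b / np.sqrt(np.abs(np.asarray(W) - Wc)))

def collapse_coordinate(W, L, Wc, b):
    """Scaling variable L/xi, sign-flipped for W < Wc so that both
    branches of the collapse can be shown on a single axis."""
    x = np.asarray(L) / bkt_xi(W, Wc, b)
    return np.where(np.asarray(W) < Wc, -x, x)
```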
We first consider the disorder introduced by random chemical potentials; the corresponding results of the superfluid weight and its scaling are displayed in Fig. 3. The data collapse of the \(\mathcal{D}_{s}(L)\) versus \(L/\xi\) indicates that the critical disorder \(W_{c}=0^{+}\) in the thermodynamic limit for both the Creutz lattice and the regular two-leg ladder, within system sizes amenable to our calculations. In this sense, the lattice geometry and band dispersion do not qualitatively affect the (lack of) robustness of the superconductivity against this SU(2)-symmetric disorder. Moreover, the superconducting state is so fragile that an infinitesimal disorder strength destroys the superconductivity. On the other hand, since random chemical potentials do not necessarily break pairs but rather their phase coherence, the system is an (interacting) Anderson insulator of singlet pairs in the disordered phase.
In contrast to the SU(2)-symmetric random chemical potential, the random magnetic field can break the singlet pairs induced by the local attraction \(U\). In this case, the introduction of random Zeeman fields results in the amplitude of \(E_{b}\) gradually _decreasing_ to zero with growing \(W\), regardless of the lattice geometry used, as shown in Fig. 4. While this result immediately points out the differences arising from the symmetry type of the disorder used on the pair robust
Figure 4: Pairing binding energy \(E_{b}\) as a function of disorder \(W\) for (a) the Creutz lattice and (b) regular two-leg ladder with different \(L\). Here the disorder is introduced by the random Zeeman field.
Figure 5: Data collapse of the superfluid weight \(\mathcal{D}_{s}\) versus \(L/\xi\) [\(-L/\xi\) if \(W<W_{c}\)] for (a) the Creutz lattice and (b) regular two-leg ladder with random Zeeman fields. Other parameters are the same as in Fig. 3.
Figure 3: Data collapse of the superfluid weight \(\mathcal{D}_{s}\) as a function of \(L/\xi\) [\(-L/\xi\) if \(W<W_{c}\)] for (a) the Creutz lattice and (b) regular two-leg ladder with random chemical potentials. Each inset shows \(\mathcal{D}_{s}\) as a function of the disorder \(W\) for different lattice lengths \(L\). The optimal parameters \(b\) and \(W_{c}\) in Eq. 8 are determined by minimizing a cost function of the data collapse [see Appendix C for detailed information]. Numerical results for \(L=12/16/20/24\) are obtained by the average over \(320/256/160/96\) disorder realizations.
ness [see Fig. 2 for a comparison], it does not clarify whether pairs are more resilient in dispersive or dispersionless systems under \(\hat{\mathcal{H}}_{h}\). Owing to the lack of a proper and systematic scaling procedure for \(E_{b}\), extracting the critical disorder that breaks pairs from the binding energy alone is challenging. Consequently, the results in Fig. 4 are not conclusive as to whether the singlet pair is more robust against disorder in the Creutz lattice with flat bands than in a regular lattice with dispersive bands.
To resolve this question, we once again resort to the superfluid weight, which further probes the phase coherence of the formed pairs. The finite-size scaling of the superfluid weight \(\mathcal{D}_{s}\) is reported in Fig. 5. Here, the Creutz lattice and regular two-leg ladder results are qualitatively different under this type of random Zeeman-like disorder. In contrast to the case with random chemical potentials, the superconducting state survives in the Creutz lattice up to disorder strengths of \(W_{c}=4.8\), as shown in Fig. 5(a). However, the superconductivity remains fragile and is destroyed by an infinitesimal disorder in the regular two-leg ladder, as shown in Fig. 5(b). In this sense, the flat dispersion dramatically enhances the robustness of the superfluidity, even in the presence of substantial disorder.
## IV Correlation functions
The scaling of superfluid weight suggests a BKT transition from the superconducting state to the insulating state for both lattice geometries and disorder types. However, the physical characteristics of the disordered phase can be, in principle, different. For example, a large disorder can lead to an Anderson insulator of singlet pairs or even of unpaired fermions. In this section, we calculate the correlation functions to characterize these states further. Specifically, we compute the pairing correlation function,
\[G^{P}_{ij}=\langle\hat{\Delta}^{A\dagger}_{i}\hat{\Delta}^{A}_{j}\rangle, \tag{9}\]
where \(\hat{\Delta}^{A}_{i}=\hat{c}^{A}_{i,\uparrow}\hat{c}^{A}_{i,\downarrow}\) annihilates a local singlet pair on the \(i\)-th unit cell with chain index \(A\), and the single-particle Green's function,
\[G^{\sigma}_{ij}=\langle\hat{c}^{A}_{i,\sigma}\hat{c}^{A\dagger}_{j,\sigma}\rangle. \tag{10}\]
Note here that it is sufficient to compute correlations along one of the chains, since both lattices have mirror symmetry across the rungs. As mentioned before, we use open boundary conditions for calculations of correlation functions to reduce the computational cost. In this case, for a generic two-point correlation function \(X_{ij}\) between sites \(i\) and \(j\), one can extract the averaged correlation decay as a function of distance as [51]
\[X(r)=\frac{1}{\mathcal{N}}\sum_{|i-j|=r}X_{ij}\, \tag{11}\]
where \(\mathcal{N}\) is the total number of pairs \(\{i,j\}\) satisfying \(|i-j|=r\). Based on this, we define the average pairing correlation function [single-particle Green's function] as \(P(r)\) [\(G(r)\)].
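A minimal sketch of the distance average in Eq. (11), assuming the two-point correlator has already been collected into an \(L\times L\) matrix:

```python
import numpy as np

def average_over_distance(X):
    """Eq. (11): average a two-point correlator X[i, j] over all pairs
    with |i - j| = r (appropriate for open boundary conditions).
    Returns an array indexed by r = 1, ..., L-1."""
    L = X.shape[0]
    return np.array([np.mean([X[i, i + r] for i in range(L - r)])
                     for r in range(1, L)])
```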
For the clean case, the system described by the attractive Hubbard model on both lattices features superconductivity, with a power-law decay of pair correlations denoting quasi-long-range order. When the disorder is sufficiently strong, the system is in an Anderson insulating ground state, and the corresponding pairing correlation function decays exponentially. Compared to the Creutz lattice with the flat band, \(P(r)\) decays slightly faster in the regular two-leg ladder [9]. However, the differences in the response of the pairing correlations to disorder between the two lattices are marginal when random chemical potentials are introduced, as shown in Fig. 6. In this case, \(P(r)\) crosses over to an exponential decay as soon as weak disorder appears, for both lattices, in agreement with the scaling of the superfluid weight in Sec. III.
Differences between the results of the two lattices are much more prominent when random Zeeman fields introduce the disorder. As shown in Fig. 7, while \(P(r)\) decays exponentially in the presence of a weak disorder strength in the regular two-leg ladder, the pairing correlation function in the Creutz lattice preserves a power-law form even at very large values \(W\simeq 6\), for the same system size. Notice that although one cannot directly compare the critical disorder from correlation functions at a finite system size with the scaling result of superfluid weights in the thermodynamic limit, these results show qualitatively the same conclusion: the flat-band dispersion dramatically enhances the robustness of the superconductivity against spin-dependent disorder.
Besides the universality class of the disorder-induced SIT in fermion-Hubbard models, there has been another
Figure 6: Pairing correlations functions \(P(r)\) versus distance \(r\) for (a) the Creutz lattice and (b) the regular two-leg ladder. The solid lines give an estimation of a power-law fitting \(\propto r^{\alpha}\) extracted for the clean case that guides the interpretation (note the log-log scale). Here the lattice length \(L=128\) and the disorder is introduced via random chemical potentials \(\hat{\mathcal{H}}_{\mu}\).
long-standing question in this topic. That is, whether the route to insulating behavior proceeds through the direct localization of Cooper pairs, or by a two-step process in which the Cooper pairing is first destroyed and then followed by the standard localization of single electrons [25]. Alternatively, an intermediate (poor) metallic state exists where the disorder destroys the pairing coherence, but localization does not yet occur. We try to answer this question by examining the single-particle Green's function, distinguishing the metallic phase from other phases with gapped single-particle excitations, such as the superconducting state and insulating states.
As previously mentioned, the random chemical potential does not break (local) singlet pairs. Thus the SIT ought to be direct, from the superconducting state to the pair-localized state. In that case, the single-particle Green's function should decay exponentially as the disorder strength \(W\) grows, indicating that the single-particle gap remains open throughout the transition. This picture is confirmed in Fig. 8, which displays the decay profile of \(G(r)\) for both lattices with random chemical potentials. In contrast, the disorder introduced by the random Zeeman fields can destroy singlet pairs. However, such disorder also destroys the coherence of pairs, according to our numerical results for the superfluid weight \(\mathcal{D}_{s}\). In Fig. 9, we find no sign of an intermediate metallic state with an algebraically decaying single-particle Green's function for either lattice, regardless of band dispersion [see Appendix D for an energetic analysis of excitations supporting these results]. Finally, as an addendum, we note that a true uncorrelated Anderson insulator also exhibits gapless single-particle excitations. The fact that we observe gapped single-particle excitations across a wide range of disorder values, even substantially far from the SIT, indicates that correlation effects are still significantly relevant. If there is a transition (possibly a crossover) to such a regime, it occurs at values of \(W\) where the interaction strength \(|U|\) is an irrelevant perturbation.
## V Summary and discussion
We systematically investigate the disorder-induced SIT of the attractive Hubbard model in two lattices, the Creutz lattice with noninteracting flat bands and the regular two-leg ladder with noninteracting dispersive bands. Two disorder types have been considered, random chemical potentials, which do not break local singlet pairs, and random Zeeman fields that do break pairs in general. The finite-size scaling of numerically obtained su
Figure 8: Single-particle Green’s functions \(G(r)\) versus distance \(r\) for (a) the Creutz lattice and (b) the regular two-leg ladder, respectively. Here lattice length \(L=128\) and the disorder is introduced by random chemical potentials \(\hat{\mathcal{H}}_{\mu}\); note the vertical log-scale.
Figure 7: Similar to Fig. 6 but for disorder introduced via random Zeeman fields \(\mathcal{H}_{h}\). The pairing correlation functions \(P(r)\) roughly keep their power-law up decay at \(W\lesssim 6\) for the Creutz lattice (a); the regular ladder (b) shows a much less resilient power-law dependence of \(P(r)\). As before, the lattice length is \(L=128\) and the solid line gives an estimation of a power-law fitting \(\propto r^{\alpha}\) that guides the interpretation.
Figure 9: The same as Fig. 8 but for the case of disorder introduced by random Zeeman fields \(\hat{\mathcal{H}}_{h}\). Note the robust exponential decay for a wide range of disorder amplitudes even if significantly far from the critical value \(W_{c}\) extracted from the scaling of \(\mathcal{D}_{s}\).
perfluid weights suggests a BKT-type phase transition for both lattices and both disorder types. For the disorder introduced by random chemical potentials, an infinitesimal disorder drives the superconducting state into a correlated Anderson insulator of singlet pairs for both lattice geometries. For the disorder introduced by random Zeeman fields, the superconductivity is more robust when the noninteracting lattice has flat bands: it requires a significant disorder strength to break the superconducting state in the Creutz lattice; in contrast, the critical disorder vanishes in the regular two-leg ladder.
The conclusion is that the flat dispersion can enhance the superconducting state's resilience, as confirmed by the pairing correlation function calculations. We also address the long-standing question in disorder-induced SIT of whether this transition is direct or a two-step process by carefully examining the single-particle Green's function. Our results suggest no intermediate metallic state during the SIT process for all parameters considered in this work. Lastly, it is worth noting that the Hamiltonian of a Creutz ladder has already been emulated with ultracold fermionic atoms via optical potentials [52], which makes experimental verification of our protocol possible in future investigations.
An outstanding question concerns the generality of the universality class of disorder-driven SITs in such models. While we find clear indications of a BKT-type phase transition, further supported by results in related bosonic systems [45; 46], this contrasts with SITs in clean systems described by similar attractive Hubbard Hamiltonians [53; 54], which exhibit second-order phase transitions [(\(d+1\))-XY universality class]. Whether this difference carries over to different dimensionalities is a question that warrants future investigation.
## Acknowledgements
C.C. was supported by the National Natural Science Foundation of China (grant nos. 11904145, 12174167, 12247101) and the Fundamental Research Funds for the Central Universities. R.M. thanks George Batrouni and Marcos Rigol for discussions and for collaborations in related contributions. R.M. acknowledges support from NSFC Grants No. U2230402, 12050410263, 12111530010, 11974039, and No. 12222401.
## Appendix A Benchmark of DMRG results
The arguments in this work are mainly based on numerical calculations using DMRG, which is one of the most powerful methods for solving quantum many-body systems, especially in 1D and quasi-1D quantum lattices. However, in the case of periodic boundary conditions, which is precisely the setting for computing the superfluid weights, DMRG incurs much larger truncation errors. In other words, achieving the same precision as calculations with open boundary conditions requires a much more expensive computational effort. Moreover, when strong disorder breaks the lattice homogeneity, the DMRG procedure is likely to be trapped in local minima, even when using an optimized strategy specially designed for disordered lattices [55]. These difficulties, accompanied by the fact that extracting information from disordered systems requires repeating calculations for various disorder samples, restrict our investigations to relatively small system sizes. To be more rigorous, we also perform exact diagonalization (ED) calculations as a benchmark. As shown in Fig. 10, the two methods provide precisely the same results, thereby confirming the reliability of the numerical results presented in this work.
## Appendix B Approximation of \(\mathcal{D}_{s}\)
In the absence of disorder, the ground-state energy \(E_{0}(\Phi)\) is a quadratic function of \(\Phi\) in the range \(\Phi\in[0,\pi/2]\), as shown in Fig. 11(a). Therefore, the superfluid weight \(\mathcal{D}_{s}\) of the form in Eq. 6 can be obtained by the following procedure: first, perform a second-order polynomial fit of several \(E_{0}(\Phi)\) values at different twist angles \(\Phi\), and then compute \(\mathcal{D}_{s}\) from the second-order derivative of the obtained polynomial. However, this procedure is rather time-consuming, especially in the disordered case, which requires many disorder realizations.
In practice, we adopt the approximation \(\mathcal{D}_{s}\approx 2\pi L[E_{0}(\delta\Phi)-E_{0}(0)]/(\delta\Phi)^{2}\)[28], from which one can extract \(D_{s}\) from a single value of \(E_{0}(\Phi)\). We display the absolute error from these two procedures in Fig. 11(b), where the error is overall small (\(\sim 10^{-3}\)) and decreases as the phase twist \(\Phi\) increases to \(\pi/2\). In this work, we choose \(\Phi=\pi/2\) and use the approximation in Eq. 7 to compute the superfluid weight \(\mathcal{D}_{s}\). Note that the above test has been done in the clean case, and the situation can be more complicated in the presence of a finite disorder strength. As long as \(E_{0}(\Phi)\) is monotonic in the range \([0,\pi/2]\), the extracted \(\mathcal{D}_{s}\) still likely constitutes a good
Figure 10: Comparison between ED and DMRG results for (a) Creutz lattice and (b) regular two-leg ladder in the presence of random chemical potentials. Here we use 30 disorder realizations for the benchmark.
approximation. Nevertheless, the results from the approximation appear promising and self-consistent in our investigation.
## Appendix C Cost Function Minimization
The key to obtaining a performant data collapse and scaling of the superfluid weight is to extract the best critical \(W_{c}\) and \(b\) in Eq. (8), which can be determined by minimizing the cost function [50; 56; 49]
\[C_{X}=\frac{\sum_{j}|X_{j+1}-X_{j}|}{\max\{X_{j}\}-\min\{X_{j}\}}-1, \tag{C1}\]
where \(X_{j}\) is the \(j\)-th element of the collection of all \(\mathcal{D}_{s}(L,W)\) values in the parameter space \(\{L,W\}\), ordered according to nondecreasing values of the scaling variable \(\mathrm{sgn}(W-W_{c})L/\xi\). The cost function \(C_{X}\) is close to zero for a perfectly smooth and monotonic collapsed curve. In practice, for each pair of fitted parameter values, one obtains a parameter-dependent cost function \(C_{X}(b,W_{c})\). Repeating this procedure within proper ranges of the two-dimensional parameter space \(\{b,W_{c}\}\), one can extract the minimum of \(C_{X}\) and find the best fit. As shown in Fig. 12, the cost function of the Creutz lattice with random chemical potentials is a unimodal function in \(\{b,W_{c}\}\). Therefore, it is not hard to obtain the unambiguous minimum of \(C_{X}\), and the corresponding data collapse in Fig. 3 in the main text. A similar analysis carries over to the other lattice geometry and disorder type.
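A compact sketch of this minimization (assuming the collapse coordinate of Eq. (8) has been computed as in Sec. III, and with assumed variable names):

```python
import numpy as np

def cost_function(scaling_var, ds_values):
    """Smoothness cost of the collapsed data: order D_s(L, W) by the
    scaling variable and measure the deviation of the ordered sequence
    from a monotonic curve; a perfect collapse gives C_X ~ 0."""
    order = np.argsort(scaling_var)
    x = np.asarray(ds_values)[order]
    return np.sum(np.abs(np.diff(x))) / (x.max() - x.min()) - 1.0

def grid_search(scaling_fn, ds_values, Ws, Ls, bs, wcs):
    """Brute-force scan over (b, Wc); scaling_fn(W, L, Wc, b) returns the
    collapse coordinate, e.g. sign(W - Wc) * L / xi."""
    costs = {(b, wc): cost_function(scaling_fn(Ws, Ls, wc, b), ds_values)
             for b in bs for wc in wcs}
    return min(costs, key=costs.get)
```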
## Appendix D One- and two-particle excitation gaps
On top of the observables discussed in the main text, further characterization of the different phases across the SIT can be made by examining the charge excitation energy [57; 58; 54]. In particular, the \(m\)-particle excitation gap can be defined as [54]
\[\delta_{m}\equiv E_{0}(N+m)+E_{0}(N-m)-2E_{0}(N). \tag{11}\]
Here \(E_{0}(N)\) is the ground-state energy of \(N=N_{\uparrow}+N_{\downarrow}\) particles, as defined in the main text. Our interest in the present work lies in the spin-balanced sector \(N_{\uparrow}=N_{\downarrow}\) for the case of pair excitations. In actual calculations, \((N\pm 1)\) [\((N\pm 2)\)] is explicitly treated as \((N_{\uparrow}\pm 1,N_{\downarrow})\) [\((N_{\uparrow}\pm 1,N_{\downarrow}\pm 1)\)]. The one- and two-particle excitation gaps for a small system with \(L=12\) are displayed in Fig. 13.
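As a small sketch of how these gaps are assembled from sector-resolved ground-state energies, assuming a callable `e0(n_up, n_dn)` that returns DMRG ground-state energies at fixed particle numbers (a hypothetical interface, not the actual code used for this work):

```python
def excitation_gap(e0, n_up, n_dn, m):
    """delta_m = E0(N + m) + E0(N - m) - 2 E0(N) of Eq. (11), with the
    sector convention stated above: a single added or removed particle
    changes the up sector, while a pair changes both sectors by one."""
    if m == 1:
        return e0(n_up + 1, n_dn) + e0(n_up - 1, n_dn) - 2 * e0(n_up, n_dn)
    if m == 2:
        return (e0(n_up + 1, n_dn + 1) + e0(n_up - 1, n_dn - 1)
                - 2 * e0(n_up, n_dn))
    raise ValueError("only m = 1 and m = 2 are used in this work")
```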
Figure 11: (a) The ground-state energy \(E_{0}(\Phi)\) versus the twist angle \(\Phi\) for the Creutz lattice in the clean case. The solid blue line denotes a second-order polynomial fit. (b) The absolute error between the superfluid weight \(\mathcal{D}_{s}\) obtained from Eq. 6 and the approximation \(\mathcal{D}_{s}\approx 2\pi L[E_{0}(\delta\Phi)-E_{0}(0)]/(\delta\Phi)^{2}\) using different \(\delta\Phi\) in Eq. 7. Here results are from DMRG calculations with \(L=32\).

Figure 12: The cost function \(C_{X}\) in the two-dimensional parameter space \(\{b,W_{c}\}\) for the Creutz lattice with random chemical potentials as an example. The red star marks the position (\(W_{c}=0\), \(b=2.3\)) of the minimum of \(C_{X}\).

Figure 13: The \(m\)-particle excitation gaps \(\delta_{m}\) (see text for definition) for (a) [(b)] the Creutz lattice with random chemical potentials [Zeeman fields] and (c) [(d)] the regular two-leg ladder with random chemical potentials [Zeeman fields]. Here results are from DMRG calculations with \(L=12\).

With disorder induced by random chemical potentials [Figs. 13(a) and 13(c)], the one-particle excitation gap \(\delta_{1}\) is finite in the whole range of disorder strengths investigated, irrespective of the lattice geometry (or, equivalently, the band structure). On the other hand, the two-particle excitation gap \(\delta_{2}\) slowly grows with \(W\), denoting the onset of insulating behavior. Remarkably, since \(\delta_{2}<\delta_{1}\), pair excitations are favored within this regime. In the main text, we refer to it as an Anderson insulating phase of singlet pairs; in other contexts, this is also dubbed a Bose insulator [57, 58, 59, 60, 53, 54]. In passing, we note that this analysis also makes clear the absence of an intermediate disorder-induced metallic phase.
This scenario changes in the presence of disorder induced by random Zeeman fields [Figs. 13(b) and 13(d)]. Now, the single-particle (two-particle) excitation gap substantially decreases (slightly increases) as \(W\) grows. While having both quantities finite is a precondition for insulating behavior, at disorder values \(W\gtrsim 10\) the imminent crossing of \(\delta_{2}\) and \(\delta_{1}\) marks the crossover from a Bose to a Fermi insulator [58, 53, 54], where single-particle excitations are favored instead. This change of character of the insulating phase has been seen in other contexts for clean SITs [53, 54]. While finite-size effects likely have a quantitative impact, these results support the main findings in the main text.
|
2308.14471 | Dual $p$-adic Diophantine approximation on manifolds | The Generalised Baker-Schmidt Problem (1970) concerns the Hausdorff measure of the set of $\psi$-approximable points on a nondegenerate manifold. Beresnevich-Dickinson-Velani (in 2006, for the homogeneous setting) and Badziahin-Beresnevich-Velani (in 2013, for the inhomogeneous setting) proved the divergence part of this problem for dual approximation on arbitrary nondegenerate manifolds. The divergence part has also been resolved for the $p$-adic setting by Datta-Ghosh in 2022 for the inhomogeneous setting. The corresponding convergence counterpart represents a challenging open problem. In this paper, we prove the homogeneous $p$-adic convergence result for hypersurfaces of dimension at least three with some mild regularity condition, as well as for some other classes of manifolds satisfying certain conditions. We provide similar, slightly weaker results for the inhomogeneous setting. We do not restrict to monotonic approximation functions. | Mumtaz Hussain, Johannes Schleischitz, Benjamin Ward | 2023-08-28T10:18:28Z | http://arxiv.org/abs/2308.14471v1 | # Dual \(p\)-adic Diophantine Approximation on Manifolds
###### Abstract.
The Generalised Baker-Schmidt Problem (1970) concerns the Hausdorff measure of the set of \(\psi\)-approximable points on a nondegenerate manifold. Beresnevich-Dickinson-Velani (in 2006, for the homogeneous setting) and Badziahin-Beresnevich-Velani (in 2013, for the inhomogeneous setting) proved the divergence part of this problem for dual approximation on arbitrary nondegenerate manifolds. The divergence part has also been resolved for the \(p\)-adic setting by Datta-Ghosh in 2022 for the inhomogeneous setting. The corresponding convergence counterpart represents a challenging open problem. In this paper, we prove the homogeneous \(p\)-adic convergence result for hypersurfaces of dimension at least three with some mild regularity condition, as well as for some other classes of manifolds satisfying certain conditions. We provide similar, slightly weaker results for the inhomogeneous setting. We do not restrict to monotonic approximation functions.
## 1. Dual Diophantine approximation on manifolds
Throughout, let \(n\geq 1\) be a fixed integer and \(\mathbf{q}:=(q_{1},\ldots,q_{n})\in\mathbb{Z}^{n}\). Let \(\Psi:\mathbb{Z}^{n}\to[0,\infty)\) be a _multivariable approximating function_, that is, \(\Psi\) has the property that
\[\Psi(\mathbf{q})\to\ 0\text{ as }\|\mathbf{q}\|:=\max(|q_{1}|,\ldots,|q_{n}|) \to\ \infty.\]
For \(\theta\in\mathbb{R}\), consider the set
\[\mathcal{D}_{n}^{\theta}(\Psi):=\left\{\mathbf{x}\in\mathbb{R}^{n}:\ |\mathbf{q} \cdot\mathbf{x}+p+\theta|<\Psi(\mathbf{q})\text{ for infinitely many }(p,\mathbf{q})\in\mathbb{Z}\times\mathbb{Z}^{n}\ \right\},\]
where \(\mathbf{x}=(x_{1},\ldots,x_{n})\in\mathbb{R}^{n}\) so that
\[\mathbf{q}\cdot\mathbf{x}=q_{1}x_{1}+\cdots+q_{n}x_{n}.\]
A vector \(\mathbf{x}\in\mathbb{R}^{n}\) will be called dually \((\Psi,\theta)\)_-approximable_ if it lies in the set \(\mathcal{D}_{n}^{\theta}(\Psi)\). The set \(\mathcal{D}_{n}^{\theta}(\Psi)\) corresponds to _inhomogeneous_ Diophantine approximation, and when \(\theta=0\) the problem reduces to _homogeneous_ approximation. In this case we write \(\mathcal{D}_{n}^{0}(\Psi):=\mathcal{D}_{n}(\Psi)\). When \(\Psi\) is of the form \(\Psi(\mathbf{q})=\psi(\|\mathbf{q}\|)\) for some norm \(\|\cdot\|\) and \(\psi:[0,\infty)\to[0,\infty)\), we say \(\psi\) is a univariable approximation function.
We are interested in the 'size' of the set \(\mathcal{D}_{n}^{\theta}(\Psi)\) with respect to the \(f\)-dimensional Hausdorff measure \(\mathcal{H}^{f}\) for some dimension function \(f\). By a dimension function \(f\) we mean an increasing continuous function \(f:\mathbb{R}_{+}\to\mathbb{R}_{+}\) with \(f(0)=0\). When \(f(r)=r^{n}\), the \(\mathcal{H}^{f}\)-measure is comparable to \(n\)-dimensional Lebesgue measure.
A classical result due to Dirichlet tells us that \(\mathcal{D}_{n}(\psi_{n})=\mathbb{R}^{n}\) for \(\psi_{n}(r)=r^{-n}\), and an application of the Borel-Cantelli lemma shows that the set of _very well approximable_ points (those \(\mathbf{x}\) for which there exists \(\epsilon>0\) such that \(\mathbf{x}\in\mathcal{D}_{n}(\psi_{n+\epsilon})\)) is a Lebesgue nullset. Furthermore, the generalised set of inhomogeneous very well approximable points is also a nullset.
The following theorem due to Schmidt summarises the Lebesgue measure theory.
**Theorem 1.1** (Schmidt [35]).: _Fix \(\theta\in\mathbb{R}\) and suppose \(n\geq 2\). Let \(\Psi:\mathbb{Z}^{n}\to[0,\infty)\) be a multivariable approximation function. Then_
\[\lambda_{n}\left(\mathcal{D}_{n}^{\theta}(\Psi)\right)=\begin{cases}0&\text{ if }\ \sum\limits_{\mathbf{q}\in\mathbb{Z}^{n}}\Psi(\mathbf{q})<\infty,\\ \text{full}&\text{ if }\ \sum\limits_{\mathbf{q}\in\mathbb{Z}^{n}}\Psi( \mathbf{q})=\infty.\end{cases}\]
Here by "full" we mean the complement is a nullset. Note that for \(n=1\), the monotonicity of \(\Psi\) is required in the above statement, since even in the homogeneous case the result is known to be false due to the work of Duffin and Schaeffer, see for example [18, Theorem 2.8]. It should be remarked this theorem contains the notable Khintchine-Groshev Theorem (\(\theta=0\) and \(\Psi(\mathbf{q})=\psi(\|\mathbf{q}\|)\) monotonic).
Using a 'slicing' technique and the mass transference principle, Beresnevich and Velani extended Schmidt's theorem to the Hausdorff measure statement which reads as follows.
**Theorem 1.2** (Beresnevich and Velani [10]).: _Fix \(\theta\in\mathbb{R}\) and suppose \(n\geq 2\). Let \(\Psi:\mathbb{Z}^{n}\rightarrow[0,\infty)\) be a multivariable approximation function and let \(f\) be a dimension function such that \(r^{-n}f(r)\rightarrow\infty\) as \(r\to 0\). Assume \(r^{-n}f(r)\) is decreasing and \(r^{-n+1}f(r)\) is increasing. Then_
\[\mathcal{H}^{f}\left(\mathcal{D}_{n}^{\theta}(\Psi)\right)=\left\{\begin{array} []{rcl}0&\text{ if }\ \sum\limits_{\mathbf{q}\in\mathbb{Z}^{n}}\|\mathbf{q}\|^{n}\Psi( \mathbf{q})^{1-n}f\left(\frac{\Psi(\mathbf{q})}{\|\mathbf{q}\|}\right)<\infty, \\ \infty&\text{ if }\ \sum\limits_{\mathbf{q}\in\mathbb{Z}^{n}}\|\mathbf{q}\|^{n} \Psi(\mathbf{q})^{1-n}f\left(\frac{\Psi(\mathbf{q})}{\|\mathbf{q}\|}\right)= \infty.\end{array}\right.\]
Again, note that this result contains Jarnik's Theorem (in the dual setting), and one can deduce the dual version of the inhomogeneous Jarnik-Besicovitch Theorem due to Levesley [26].
### Diophantine approximation on manifolds
In 1932, Mahler initiated the study of Diophantine approximation on dependent quantities by conjecturing that the set of very well approximable points on the Veronese curve \(\mathcal{V}_{n}=\{(x,x^{2},\ldots,x^{n}):x\in\mathbb{R}\}\) is a nullset with respect to the induced Lebesgue measure on \(\mathcal{V}_{n}\). That is, Mahler conjectured that
\[\left\{x\in\mathbb{R}:\exists\,\varepsilon>0\text{ such that }\begin{array}{l}|q_{1}x+q_{2}x^{2}+\cdots+q_{n}x^{n}-p|<\left(\max_{1\leq i\leq n}|q_{i}|\right)^{-n-\varepsilon}\\ \text{ for infinitely many }(p,q_{1},\ldots,q_{n})\in\mathbb{Z}^{n+1}\end{array}\right\}\]
is a Lebesgue nullset. Sprindzhuk proved Mahler's conjecture to be true and conjectured that the result remained true for any analytic nondegenerate manifold [37]. It was not until the fundamental work of Kleinbock and Margulis [24] that Sprindzhuk's conjecture was proven true\({}^{1}\). Since this breakthrough paper, a range of results in quick succession have been proven in this area. We provide a brief survey of key results, which can be split into three parts:
Footnote 1: The main results of this paper are exclusively in the dual setting, but it should be noted that the result of Kleinbock and Margulis was proved in the generalised multiplicative setting.
_Extremality_ refers to results associated with Mahler's (and subsequently Sprindzhuk's) conjecture. Specifically the size, in terms of the induced Lebesgue measure, of the set of very well approximable points contained in some manifold is a nullset. A manifold is said to be extremal if it satisfies Sprindzhuk's conjecture.
_Ambient measure_ results refer to analogues of Theorem 1.1 in the setting of Diophantine approximation on dependent quantities, that is the induced Lebesgue measure of \(\mathcal{D}_{n}^{\theta}(\Psi)\cap\mathcal{M}\).
_Hausdorff theory_ results refer to the Hausdorff measure and dimension of \(\mathcal{D}_{n}^{\theta}(\Psi)\cap\mathcal{M}\). In full generality, a complete Hausdorff measure treatment akin to Theorem 1.2 for manifolds \(\mathcal{M}\) represents a deep open problem referred to as the Generalised Baker-Schmidt Problem (GBSP) inspired by the pioneering work of Baker and Schmidt [3].
**Generalised Baker-Schmidt Problem for Hausdorff Measure: dual setting.** Let \(\mathcal{M}\) be a nondegenerate submanifold of \(\mathbb{R}^{n}\) with \(\dim\mathcal{M}=d\) and \(n\geq 2\). Let \(\Psi\) be a multivariable approximating function. Let \(f\) be a dimension function such that \(r^{-d}f(r)\rightarrow\infty\) as \(r\to 0\). Assume that \(r\mapsto r^{-d}f(r)\) is decreasing and \(r\mapsto r^{1-d}f(r)\) is increasing. Prove that
\[\mathcal{H}^{f}(\mathcal{D}_{n}^{\theta}(\Psi)\cap\mathcal{M})=\left\{\begin{array}{l }0\quad\text{if}\quad\sum\limits_{\mathbf{q}\in\mathbb{Z}^{n}\setminus\{\mathbf{0 }\}}\|\mathbf{q}\|^{d}\Psi(\mathbf{q})^{1-d}f\left(\frac{\Psi(\mathbf{q})}{\| \mathbf{q}\|}\right)<\infty,\\ \infty\quad\text{if}\quad\sum\limits_{\mathbf{q}\in\mathbb{Z}^{n}\setminus\{ \mathbf{0}\}}\|\mathbf{q}\|^{d}\Psi(\mathbf{q})^{1-d}f\left(\frac{\Psi( \mathbf{q})}{\|\mathbf{q}\|}\right)=\infty.\end{array}\right.\]
Note the results of extremality and ambient measure would follow from a result of the above form. For brevity, the following results are stated with few details. In some cases, further technical details, especially properties on the approximation function \(\Psi\) and dimension function \(f\), are required for the statement given to be true.
* _Extremality_: Sprindzhuk proved this for the Veronese curve [37], Kleinbock and Margulis proved this for nondegenerate manifolds [24]. In the inhomogeneous setting Badziahin proved the result for nondegenerate planar curves [1], and Beresnevich and Velani extended the work of Kleinbock and Margulis to the inhomogeneous setting for nondegenerate manifolds [11].
* _Ambient measure_: The convergence case was proven for nondegenerate manifolds independently by Bernik, Kleinbock, and Margulis [14] and Beresnevich [4]. The complementary divergence case was proven by Beresnevich, Bernik, Kleinbock and Margulis [6]. The complete inhomogeneous version was proven by Beresnevich, Badziahin and Velani [2].
* _Hausdorff theory_: In terms of the Hausdorff dimension the upper and lower bounds for the Veronese curve were proven by Bernik [12] and Baker & Schmidt [3] respectively. A lower bound for extremal manifolds was proven by Dickinson and Dodson [17] and the complementary upper bound was proven by Beresnevich, Bernik, and Dodson [5]. The Hausdorff measure convergence case was proven by the first-named author for \(\mathcal{V}_{2}\)[20] and was later generalised by Huang [19] to nondegenerate planar curves. Partial results for the convergence case have been proven in higher dimensions. For various classes of curves see [22], for hypersurfaces with non-zero Hessian and dimension \(d\geq 3\) see [21], and for more general manifolds with further restrictions on the curvature, see [23]. Note the latter two papers also provide results in the inhomogeneous setting. The homogeneous divergence case for nondegenerate manifolds was proven completely by Beresnevich, Dickinson, and Velani [8]. The inhomogeneous divergence case was proven in [2].
### \(p\)-adic Diophantine approximation
Much of the setup presented in the real setting can be transferred to \(p\)-adic space. Fix a prime \(p\) and let \(|\cdot|_{p}\) denote the \(p\)-adic norm, \(\mathbb{Q}_{p}\) the \(p\)-adic numbers and \(\mathbb{Z}_{p}\) the ring of \(p\)-adic integers, that is, \(\mathbb{Z}_{p}:=\{x\in\mathbb{Q}_{p}:|x|_{p}\leq 1\}\).
For a multivariable approximation function \(\Psi:\mathbb{Z}^{n+1}\to\mathbb{R}_{+}\) and fixed \(\theta\in\mathbb{Q}_{p}\) define the set of \(p\)-adic dually \((\Psi,\theta)\)-approximable points as
\[D_{n,p}^{\theta}(\Psi):=\left\{\mathbf{x}=(x_{1},\ldots,x_{n})\in\mathbb{Q}_{ p}^{n}:\begin{array}{l}|a_{1}x_{1}+\cdots+a_{n}x_{n}+a_{0}+\theta|_{p}<\Psi( \mathbf{a})\\ \text{for infinitely many }\mathbf{a}=(a_{0},\ldots,a_{n})\in\mathbb{Z}^{n+1} \end{array}\right\}.\]
Similar to the real case, for the homogeneous setting write \(D_{n,p}^{0}(\Psi):=D_{n,p}(\Psi)\), and for \(\Psi\) a univariable approximation function of the form \(\Psi(\mathbf{r})=\psi(\|\mathbf{r}\|)\) write \(D_{n,p}^{\theta}(\Psi):=D_{n,p}^{\theta}(\psi)\).
Note that, unlike the real setting, \(\Psi\) depends on the \(n+1\) integers \((a_{0},\ldots,a_{n})\), including \(a_{0}\). This is because \(\mathbb{Z}\) is dense in \(\mathbb{Z}_{p}\) and so, if \(a_{0}\) is unbounded from above, one could obtain increasingly precise approximations of \(\mathbf{x}\) for \((a_{1},\ldots,a_{n})\) fixed, at least if \(\mathbf{x}\in\mathbb{Z}_{p}^{n}\) and \(\theta\in\mathbb{Z}_{p}\).
The concept of Hausdorff dimension, and more generally of Hausdorff \(f\)-measure for any dimension function \(f:\mathbb{R}_{+}\to\mathbb{R}_{+}\) as in the real case, can be defined similarly over \(\mathbb{Q}_{p}^{n}\) by coverings of balls with respect to the \(p\)-adic metric derived from \(\|\cdot\|_{p}=\max|\cdot|_{p}\). We refer to [33] for properties of the Hausdorff measure and dimension in general metric spaces. Thus it makes sense to ask questions about the 'size' of \(D_{n,p}^{\theta}(\Psi)\) with respect to the \(f\)-dimensional Hausdorff measure \(\mathcal{H}^{f}\) for some dimension function \(f\). When \(f(r)=r^{n}\) this reduces to the size of \(D_{n,p}^{\theta}(\Psi)\) in terms of the \(n\)-dimensional \(p\)-adic Haar measure \(\mu_{p,n}\), up to some fixed constant.
In the homogeneous setting Mahler, see for example [28], proved for \(\Psi(\mathbf{a})=\psi(\|\mathbf{a}\|)=\|\mathbf{a}\|^{-(n+1)}\) that \(D_{n,p}(\Psi)=\mathbb{Q}_{p}^{n}\), providing a \(p\)-adic equivalent of Dirichlet's Theorem. Again, by an application of the Borel-Cantelli lemma we can deduce that the set of \(p\)-adic very well approximable points is a nullset, where the measure here is the \(n\)-dimensional \(p\)-adic Haar measure \(\mu_{p,n}\). Lutz proved the \(p\)-adic analogue of Khintchine's Theorem [27], and the Hausdorff measure analogue can be found in [8, Theorem 16].
### \(p\)-adic Diophantine approximation on manifolds and our main result
For \(p\)-adic approximation on dependent quantities, the following results are known. Again, we keep the statements of the known results brief and refer the reader to the relevant paper for more details. A key additional condition often required in the \(p\)-adic setting is the analyticity of the curve or manifold, see Section 2.1 for reasoning as to why. Here the extremality and ambient measure statements are with respect to \(\mu_{p,n}\) and the induced \(p\)-adic Haar measure on a manifold \(\mathcal{M}\).
* _Extremality_: Alongside the statement in the real setting, Sprindzhuk proved the \(p\)-adic equivalent of Mahler's conjecture [37]. Kleinbock and Tomanov [25] used similar ideas to those in [24] to prove extremality for all \(C^{2}\) nondegenerate manifolds (see [25] for the precise definition of a \(C^{2}\) function in the \(p\)-adic setting). The inhomogeneous theory for the Veronese curve preceded that of Kleinbock and Tomanov and was proven by Bernik, Dickinson, and Yuan [13].
* _Ambient measure_: The complete theory for the Veronese curve was proven by Beresnevich, Bernik and Kovalevskaya [7]. Preceding this, in [9], Beresnevich and Kovalevskaya proved the complete result for \(p\)-adic normal\({}^{2}\) planar curves. In the inhomogeneous setting the convergence case was proven for the Veronese curve by Ustinov [38]. The convergence case for analytic nondegenerate manifolds was proven by Mohammadi and Salehi-Golsefidy [30] with the complementary divergence statement appearing soon after [29]. The inhomogeneous convergence statement was proven in [16].
Footnote 2: see [28] for Mahler's definition of a \(p\)-adic normal function.
* _Hausdorff theory_: In the special case of Veronese curves, the metric theory is rather complete by results of Bernik and Morotskaya [15, 31]. For a general class of manifolds, the divergence statement in both the homogeneous and inhomogeneous setting has recently been proven by Datta and Ghosh [16], see Section 1.4 for their result. In this paper, we contribute to the inhomogeneous convergence case.
For a subset \(Z\subseteq\mathbb{Z}^{n+1}\) define the set
\[D_{n,p}^{\theta}(\Psi,Z):=\left\{\mathbf{x}=(x_{1},\ldots,x_{n})\in\mathbb{Q }_{p}^{n}:\begin{array}{l}|a_{1}x_{1}+\cdots+a_{n}x_{n}+a_{0}+\theta|_{p}< \Psi(\mathbf{a})\\ \text{for infinitely many }(a_{0},a_{1},\ldots,a_{n})\in Z\end{array} \right\}.\]
For \(Z=\mathbb{Z}^{n+1}\backslash\{\mathbf{0}\}\) we write \(D_{n,p}^{\theta}(\Psi,Z)=D_{n,p}^{\theta}(\Psi)\). Let \(\mathcal{M}\subset\mathbb{Q}_{p}^{n}\) be a \(p\)-adic manifold of dimension \(d\) defined by an analytic map \(\mathbf{g}:\mathcal{U}\subset\mathbb{Q}_{p}^{d}\to\mathbb{Q}_{p}^{n-d}\) via the parametrization \((\mathbf{x},\mathbf{g}(\mathbf{x}))\). Here, analytic is defined as follows.
**Definition 1.3**.: A function \(\mathbf{h}:U\subseteq\mathbb{Q}_{p}^{m}\to\mathbb{Q}_{p}^{n}\) for \(U\) open is analytic if every coordinate function can be written as a power series
\[h_{j}(x_{1},\ldots,x_{m})=\sum_{\mathbf{t}}a_{j,\mathbf{t}}x_{1}^{t_{1}}\cdots x _{m}^{t_{m}},\qquad 1\leq j\leq n,\]
converging in some \(p\)-adic ball \(B(\mathbf{y},r),\ r=r(\mathbf{y})>0\) around every \(\mathbf{y}\) contained in \(U\).
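A simple example: since a series in \(\mathbb{Q}_{p}\) converges precisely when its terms tend to zero, the geometric series

\[h(x)=\sum_{t\geq 0}x^{t}=\frac{1}{1-x},\qquad|x|_{p}<1,\]

defines an analytic function on the open unit ball of \(\mathbb{Q}_{p}\): the terms satisfy \(|x^{t}|_{p}=|x|_{p}^{t}\to 0\), and in an ultrametric field this already guarantees convergence.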
Note that any analytic function satisfies \(\mathbf{h}\in C^{\infty}(U)\). Compared to the real setting, where \(\mathbf{g}\in C^{2}\) was sufficient, for technical reasons we require the stronger condition of analyticity in the \(p\)-adic setting.
By the Implicit Function Theorem (the \(p\)-adic version of this following from \(\mathbf{g}\) being analytic and the \(p\)-adic inverse function theorem, see Theorem 2.1) we may write
\[\mathcal{M}=\big{\{}(x_{1},\ldots,x_{d},g_{1}(\mathbf{x}),\ldots,g_{n-d}( \mathbf{x})):\mathbf{x}=(x_{1},\ldots,x_{d})\in\mathcal{U}\subset\mathbb{Q}_{p}^ {d}\big{\}}\]
for analytic functions \(g_{i}:\mathcal{U}\subset\mathbb{Q}_{p}^{d}\to\mathbb{Q}_{p}\), \(1\leq i\leq n-d\). For ease of notation write \(\mathbf{g}=(g_{1},\ldots,g_{n-d})\).
Analogously to [21, 23], up to slightly modified notation, we impose the following conditions:
1. **(Ip)** Let \(f\) be a dimension function satisfying \[f(xy)\ll x^{s}f(y)\text{ for all }y<1<x\tag{1}\] for some \(s<2(d-1)\).
2. **(IIp)** For each \(1\leq i\leq n-d\), let \(g_{i}:\mathcal{U}\to\mathbb{Q}_{p}\) be analytic on some open set \(\mathcal{U}\subset\mathbb{Q}_{p}^{d}\), and suppose that for any rational integer vector \(\mathbf{z}=(z_{1},\ldots,z_{n-d})\in\mathbb{Z}^{n-d}\) with \(\|\mathbf{z}\|_{p}=1\), the \(d\times d\) matrix with \(p\)-adic entries \[M_{\mathbf{z}}(\mathbf{x})=\left(\sum_{k=1}^{n-d}z_{k}\frac{\partial^{2}g_{k}(\mathbf{x})}{\partial x_{i}\partial x_{j}}\right)_{1\leq i,j\leq d}\] has non-zero determinant for all \(\mathbf{x}\in\mathcal{U}\setminus S_{\mathcal{M}}(\mathbf{z})\), where the exceptional set \[S_{\mathcal{M}}(\mathbf{z}):=\{\mathbf{x}\in\mathcal{U}:\text{$M_{\mathbf{z}}(\mathbf{x})$ is singular}\}\] satisfies \(\mathcal{H}^{f}(S_{\mathcal{M}}(\mathbf{z}))=0\).
For an example of a dimension function \(f\) satisfying (Ip) one can consider the function \(f(r)=r^{s}\) for some \(0<s<2(d-1)\). Note that \(s>d\) is not of interest as the entire manifold \(\mathcal{M}\) has \(\mathcal{H}^{s}(\mathcal{M})=0\). Condition (Ip) is always true when \(d\geq 3\) and \(r^{-d}f(r)\) is decreasing (a general assumption in the GBSP statement), see [21, Section 1.2] for hypersurfaces, which can readily be generalised as remarked in [23].
Condition (IIp) has the most natural interpretation when \(\mathcal{M}\) is a hypersurface. Then (IIp) is equivalent to asking for the Hessian of \(\mathbf{g}=g:\mathcal{U}\subset\mathbb{Q}_{p}^{n-1}\to\mathbb{Q}_{p}\), denoted by \(\nabla^{2}g\), to be singular only on a set of \(\mathcal{H}^{f}\) measure zero. That is,
\[\mathcal{H}^{f}\left(\big{\{}\mathbf{x}\in\mathcal{U}:\nabla^{2}g(\mathbf{x} )\text{ is singular}\big{\}}\right)=0\,. \tag{2}\]
Combining with what was noticed above for the (Ip) statement, we have that (Ip) and (IIp) are satisfied for any hypersurface of dimension at least three (thus the ambient space has dimension at least four) satisfying (2). For \(\mathcal{M}\) not a hypersurface, a detailed discussion of condition (IIp), in the real case, is provided in [23]. The \(p\)-adic case is similar. In short, in [23] some more classes of manifolds of codimension exceeding one are provided (the concrete examples found have codimension two or three); on the other hand, this condition induces some rigid restrictions on the dimension pairs \((n,d)\). Since \(\mathbf{g}\) is analytic, presumably condition (IIp) can be simplified. Indeed, then \(\det M=\det M_{\mathbf{z}}(\mathbf{x})\), as a function in \(x_{1},\ldots,x_{d}\), is a \(\mathbb{Q}_{p}\)-valued analytic map for any \(\mathbf{z}\in\mathbb{Z}^{n-d}\) as well. We expect that, just as in the real case (by Lojasiewicz's stratification theorem, see for example [32]), the exceptional set \(S_{\mathcal{M}}(\mathbf{z})\) can have \(p\)-adic dimension at most \(d-1\), unless \(\det M_{\mathbf{z}}\) is identically \(0\). We have not found a proper reference for the \(p\)-adic version. If true, this would imply that in (IIp), if \(f\) satisfies
\[f(r)<r^{d-1+\varepsilon},\qquad\varepsilon>0,\;r\in(0,r_{0}), \tag{3}\]
then we only need to assume that \(\det M_{\mathbf{z}}(\mathbf{x})\) is not the constant \(0\) map for any non-zero rational integer vector \(\mathbf{z}\). Condition (Ip) for some \(s>d-1\) may be sufficient to guarantee (3).
Consider the subsets of \(\mathbb{Z}^{n+1}\) defined by
\[Z(1)=\left\{\mathbf{a}=(a_{0},\ldots,a_{n})\in\mathbb{Z}\times(\mathbb{Z}^{n}\backslash\{\mathbf{0}\}):\gcd(a_{i},a_{j},p)=1,\quad 0\leq i<j\leq n\right\},\]
\[Z(2)=\left\{\mathbf{a}=(a_{0},\ldots,a_{n})\in\mathbb{Z}\times(\mathbb{Z}^{n}\backslash\{\mathbf{0}\}):\gcd(a_{0},\ldots,a_{n},p)=1\right\}.\]
That is, these are the sets where \(p\) divides at most one of the \(a_{i}\), and where \(p\) does not divide all of the \(a_{i}\), respectively. Notice that
\[Z(1)\subseteq Z(2)\subseteq\mathbb{Z}\times(\mathbb{Z}^{n}\backslash\{ \mathbf{0}\})\,.\]
The reason that \(Z(1)\subseteq Z(2)\subseteq\mathbb{Z}\times(\mathbb{Z}^{n}\backslash\{\mathbf{0}\})\) rather than \(Z(1)\subseteq Z(2)\subseteq\mathbb{Z}^{n+1}\backslash\{\mathbf{0}\}\) is to ensure we are not just approximating the inhomogeneity \(\theta\) by elements of \(\mathbb{Z}\), which is not our focus in this paper.
We prove the following.
**Theorem 1.4**.: _Let \(f\) be a dimension function satisfying (Ip) and \(\mathcal{M}=\{(\mathbf{x},\boldsymbol{g}(\mathbf{x})):\mathbf{x}\in\mathbb{Q} _{p}^{d}\}\) be a manifold of dimension \(d\) defined by analytic \(\boldsymbol{g}:\mathcal{U}\subset\mathbb{Q}_{p}^{d}\rightarrow\mathbb{Q}_{p}^{ n-d}\) satisfying (IIp). Let \(\theta\in\mathbb{Z}_{p}\) and \(\Psi:\mathbb{Z}^{n+1}\rightarrow\mathbb{R}_{+}\) be a multivariable approximation function with \(\Psi(\mathbf{a})<\|\mathbf{a}\|^{-1}\). Then_
1. \(\mathcal{H}^{f}(D_{n,p}^{\theta}(\Psi,Z(1))\cap\mathcal{M})=0\) _if_ \[\sum_{\mathbf{a}\in Z(1)}\Psi(\mathbf{a})^{-(d-1)}f(\Psi(\mathbf{a}))<\infty.\]
2. \(\mathcal{H}^{f}(D_{n,p}^{\theta}(\Psi,Z(2))\cap\mathcal{M})=0\) _if_ \[\sum_{\mathbf{a}\in Z(2)}\Psi(\mathbf{a})^{-(d-1)}f(\Psi(\mathbf{a}))<\infty\,.\]
3. \(\mathcal{H}^{f}(D_{n,p}(\Psi)\cap\mathcal{M})=0\) _if_ \[\sum_{\mathbf{a}\in\mathbb{Z}^{n+1}\backslash\{\mathbf{0}\}}\Psi(\mathbf{a})^{-( d-1)}f(\Psi(\mathbf{a}))<\infty\,,\] _and_ \[p\Psi(\mathbf{a})\leq\Psi(p^{-1}\mathbf{a})\qquad\forall\ \mathbf{a}\in p\mathbb{Z}^{n+1}\backslash\{\mathbf{0}\}.\] (4)
The limitation \(\Psi(\mathbf{a})<\|\mathbf{a}\|^{-1}\) is not too restrictive. Indeed, if \(\Psi(\mathbf{a})>\|\mathbf{a}\|^{-1}>\|\mathbf{a}\|^{-n}\) for all sufficiently large \(\|\mathbf{a}\|\), then by Dirichlet's Theorem \(D_{n,p}(\Psi)=\mathbb{Q}_{p}^{n}\). According to (i), restricting the integer vectors to the smallest set \(Z(1)\), we have the perfect convergence claim as predicted by the GBSP. In the classical homogeneous setting, the same is true when summing over \(Z(2)\), by (ii). The additional condition on \(\theta\) in (ii) is an artifact of our method, we do not have a satisfactory explanation for its occurrence. Claim (iii) is for the homogeneous setting only. Note that (4) is clearly satisfied for the standard power functions \(\Psi(\mathbf{a})=\|\mathbf{a}\|^{-\tau},\tau\geq n\), and more generally for any function \(\Psi\) with the property that \(-\log\Psi(\mathbf{a})/\|\mathbf{a}\|\) is non-decreasing.
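For instance, for \(\Psi(\mathbf{a})=\|\mathbf{a}\|^{-\tau}\) with \(\tau\geq n\geq 1\), condition (4) can be checked directly for \(\mathbf{a}\in p\mathbb{Z}^{n+1}\backslash\{\mathbf{0}\}\):

\[\Psi(p^{-1}\mathbf{a})=\left(p^{-1}\|\mathbf{a}\|\right)^{-\tau}=p^{\tau}\|\mathbf{a}\|^{-\tau}=p^{\tau}\,\Psi(\mathbf{a})\geq p\,\Psi(\mathbf{a}).\]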
### Corollaries to Theorem 1.4 combined with results of Datta and Ghosh
The following result is a special case of more general results from [16], for \(s\)-dimensional Hausdorff measure.
**Theorem 1.5** (Datta and Ghosh 2022).: _Suppose that \(\boldsymbol{g}:\mathcal{U}\subset\mathbb{Q}_{p}^{d}\to\mathbb{Q}_{p}^{n}\) satisfies_
1. **(I)** \(\boldsymbol{g}\) _is an analytic map and can be extended to the boundary of_ \(\mathcal{U}\subset\mathbb{Q}_{p}^{d}\)_, an open ball,_
2. **(II)** _Assume that_ \(1,x_{1},\ldots,x_{d},g_{1}(\mathbf{x}),\ldots,g_{n-d}(\mathbf{x})\) _are linearly independent functions over_ \(\mathbb{Q}_{p}\) _on any open subset of_ \(\mathcal{U}\ni\mathbf{x}\)_._
3. **(III)** _Assume that_ \(\|\mathbf{g}(\mathbf{x})\|_{p}\leq 1\) _and_ \(\|\nabla\mathbf{g}(\mathbf{x})\|_{p}\leq 1\) _for any_ \(\mathbf{x}\in\mathcal{U}\)_, and that the second order difference quotient_\({}^{3}\) _is bounded from above by_ \(\frac{1}{2}\) _for any multi-index and any triplets_ \(\mathbf{y}_{1},\mathbf{y}_{2},\mathbf{y}_{3}\in\mathcal{U}\)_._
Footnote 3: See [16, Section 5] or [25] for the definition of second order difference quotients.
_Let_
\[\Phi(\mathbf{a})=\phi(\|\mathbf{a}\|),\quad\forall\mathbf{a}\in\mathbb{Z}^{n+1}\]
_for \(\phi:\mathbb{N}\to\mathbb{R}_{+}\) a positive non-increasing function, and assume that \(s>d-1\). Then_
\[\mathcal{H}^{s}(D_{n,p}(\Phi)\cap\mathcal{M})=\mathcal{H}^{s}(\mathcal{M}) \quad\text{ if }\quad\sum_{\mathbf{a}\in\mathbb{Z}^{n+1}\setminus\{0\}}\Phi( \mathbf{a})^{s+1-d}=\infty.\]
**Remark 1.6**.: Note that in [16] the set
\[W^{\mathbf{g}}_{\Phi,\theta}=\{\mathbf{x}\in\mathcal{U}:(\mathbf{x},\mathbf{g }(\mathbf{x}))\in D^{\theta}_{n,p}(\Phi)\}\]
is considered. Since \(\nabla\mathbf{g}\) is bounded on \(\mathcal{U}\) there exists a bi-Lipschitz map between the two sets \(W^{\mathbf{g}}_{\Phi,\theta}\) and \(D^{\theta}_{n,p}(\Phi)\cap\mathcal{M}\) (or rather their complements) and so full measure results are equivalent. See the start of Section 4 for further details of such equivalence.
**Remark 1.7**.: The divergence case in the inhomogeneous setting is rather general. In particular, they prove the result for some general analytic function \(\mathbf{\Theta}:\mathcal{U}\to\mathbb{Z}_{p}\) satisfying certain conditions, see [16, Condition (I5)] for more details. In Theorem 1.5 and our application below, since we relate to Theorem 1.4 (iii), we consider the homogeneous setting \(\mathbf{\Theta}(\mathbf{x})=0\).
Combining our convergence result with Theorem 1.5, we have the following statement for the homogeneous case and \(s\)-dimensional Hausdorff measure.
**Theorem 1.8**.: _Let \(f(r)=r^{s}\) be a dimension function with_
\[d-1<s<2(d-1).\]
_Let \(\mathbf{g}:\mathcal{U}\subset\mathbb{Q}^{d}_{p}\to\mathbb{Q}^{n}_{p}\) be an analytic map satisfying (IIp), (I), (II), and (III). Let_
\[\Psi(\mathbf{a})=\frac{\psi(\|\mathbf{a}\|)}{\|\mathbf{a}\|} \tag{5}\]
_for a monotonic decreasing function \(\psi:\mathbb{N}\to\mathbb{R}_{+}\) tending to zero. Then_
\[\mathcal{H}^{s}(D_{n,p}(\Psi)\cap\mathcal{M})=\begin{cases}0&\text{ if }\quad\sum\limits_{\mathbf{a}\in\mathbb{Z}^{n+1}\setminus\{\mathbf{0}\}}\Psi(\mathbf{a})^{s+1-d}<\infty,\\ \mathcal{H}^{s}(\mathcal{M})&\text{ if }\quad\sum\limits_{\mathbf{a}\in\mathbb{Z}^{n+1}\setminus\{\mathbf{0}\}}\Psi(\mathbf{a})^{s+1-d}=\infty.\end{cases}\]
Proof.: The lower bound on \(s\) is due to Theorem 1.5, and the upper bound is due to (iii) in Theorem 1.4. The conditions on \(\mathbf{g}\) are a combination of requirements from both theorems. We identify \(\phi(\|\mathbf{a}\|)=\psi(\|\mathbf{a}\|)/\|\mathbf{a}\|\) so that \(\Phi(\mathbf{a})=\Psi(\mathbf{a})\), noting that \(\phi\) is clearly decreasing as well. Thus on the one hand we may apply Theorem 1.5, on the other hand, we may deduce that
\[\Psi(\mathbf{a})=\frac{\psi(\|\mathbf{a}\|)}{\|\mathbf{a}\|}=\frac{\psi(\| \mathbf{a}\|)}{p\|p^{-1}\mathbf{a}\|}\leq\frac{\psi(\|p^{-1}\mathbf{a}\|)}{p \|p^{-1}\mathbf{a}\|}=p^{-1}\Psi(p^{-1}\mathbf{a}),\]
and so Theorem 1.4 (iii) is applicable.
Since \(\mathcal{H}^{s}(\mathcal{M})=0\) for \(s>d\), if \(d\geq 2\) the theorem covers the whole interesting range \((d-1,d]\) for \(s\), apart from the endpoint \(s=d=2\) when \(d=2\). Since (3) is satisfied for \(f\) in Theorem 1.8, we expect that (IIp) can be relaxed to impose only that \(\det M(\mathbf{x})=\det M_{\mathbf{z}}(\mathbf{x})\) is not identically \(0\), for any choice of \(\mathbf{z}\).
**Remark 1.9**.: By the above remarks on (Ip), (IIp) in Section 1.3, the above result gives us a zero-full dichotomy for homogeneous dual approximation on sufficiently curved hypersurfaces of \(n\)-dimensional \(p\)-adic space with \(n\geq 3\). Note that individually each result (the convergence and divergence case) extends beyond the above theorem. For example, [16, Theorem 1.1] allows for the inhomogeneous setting, but does not allow for multivariable approximation. Additionally, we have restricted the range of admissible approximation functions to be those of the form (5). Note that such a form of approximation function is permissible in applying Theorems 1.4-1.5.
Denoting by \(\dim_{\mathcal{H}}\) the Hausdorff dimension, we deduce the following immediate corollary.
**Corollary 1.10**.: _Let **g** be as in Theorem 1.8 and \(d\geq 2\). Suppose that_
\[\Psi(\mathbf{a})=\|\mathbf{a}\|^{-1-\tau}\]
_for some \(\tau>n\). Then_
\[\dim_{\mathcal{H}}(D_{n,p}(\Psi)\cap\mathcal{M})=d-1+\frac{n+1}{1+\tau}.\]
Proof.: We first check that all the requirements of Theorem 1.8 are satisfied. Note that \(s=d-1+\frac{n+1}{\tau+1}<2(d-1)\) whenever \(\tau>\frac{n+1}{d-1}-1\), and that \(n\geq\frac{n+1}{d-1}-1\) for all \(d\geq 2\), hence our assumption \(\tau>n\) implies \(s<2(d-1)\). The remaining conditions are obvious. So it remains to find when the critical sum converges/diverges. Note that
\[\sum_{\mathbf{a}\in\mathbb{Z}^{n+1}}\|\mathbf{a}\|^{-(1+\tau)(s+1-d)}=\sum_{r= 1}^{\infty}\sum_{\mathbf{a}\in\mathbb{Z}^{n+1}:\|\mathbf{a}\|=r}r^{-(1+\tau)( s+1-d)}\asymp\sum_{r=1}^{\infty}r^{n-(1+\tau)(s+1-d)}\]
and the summation on the right hand side converges for \(s>d-1+\frac{n+1}{1+\tau}\) and diverges when \(s\leq d-1+\frac{n+1}{1+\tau}\).
**Remark 1.11**.: Note trivially for \(\tau\leq n\) we have that \(D_{n,p}(\Psi)=\mathbb{Q}_{p}^{n}\) by the \(p\)-adic version of Dirichlet's Theorem, and so \(\dim_{\mathcal{H}}(D_{n,p}(\Psi)\cap\mathcal{M})=\dim_{\mathcal{H}}\mathcal{ M}=d\).
**Remark 1.12**.: Note that in contrast to [21] where \(d\geq 3\) is required, we get a claim for \(d=2\) here. The reason is that we assume strict inequality \(\tau>n\), so that the parameter range for \(s\) satisfying (Ip) can be improved. Any such improvement leads to the implementation of the case \(d=2\).
**Acknowledgments:** The research of Mumtaz Hussain and Ben Ward is supported by the Australian Research Council discovery project 200100994. Part of this work was done when Johannes visited La Trobe University; we thank the Sydney Mathematics Research Institute and La Trobe University for the financial support.
## 2. Preliminaries and the main lemma
### Preliminaries
We first note that for analytic self-mappings (that is, \(m=n\)) as in Definition 1.3, the inverse function theorem holds [36, Section 9, p. 113]. The claim is formulated more generally for any ultrametric space. We notice that, as explained in [36], the notion of analyticity implies infinite differentiability in a strong sense. Hereby we call a function \(\phi:U\subseteq\mathbb{Q}_{p}^{m}\to\mathbb{Q}_{p}^{n}\) strongly differentiable at \(\mathbf{a}\) if there is a linear function \(L:\mathbb{Q}_{p}^{m}\to\mathbb{Q}_{p}^{n}\) such that
\[\lim_{|\mathbf{h}|_{p}\to 0,\mathbf{h}\neq 0}\frac{\|\phi(\mathbf{a}+\mathbf{h} )-\phi(\mathbf{a})-L\mathbf{h}\|_{p}}{\|\mathbf{h}\|_{p}}=0. \tag{6}\]
In the case \(m=n=1\), this is equivalent to the existence of the limit
\[\lim_{(x,y)\to(a,a),x\neq y}\frac{|\phi(x)-\phi(y)|_{p}}{|x-y|_{p}}.\]
Such a derivative is uniquely determined (if it exists) and we write \(L=D\phi(\mathbf{a})\). Notice that in the \(p\)-adic setting this is indeed stronger than the typical notion of differentiability, where \(L\mathbf{a}\) is involved instead of \(L\mathbf{h}\) in the numerator of (6), and one has to be careful about transferring real/complex analysis claims to the \(p\)-adic world. See [34, Example 26.6] for a famous example of a function \(f:\mathbb{Z}_{p}\to\mathbb{Q}_{p}\) with "ordinary" derivative \(Df\equiv 1\) which is not injective in any neighborhood of \(0\). However, with the above notion of strong differentiability most common real analysis facts can be preserved. If we assume our function to be analytic, then the inverse function theorem extends to the \(p\)-adic setting, as claimed in Serre's book [36].
**Theorem 2.1** (\(p\)-adic Inverse Function Theorem).: _Assume the function \(\phi:U\subseteq\mathbb{Q}_{p}^{n}\to\mathbb{Q}_{p}^{n}\), for \(\mathbf{0}\in U\) open, is analytic. If \(D\phi(\mathbf{0})\) induces a linear isomorphism, then \(\phi\) is a local isomorphism._
We remark that when \(n=1\), in fact, the function \(\phi/\phi^{\prime}(0)\) is a local isometry, i.e. \(|\phi(x)-\phi(y)|_{p}/|x-y|_{p}=|D\phi(a)|_{p}\) is constant for \(x,y\), \(x\neq y\), close enough to \(a\), see [34, Proposition 27.3]. We further point out that Theorem 2.1 is the only reason why we require our parametrising function \(\mathbf{g}\) to be analytic; for all other arguments \(C^{2}\) would suffice.
From the inverse function theorem it can be shown that some mean value estimate similar to the real case holds.
**Theorem 2.2** (\(p\)-adic Mean Value Theorem).: _For \(\phi\) as in Theorem 2.1 and small enough \(r\) we have_
\[\phi(B({\bf x},r))\subseteq B(\phi({\bf x}),\|D\phi({\bf x})\|_{p}\cdot r).\]
In other words a ball of radius \(r\) (with respect to \(\|.\|_{p}\)-norm) around \({\bf x}\in\mathbb{Q}_{p}^{n}\) is mapped into a ball of radius \(\ll_{\bf x}r\). In fact, the mean value estimate only requires strong differentiability.
### The main lemma
Let \(\mathcal{U}\subset\mathbb{Q}_{p}^{d}\) be a connected bounded open set and
\[\Lambda_{\mathcal{M}}:=\bigcup_{\begin{subarray}{c}\mathbf{z}\in\mathbb{Z}^{n-d}\\ \|\mathbf{z}\|_{p}=1\end{subarray}}S_{\mathcal{M}}(\mathbf{z})=\bigcup_{\begin{subarray}{c}\mathbf{z}\in\mathbb{Z}^{n-d}\\ \|\mathbf{z}\|_{p}=1\end{subarray}}\left\{\mathbf{x}\in\mathcal{U}:M_{\mathbf{z}}(\mathbf{x})=\left(\sum_{k=1}^{n-d}z_{k}\frac{\partial^{2}g_{k}(\mathbf{x})}{\partial x_{i}\partial x_{j}}\right)_{1\leq i,j\leq d}\text{ is singular}\right\}.\]
Observe that for each individual vector \(\mathbf{z}\in\mathbb{Z}^{n-d}\backslash\{\mathbf{0}\}\) of \(p\)-norm 1, the above set \(S_{\mathcal{M}}(\mathbf{z})\) has \(\mathcal{H}^{f}\)-measure zero by condition (IIp), and so by the countable subadditivity of the \(\mathcal{H}^{f}\) measure we have that
\[\mathcal{H}^{f}(\Lambda_{\mathcal{M}})=0.\]
Thus we may restrict to fixed \(\mathbf{z}\) and just write \(S_{\mathcal{M}}=S_{\mathcal{M}}(\mathbf{z})\). Observe that \(\mathcal{U}\backslash S_{\mathcal{M}}\) can be covered by countably many small open balls \(B_{i}\) such that \(|\det M(\mathbf{x})|_{p}\) is bounded on \(B_{i}\) and \(|\det M(\mathbf{x})|_{p}>\varepsilon_{i}\) for some \(\varepsilon_{i}>0\) and all \(\mathbf{x}\in B_{i}\). If we can show that
\[\mathcal{H}^{f}\left(D_{n,p}^{\theta}(\Psi)\cap\left\{({\bf x},{\bf g}({\bf x })):{\bf x}\in B_{i}\right\}\right)=0\]
for all \(B_{i}\), then by the subadditivity of \(\mathcal{H}^{f}\) we would have that
\[\mathcal{H}^{f}\left(D_{n,p}^{\theta}(\Psi)\cap\mathcal{M}\right)=0.\]
Hence without loss of generality let us assume that \(\mathcal{U}\) is an open ball on which the \(d\times d\) matrices \(M(\mathbf{x})\) have determinant bounded away from \(0\). Furthermore suppose that \(\mathcal{U}\subset\mathbb{Z}_{p}^{d}\) and \(\mathbf{0}\in\mathcal{U}\).
For any \({\bf a}=(a_{0},a_{1},\ldots,a_{n})\in\mathbb{Z}^{n+1}\) consider the set
\[S(\mathbf{a}):=\left\{\mathbf{x}\in\mathcal{U}:|a_{1}x_{1}+\cdots+a_{d}x_{d}+a_{d+1}g_{1}(\mathbf{x})+\cdots+a_{n}g_{n-d}(\mathbf{x})+a_{0}+\theta|_{p}<\Psi(\mathbf{a})\right\}.\]
We prove the following key lemma.
**Lemma 2.3**.: _For any \(\mathbf{a}\in\mathbb{Z}\times(\mathbb{Z}^{n}\backslash\{\mathbf{0}\})\), \(\boldsymbol{g}:\mathcal{U}\to\mathbb{Q}_{p}^{n}\) satisfying (IIp), and dimension function \(f\) satisfying (Ip) for some \(s<2(d-1)\), we have that_
\[\mathcal{H}_{\infty}^{f}(S(\mathbf{a}))\ll\|(a_{1},\ldots,a_{n})\|_{p}^{(d-1)- s}\Psi(\mathbf{a})^{-(d-1)}f\left(\Psi(\mathbf{a})\right),\]
_with the implied constant independent of \(\mathbf{a}\)._
## 3. Proof of Lemma 2.3
We may restrict to a compact subset \(K\) of \(\mathcal{U}\). For \(\mathbf{a}\in\mathbb{Z}^{n+1}\) write
\[\mathbf{a}=(a_{0},\mathbf{a}_{1},\mathbf{a}_{2})=(a_{0},a_{1},\ldots,a_{d},a_ {d+1}\ldots,a_{n})\in\mathbb{Z}\times\mathbb{Z}^{d}\times\mathbb{Z}^{n-d},\]
and let
\[\widetilde{a_{2}}=\begin{cases}1&\text{if }\mathbf{a}_{2}=\mathbf{0},\\ \|\mathbf{a}_{2}\|_{p}&\text{otherwise}.\end{cases}\]
Define \(h_{\mathbf{a}}:\mathbb{Q}_{p}^{d}\to\mathbb{Q}_{p}\) by
\[h_{\mathbf{a}}(\mathbf{x})=\widetilde{a_{2}}\mathbf{a}_{1}\cdot\mathbf{x}+ \widetilde{a_{2}}\mathbf{a}_{2}\cdot\mathbf{g}(\mathbf{x})+\widetilde{a_{2}}( a_{0}+\theta)\,,\]
where
\[\widetilde{a_{2}}\mathbf{a}_{1}\cdot\mathbf{x}=\widetilde{a_{2}}a_{1}x_{1}+ \cdots+\widetilde{a_{2}}a_{d}x_{d},\]
and similarly for \(\widetilde{a_{2}}\mathbf{a}_{2}\cdot\mathbf{g}(\mathbf{x})\). We may write
\[S(\mathbf{a})=\left\{\mathbf{x}\in K:|h_{\mathbf{a}}(\mathbf{x})|_{p}< \widetilde{a_{2}}^{-1}\Psi(\mathbf{a})\right\}\,.\]
Note that
\[\nabla h_{\mathbf{a}}(\mathbf{x}) =\left(\frac{\partial h_{\mathbf{a}}(\mathbf{x})}{\partial x_{1}},\ldots,\frac{\partial h_{\mathbf{a}}(\mathbf{x})}{\partial x_{d}}\right),\] \[=\left(\widetilde{a_{2}}a_{1}+\sum_{k=1}^{n-d}\widetilde{a_{2}}a_ {d+k}\frac{\partial g_{k}(\mathbf{x})}{\partial x_{1}},\,\ldots\,,\, \widetilde{a_{2}}a_{d}+\sum_{k=1}^{n-d}\widetilde{a_{2}}a_{d+k}\frac{\partial g _{k}(\mathbf{x})}{\partial x_{d}}\right),\] \[=\widetilde{a_{2}}\mathbf{a}_{1}+\widetilde{a_{2}}\mathbf{a}_{2} \cdot\left(\nabla\mathbf{g}(\mathbf{x})\right),\]
for
\[\nabla\mathbf{g}(\mathbf{x})=\left(\frac{\partial\mathbf{g}(\mathbf{x})}{ \partial x_{1}},\ldots,\frac{\partial\mathbf{g}(\mathbf{x})}{\partial x_{d}} \right)\quad\text{ with }\quad\frac{\partial\mathbf{g}(\mathbf{x})}{\partial x_{i}}= \left(\frac{\partial g_{1}(\mathbf{x})}{\partial x_{i}},\ldots,\frac{\partial g _{n-d}(\mathbf{x})}{\partial x_{i}}\right),\]
a matrix with \(n-d\) rows and \(d\) columns and
\[\nabla^{2}h_{\mathbf{a}}(\mathbf{x})=\left(\sum_{k=1}^{n-d}\widetilde{a_{2}}a _{d+k}\frac{\partial^{2}g_{k}(\mathbf{x})}{\partial x_{i}\partial x_{j}} \right)_{1\leq i,j\leq d}.\]
Note that if \(\mathbf{a}_{2}\neq\mathbf{0}\) then \(\left\|\widetilde{a_{2}}\mathbf{a}_{2}\right\|_{p}=1\) by our choice of \(\widetilde{a_{2}}\), and so, identifying \(z_{k}=\widetilde{a_{2}}a_{d+k}\) for \(k=1,2,\ldots,n-d\), by our assumption (IIp) \(\nabla^{2}h_{\mathbf{a}}(\mathbf{x})\) has a non-zero determinant outside a set of \(\mathcal{H}^{f}\)-measure \(0\). We may, for simplicity, assume that the exceptional set is empty, \(K\cap\Lambda_{\mathcal{M}}=\varnothing\). We next bound \(\nabla^{2}h_{\mathbf{a}}\) uniformly from above.
**Lemma 3.1**.: _For \(\mathbf{a}\in\mathbb{Z}^{n+1}\) and \(h_{\mathbf{a}}\) defined as above_
* _If_ \(\mathbf{a}_{2}=\mathbf{0}\) _then_ \[\|\nabla h_{\mathbf{a}}(\mathbf{x})\|_{p}\asymp\|\mathbf{a}_{1}\|_{p}\quad \forall\,\mathbf{x}\in K.\]
* _If_ \(\mathbf{a}_{2}\neq\mathbf{0}\) _and_ \(\|\mathbf{a}_{1}\|_{p}>\sup_{\mathbf{w}\in K}\|\mathbf{a}_{2}\cdot\nabla \boldsymbol{g}(\mathbf{w})\|_{p}\) _then_ \[\|\nabla h_{\mathbf{a}}(\mathbf{x})\|_{p}\asymp\|\mathbf{a}_{2}\|_{p}^{-1}\| \mathbf{a}_{1}\|_{p}\quad\forall\,\mathbf{x}\in K.\]
* _If there exists_ \(\mathbf{v}\in K\) _with_ \(\nabla h_{\mathbf{a}}(\mathbf{v})=\mathbf{0}\) _then_ \[\|\nabla h_{\mathbf{a}}(\mathbf{x})\|_{p}\asymp\|\mathbf{x}-\mathbf{v}\|_{p} \quad\forall\,\mathbf{x}\in K.\]
* _If no such_ \(\mathbf{v}\in K\) _exists, and_ \(\|\mathbf{a}_{1}\|_{p}\leq\sup_{\mathbf{w}\in K}\|\mathbf{a}_{2}\cdot\nabla\mathbf{g}(\mathbf{w})\|_{p}\) _then_ \[0<\|\nabla h_{\mathbf{a}}(\mathbf{x})\|_{p}\leq\sup_{\mathbf{w}\in K}\|\nabla\mathbf{g}(\mathbf{w})\|_{p}\quad\implies\quad\|\nabla h_{\mathbf{a}}(\mathbf{x})\|_{p}\asymp 1\quad\forall\,\mathbf{x}\in K.\]
_In all of the above cases \(\|\nabla^{2}h_{\bf a}({\bf x})\|_{p}\ll 1\) for all \({\bf x}\in K\)._
_Proof of Lemma 3.1:_ Note that if \({\bf a}_{2}={\bf 0}\) immediately we have that
\[\|\nabla h_{\bf a}({\bf x})\|_{p}=\|{\bf a}_{1}\|_{p}\]
for all \({\bf x}\in K\), giving (a).
If \(\|\widetilde{a}_{2}{\bf a}_{1}\|_{p}>\sup_{{\bf w}\in K}\|\widetilde{a}_{2}{ \bf a}_{2}\cdot\nabla{\bf g}({\bf w})\|_{p}\), which follows by the assumption
\[\|{\bf a}_{1}\|_{p}>\sup_{{\bf w}\in K}\|{\bf a}_{2}\cdot\nabla{\bf g}({\bf w} )\|_{p}\,,\]
then by the strong triangle inequality
\[\|\nabla h_{\bf a}({\bf x})\|_{p}=\max\left\{\|\widetilde{a}_{2}{\bf a}_{1}\| _{p},\|\widetilde{a}_{2}{\bf a}_{2}\cdot\nabla{\bf g}({\bf x})\|_{p}\right\}= \|\widetilde{a}_{2}{\bf a}_{1}\|_{p}.\]
Henceforth assume \(\|\widetilde{a}_{2}{\bf a}_{1}\|_{p}\leq\sup_{{\bf w}\in K}\|\widetilde{a}_{2 }{\bf a}_{2}\cdot\nabla{\bf g}({\bf w})\|_{p}\).
Since \(\mathbf{g}\) is analytic, each \(g_{i}\) for \(1\leq i\leq n-d\) is analytic, hence each partial derivative \(\frac{\partial g_{i}}{\partial x_{j}}\) is analytic, and so the function \(\widetilde{a}_{2}\mathbf{a}_{2}\cdot\nabla\mathbf{g}:K\subset\mathbb{Q}_{p}^{d}\to\mathbb{Q}_{p}^{d}\) is analytic. Furthermore, \(\widetilde{a}_{2}\mathbf{a}_{2}\cdot\nabla\mathbf{g}\) is strongly differentiable with derivative given by the linear map \(\nabla^{2}h_{\mathbf{a}}(\mathbf{x})\), which has non-zero determinant (thus is invertible) on some small ball \(B_{\mathbf{x}}\subset K\). Hence \(\nabla^{2}h_{\mathbf{a}}(\mathbf{x})\) is a linear isomorphism and so \(\widetilde{a}_{2}\mathbf{a}_{2}\cdot\nabla\mathbf{g}\) is a local isomorphism by Theorem 2.1. Hence by Theorem 2.2, for any \(\mathbf{y}\in B_{\mathbf{x}}\),
\[\|\nabla h_{\bf a}({\bf x})-\nabla h_{\bf a}({\bf y})\|_{p}=\|\widetilde{a}_{ 2}{\bf a}_{2}\cdot\nabla g({\bf x})-\widetilde{a}_{2}{\bf a}_{2}\cdot\nabla g ({\bf y})\|_{p}\asymp\|{\bf x}-{\bf y}\|_{p}. \tag{7}\]
The above implied constant depends on \(\widetilde{a}_{2}{\bf a}_{2}\). However, \(\widetilde{a}_{2}{\bf a}_{2}\in\{{\bf z}\in\mathbb{Z}^{n-d}:\|{\bf z}\|_{p}=1\}\) is a compact set. Indeed, we have
\[\|\widetilde{a}_{2}{\bf a}_{2}\|_{p}=|\widetilde{a}_{2}|_{p}\cdot\|{\bf a}_{2 }\|_{p}=|\;\|{\bf a}_{2}\|_{p}\;|_{p}\cdot\|{\bf a}_{2}\|_{p}=\|{\bf a}_{2}\|_ {p}^{-1}\cdot\|{\bf a}_{2}\|_{p}=1, \tag{8}\]
where we employed the identity \(|\;|y|_{p}\;|_{p}=|y|_{p}^{-1}\), valid for any non-zero \(y\in\mathbb{Q}_{p}\): if \(|y|_{p}=p^{-v}\) then \(|\,|y|_{p}\,|_{p}=|p^{-v}|_{p}=p^{v}=|y|_{p}^{-1}\). So uniform bounds exist. This settles (b).
Assume as in (c) there exists some \(\mathbf{v}\in K\) such that \(\nabla h_{\mathbf{a}}(\mathbf{v})=0\) (or \(\|\nabla h_{\mathbf{a}}(\mathbf{v})\|_{p}<\|\nabla h_{\mathbf{a}}(\mathbf{x})\|_{p}\) for all \(\mathbf{x}\in K\backslash\{\mathbf{v}\}\))\({}^{4}\). Then we can apply (7), identifying \(\mathbf{v}=\mathbf{y}\), to get in a neighborhood \(B_{\mathbf{v}}\) of \(\mathbf{v}\) that
Footnote 4: Observe that this case is not possible. To see this note that \(\nabla h_{\bf a}\) is continuous, and so if \(\nabla h_{\bf a}({\bf v})\neq 0\) there exists a neighbourhood \(V\) of \({\bf v}\) such that \(\|\nabla h_{\bf a}({\bf v})\|_{p}=\|\nabla h_{\bf a}({\bf x})\|_{p}\) for all \({\bf x}\in V\). See [25, Section 3.2]
\[\|\nabla h_{\bf a}({\bf x})\|_{p}=\|\nabla h_{\bf a}({\bf x})-\nabla h_{\bf a} ({\bf v})\|_{p}\asymp\|{\bf x}-{\bf v}\|_{p},\qquad{\bf x}\in B_{\bf v}.\]
So the claim holds within \(B_{\bf v}\). In the complement of any such \(B_{\bf v}\) within \(K\) (in fact we may assume \({\bf v}\) is unique in \(K\)), we can consider ourselves to be in case (d). So suppose as in (d) we have \(\nabla h_{\bf a}({\bf x})\neq{\bf 0}\) for all \({\bf x}\in K\), and so \(\|\nabla h_{\bf a}({\bf x})\|_{p}\gg 1\) uniformly by compactness of \(K\) and the continuity of \(\nabla h_{\bf a}\). By the strong triangle inequality, we have that
\[\|\nabla h_{\bf a}({\bf x})\|_{p}\leq\max\left\{\|\widetilde{a}_{2}{\bf a}_{1}\| _{p},\|\widetilde{a}_{2}{\bf a}_{2}\cdot\nabla{\bf g}({\bf x})\|_{p}\right\}. \tag{9}\]
By our assumption
\[0<\|\widetilde{a}_{2}{\bf a}_{1}\|_{p}\leq\sup_{{\bf x}\in K}\|\widetilde{a}_{ 2}{\bf a}_{2}\cdot\nabla{\bf g}({\bf x})\|_{p}\leq\sup_{{\bf x}\in K}\max_{1 \leq i\leq d}\left\|\frac{\partial{\bf g}({\bf x})}{\partial x_{i}}\right\|_{p}\ll 1\]
and so by (9) we have that
\[\|\nabla h_{\bf a}({\bf x})\|_{p}\ll 1.\]
Combined with \(\|\nabla h_{\bf a}({\bf x})\|_{p}\gg 1\) we have
\[\|\nabla h_{\bf a}({\bf x})\|_{p}\asymp 1.\]
The final claim follows from the compactness of \(K\), the continuity of \(\nabla^{2}\mathbf{g}\), and the bound \(\|\widetilde{a}_{2}\mathbf{a}_{2}\|_{p}\leq 1\).
We prove the following key lemma which is a \(p\)-adic analogue of [21, Lemma 2.4].
**Lemma 3.2**.: _Let \(\phi:U\subset\mathbb{Q}_{p}^{d}\to\mathbb{Q}_{p}\) be an analytic function, and fix \(\alpha>0\), \(\delta>0\), and \(\mathbf{x}\in U\) such that \(B_{d}(\mathbf{x},\alpha)\subset U\). There exists a constant \(C>0\) depending only on \(d\) such that if_
\[\|\nabla\phi(\mathbf{x})\|_{p}\geq C\alpha\sup_{\mathbf{w}\in U}\|\nabla^{2} \phi(\mathbf{w})\|_{p},\]
_then_
\[S(\phi,\delta)=\{\mathbf{y}\in B_{d}(\mathbf{x},\alpha):|\phi(\mathbf{y})|_{p }<\|\nabla\phi(\mathbf{x})\|_{p}\delta\}\]
_can be covered by \(\asymp(\alpha/\delta)^{d-1}\) balls in \(\mathbb{Q}_{p}^{d}\) of radius \(\delta\)._
Our proof will slightly deviate from the real case in [21], but we employ the same idea.
**Proposition 3.3**.: _Let \(d\in\mathbb{N}\). The invertible matrices form an open subset of \(\mathbb{Q}_{p}^{d\times d}\)._
Proof.: Let \(A\) be an invertible matrix. Expand the determinant of \(A_{\epsilon}=A+\epsilon Y\) for any fixed matrix \(Y\in\mathbb{Q}_{p}^{d\times d}\) with \(\det Y=1\), and \(\epsilon\in\mathbb{Z}_{p}\). This gives
\[\det A_{\epsilon}=\det A+\epsilon z_{1}+\epsilon^{2}z_{2}+\cdots+\epsilon^{d} z_{d}\]
for some fixed \(z_{i}\in\mathbb{Q}_{p}\) built from the entries of \(A\) and \(Y\). Since \(\det A\neq 0\), if we let \(|\epsilon|_{p}<1\) be small enough, then
\[|\det A_{\epsilon}|_{p}\geq|\det A|_{p}-|\epsilon|_{p}\max_{i}|z_{i}|_{p}\]
will be non-zero.
**Proposition 3.4**.: _Let \(d\in\mathbb{N}\). If \(\Phi:\mathbb{Q}_{p}^{d}\to\mathbb{Q}_{p}^{d}\) is a Lipschitz map with Lipschitz constant \(L\), and \(U\subseteq\mathbb{Q}_{p}^{d}\) can be covered by \(k\) balls of radius \(r>0\), then \(\Phi(U)\) can be covered by \(\ll_{d,L}k\) balls of the same radius \(r\)._
Proof.: Let \(B(\mathbf{x},r)\) be one of the \(k\) balls that cover \(U\). For any \(\mathbf{y}\in B(\mathbf{x},r)\) since \(\Phi\) is Lipschitz we have
\[\|\Phi(\mathbf{x})-\Phi(\mathbf{y})\|_{p}\leq L\|\mathbf{x}-\mathbf{y}\|_{p} \leq Lr,\]
and so \(\Phi(B(\mathbf{x},r))\subseteq B(\Phi(\mathbf{x}),Lr)\); hence \(\Phi(U)\) can be covered by \(k\) balls of radius \(Lr\). We now show the following claim:

_For any \(\mathbf{y}\in\mathbb{Q}_{p}^{d}\) and any \(\rho,K>0\), the \(p\)-adic ball \(B_{d}(\mathbf{y},K\rho)\) can be covered by \(\ll\max\{1,K^{d}\}\) balls of radius \(\rho>0\)._

Using this we can deduce that \(\Phi(U)\) can be covered by \(\ll L^{d}k\) balls as required. To prove the claim, recall by the properties of the ultrametric norm that any two balls are either disjoint or one contains the other; that is, the intersection is either empty or full. If \(1\geq K>0\) then trivially \(B(\mathbf{y},\rho)\) is a cover of \(B(\mathbf{y},K\rho)\), so the number of balls is \(\ll 1\). So assume \(K>1\). Let \(\{B_{i}\}\) be a collection of balls of radius \(\rho\) that cover \(B(\mathbf{y},K\rho)\). We can assume this collection is disjoint and finite by the properties of the ultrametric norm and since \(B(\mathbf{y},K\rho)\) is bounded. Furthermore we can assume
\[B(\mathbf{y},K\rho)=\bigcup_{i}B_{i}.\]
Now by properties of the \(p\)-adic \(d\)-dimensional Haar measure \(\mu_{p,d}\) we have that
\[(K\rho)^{d}\asymp\mu_{p,d}\left(B(\mathbf{y},K\rho)\right)=\sum_{i}\mu_{p,d} \left(B_{i}\right)\asymp\sum_{i}\rho^{d}\]
Hence the cardinality of the set of balls \(\{B_{i}\}\) is \(\asymp K^{d}\).
Proof of Lemma 3.2.: We proceed as in the proof of [21, Lemma 2.4]. By translation, without loss of generality, we may assume \(\mathbf{x}=\mathbf{0}\). For simplicity let \(\kappa:=\|\nabla\phi(\mathbf{0})\|_{p}\). Clearly \(\kappa>0\) as otherwise by the assumption of the lemma \(\nabla^{2}\phi\) vanishes on an open set which we can easily exclude. By rotation we may assume \(\nabla\phi(\mathbf{0})=\kappa\mathbf{e}_{d}\), where \(\mathbf{e}_{d}\) denotes the \(d\)-th canonical base vector in \(\mathbb{Q}_{p}^{d}\). Then \(\|\nabla\phi(\mathbf{0})\|_{p}=\kappa\). Now consider the map
\[\Phi:\mathscr{B}:=B_{d}(\mathbf{0},\alpha)\to\mathbb{Q}_{p}^{d}\]
defined by the formula
\[\Phi(\mathbf{y})=(\kappa y_{1},\ldots,\kappa y_{d-2},\kappa y_{d-1},\phi( \mathbf{y})),\]
for \(\mathbf{y}=(y_{1},\ldots,y_{d})\in\mathbb{Q}_{p}^{d}\). We have
\[\nabla\Phi(\mathbf{0})=\kappa I_{d}\]
with \(I_{d}\) the \(d\times d\) identity matrix. On the other hand
\[\sup_{\mathbf{w}\in\mathscr{B}}\|\nabla^{2}\Phi(\mathbf{w})\|_{p}=\sup_{ \mathbf{w}\in\mathscr{B}}\|\nabla^{2}\phi(\mathbf{w})\|_{p}\leq\frac{\|\nabla \Phi(\mathbf{0})\|_{p}}{C\alpha}=\frac{\kappa}{C\alpha}.\]
Denote by \(B(\kappa I_{d},\kappa/(C\alpha))\) the set of \(d\times d\) matrices with distance at most \(\kappa/(C\alpha)\) from \(\kappa I_{d}\), in terms of the maximum norm on \(\mathbb{Q}_{p}^{d^{2}}\) (maximum norm \(|\cdot|_{p}\) of the entries). Since \(\mathscr{B}\) has diameter \(2\alpha\), it follows from the Mean Value Inequality (Theorem 2.2) applied to the gradient \(\nabla\Phi\) that \(\nabla\Phi(\mathbf{w})\in B(\kappa I_{d},2\alpha\cdot\kappa/(C\alpha))=B(\kappa I_{d},2\kappa/C)\) for all \(\mathbf{w}\in\mathscr{B}\). Thus by the Mean Value Inequality applied to \(\Phi\), and the strong triangle inequality, for all \(\mathbf{y},\mathbf{w}\in\mathscr{B}\)
\[\|\Phi(\mathbf{w})-\Phi(\mathbf{y})\|_{p}\leq\max\{\|\kappa I_{d}\|_{p},2 \kappa/C\}\|\mathbf{w}-\mathbf{y}\|_{p}=\max\{\kappa,2\kappa/C\}\|\mathbf{w}- \mathbf{y}\|_{p}.\]
The invertible \(d\times d\) matrices form an open subset of \(\mathbb{Q}_{p}^{d\times d}\) by Proposition 3.3 and \(\kappa>0\), so we see from the \(p\)-adic Inverse Function Theorem 2.1 that if \(C>2\) is sufficiently large, then \(B(\kappa I_{d},2\kappa/C)\) consists of invertible matrices. Hence \(\Phi\) is bi-Lipschitz with a uniform bi-Lipschitz constant. Note
\[S(\phi,\delta)=\Phi^{-1}\big{(}B_{d-1}(\mathbf{0},\alpha)\times B_{1}(0,\delta\kappa)\big{)}\]
and it is clear that the latter set \(B_{d-1}(\mathbf{0},\alpha)\times B_{1}(0,\delta\kappa)\) can be covered by
\[\asymp\kappa(\delta/\alpha)^{-(d-1)}=\kappa(\alpha/\delta)^{d-1}\]
balls of radius \(\delta\). Since \(\kappa\) is fixed the proof is finished via Proposition 3.4, when we allow the implied constant to depend only on the bi-Lipschitz constant.
We now use Lemma 3.2 in each of the possible cases outlined by Lemma 3.1 in order to finish the proof of Lemma 2.3.
**Case (a).** In this case observe that \(S(\mathbf{a})\) is a \(p\)-adic \(\Psi(\mathbf{a})\)-thickened hyperplane (of \(\mathbb{Q}_{p}^{d}\)) so can be covered by \(\asymp\Psi(\mathbf{a})^{-(d-1)}\) balls of radius \(\Psi(\mathbf{a})\). Hence
\[\mathcal{H}_{\infty}^{f}(S(\mathbf{a}))\ll\Psi(\mathbf{a})^{-(d-1)}f(\Psi( \mathbf{a})).\]
**Case (b).** By the assumption and conclusion of case (b) we have
\[\|\nabla h_{\mathbf{a}}(\mathbf{x})\|_{p} \gg\|\mathbf{a}_{2}\|_{p}^{-1}\|\mathbf{a}_{1}\|_{p}\] \[\geq\sup_{\mathbf{w}\in K}\|\widetilde{a}_{2}\mathbf{a}_{2} \cdot\nabla\mathbf{g}(\mathbf{w})\|_{p}\] \[\gg_{\mathbf{a}}1\] \[\gg\sup_{\mathbf{w}\in K}\|\nabla^{2}h_{\mathbf{a}}(\mathbf{w}) \|_{p}.\]
The final inequality is due to the observation from Lemma 3.1 that \(\|\nabla^{2}h_{\mathbf{a}}(\mathbf{x})\|_{p}\ll 1\) for all \(\mathbf{x}\in K\), and the constant in the penultimate inequality is dependent on
\[\widetilde{a}_{2}\mathbf{a}_{2}\in\{\mathbf{z}\in\mathbb{Z}^{n-d}:\|\mathbf{ z}\|_{p}=1\}\]
by (8). Since \(\widetilde{a}_{2}\mathbf{a}_{2}\) ranges over a compact set, we can choose a uniform lower bound. This bound is strictly positive, since \(\widetilde{a}_{2}\mathbf{a}_{2}\cdot\nabla\mathbf{g}(\mathbf{w})=0\) for all \(\mathbf{w}\in K\) would imply \(\nabla^{2}h_{\mathbf{a}}(\mathbf{w})=\mathbf{0}\), a case we have already excluded. Thus, for suitably chosen \(\epsilon>0\), dependent on the implied constants in the inequalities above and on the constant \(C>0\) appearing in Lemma 3.2, let
\[\alpha=\epsilon,\quad\delta=\frac{\Psi(\mathbf{a})}{\widetilde{a}_{2}\|\nabla h _{\mathbf{a}}(\mathbf{x})\|_{p}}\asymp\frac{\Psi(\mathbf{a})}{\|\mathbf{a}_{1 }\|_{p}},\quad\phi=h_{\mathbf{a}}.\]
Here, for the equivalence in \(\delta\), we used that \(\|\nabla h_{\mathbf{a}}(\mathbf{x})\|_{p}\ll 1\) uniformly, since \(\nabla h_{\mathbf{a}}\) is continuous on the compact set \(K\). Then by Lemma 3.2
\[S(\mathbf{a})\cap B(\mathbf{x},\epsilon)\]
can be covered by
\[\asymp\epsilon^{(d-1)}\|\mathbf{a}_{1}\|_{p}^{(d-1)}\Psi(\mathbf{a})^{-(d-1)}\]
balls of radius \(\asymp\Psi(\mathbf{a})\|\mathbf{a}_{1}\|_{p}^{-1}\). Thus
\[\mathcal{H}_{\infty}^{f}(S(\mathbf{a}))\ll\|\mathbf{a}_{1}\|_{p}^{(d-1)}\Psi( \mathbf{a})^{-(d-1)}f\left(\frac{\Psi(\mathbf{a})}{\|\mathbf{a}_{1}\|_{p}}\right)\]
\[\stackrel{\text{condition (Ip)}}{\ll}\|\mathbf{a}_{1}\|_{p}^{(d-1)-s}\Psi(\mathbf{a})^{-(d-1)}f\left(\Psi(\mathbf{a})\right)\qquad(\text{since }\|\mathbf{a}_{1}\|_{p}^{-1}\geq 1).\]
Since
\[\|\mathbf{a}_{2}\|_{p}^{-1}\|\mathbf{a}_{1}\|_{p}>\sup_{\mathbf{w}\in K}\| \widetilde{a}_{2}\mathbf{a}_{2}\cdot\nabla\mathbf{g}(\mathbf{w})\|\gg 1\]
we have that \(\|\mathbf{a}_{1}\|_{p}\asymp\|(\mathbf{a}_{1},\mathbf{a}_{2})\|_{p}\), so
\[\mathcal{H}_{\infty}^{f}(S(\mathbf{a}))\ll\|(\mathbf{a}_{1},\mathbf{a}_{2})\| _{p}^{(d-1)-s}\Psi(\mathbf{a})^{-(d-1)}f\left(\Psi(\mathbf{a})\right).\]
**Case (c).** Fix \(k\in\mathbb{Z}\) and consider the \(p\)-adic annulus
\[A_{k}=B_{d}(\mathbf{v},p^{-k})\backslash B_{d}(\mathbf{v},p^{-(k+1)}).\]
Note that \(\{A_{k}\}_{k\in\mathbb{Z}}\) partitions \(\mathbb{Q}_{p}^{d}\backslash\{\mathbf{v}\}\) and since \(K\) is bounded there exists some \(k_{0}\in\mathbb{Z}\) such that
\[\bigcup_{k\geq k_{0}}A_{k}\supseteq K\backslash\{\mathbf{v}\}. \tag{10}\]
Observe that \(\|\nabla h_{\mathbf{a}}(\mathbf{x})\|_{p}\asymp\|\mathbf{x}-\mathbf{v}\|_{p} \asymp p^{-k}\) for all \(\mathbf{x}\in A_{k}\). Note that by Lemma 3.1
\[\|\nabla^{2}h_{\mathbf{a}}(\mathbf{x})\|_{p}\ll 1.\]
Choose suitable \(\epsilon>0\) such that
\[\|\nabla h_{\mathbf{a}}(\mathbf{x})\|_{p}\geq C(\epsilon p^{-k})\sup_{ \mathbf{w}\in K}\|\nabla^{2}h_{\mathbf{a}}(\mathbf{w})\|_{p}\]
where \(C>0\) comes from Lemma 3.2. Letting
\[\alpha=\epsilon p^{-k},\quad\delta=\frac{\Psi(\mathbf{a})}{\widetilde{a}_{2} \|\nabla h_{\mathbf{a}}(\mathbf{x})\|_{p}}\asymp p^{k}\frac{\Psi(\mathbf{a})} {\widetilde{a}_{2}},\quad\phi=h_{\mathbf{a}}\]
then by Lemma 3.2 we have that
\[S(h,\delta)=S(\mathbf{a})\cap B_{d}(\mathbf{x},\epsilon p^{-k})\]
can be covered by
\[\asymp\left(\frac{\alpha}{\delta}\right)^{d-1}\asymp\epsilon^{d-1}\widetilde{a }_{2}^{\,d-1}p^{-2k(d-1)}\Psi(\mathbf{a})^{-(d-1)}\]
balls of radius \(\asymp p^{k}\frac{\Psi(\mathbf{a})}{\widetilde{a}_{2}}\). Observe that \(B_{d}(\mathbf{v},p^{-k})\supseteq A_{k}\) can be covered by \(p^{d}\) disjoint balls of radius \(p^{-(k+1)}\). To see this, first take \(\widetilde{\mathbf{v}}\in\mathbb{Q}^{d}\) to be the rational vector obtained by cutting off in each component the Hensel digits from position \(k+1\) onwards (which is of the form \(\widetilde{\mathbf{v}}=N/p^{a}\) for some \(N\in\mathbb{Z}^{d},a\in\mathbb{N}_{0}\), with \(p^{a}=\|\mathbf{v}\|_{p}\), thus \(a=0\) and \(\widetilde{\mathbf{v}}\in\mathbb{Z}^{d}\) if \(\mathbf{v}\in\mathbb{Z}_{p}^{d}\)). Then consider the \(p^{d}\) balls of radius \(p^{-(k+1)}\) with centres \(\widetilde{\mathbf{v}}+tp^{k+1}\) for \(t\in\{0,1,\dots,p-1\}^{d}\). We remark that
one of these balls is \(B(\mathbf{v},p^{-(k+1)})\): indeed \(\|\mathbf{v}-(\widetilde{\mathbf{v}}+tp^{k+1})\|_{p}\leq p^{-(k+1)}\) for some \(t\in\{0,1,\ldots,p-1\}^{d}\), and in \(p\)-adic space if \(x\in B(y,r)\) then \(B(y,r)=B(x,r)\). So in fact we only require \(p^{d}-1\) balls to cover \(A_{k}\). Thus
\[\mathcal{H}_{\infty}^{f}(S(\mathbf{a})\cap A_{k})\ll(p^{d}-1)p^{-2k(d-1)} \widetilde{a_{2}}^{\,(d-1)}\Psi(\mathbf{a})^{-(d-1)}f\left(p^{k}\frac{\Psi( \mathbf{a})}{\widetilde{a_{2}}}\right).\]
So, by (10),
\[\mathcal{H}_{\infty}^{f}(S(\mathbf{a})) \ll\widetilde{a_{2}}^{\,(d-1)}\Psi(\mathbf{a})^{-(d-1)}\sum_{k \geq k_{0}}p^{-2k(d-1)}f\left(p^{k}\frac{\Psi(\mathbf{a})}{\widetilde{a_{2}}}\right)\] \[\stackrel{{\text{condition (Ip)}}}{{\ll}}\ \widetilde{a_{2}}^{\,(d-1)-s}\Psi( \mathbf{a})^{-(d-1)}f\left(\Psi(\mathbf{a})\right)\sum_{k\geq k_{0}}p^{-2k(d- 1)+ks}\] \[\stackrel{{ s<2(d-1)}}{{\ll}}\widetilde{a_{2}}^{\,(d -1)-s}\Psi(\mathbf{a})^{-(d-1)}f\left(\Psi(\mathbf{a})\right).\]
Note that in order for such \(\mathbf{v}\in K\) appearing in case (c) to exist we have that
\[\nabla h_{\mathbf{a}}(\mathbf{v})=0\quad\implies\quad\widetilde{a_{2}} \mathbf{a}_{1}=\widetilde{a_{2}}\mathbf{a}_{2}\cdot\nabla\mathbf{g}(\mathbf{v}).\]
By the strong triangle inequality, we can deduce that
\[\|\mathbf{a}_{1}\|_{p}=\|\mathbf{a}_{2}\cdot\nabla\mathbf{g}(\mathbf{v})\|_{ p}\leq\|\mathbf{a}_{2}\|_{p}\cdot\|\nabla\mathbf{g}(\mathbf{v})\|_{p}\ll\| \mathbf{a}_{2}\|_{p},\]
and so \(\widetilde{a_{2}}\stackrel{{\text{def}}}{{=}}\|\mathbf{a}_{2}\|_ {p}\asymp\|(\mathbf{a}_{1},\mathbf{a}_{2})\|_{p}\). Hence
\[\mathcal{H}_{\infty}^{f}(S(\mathbf{a}))\ll\|(\mathbf{a}_{1},\mathbf{a}_{2})\| _{p}^{(d-1)-s}\Psi(\mathbf{a})^{-(d-1)}f\left(\Psi(\mathbf{a})\right).\]
**Case (d).** Since \(\|\nabla h_{\mathbf{a}}(\mathbf{x})\|_{p}\asymp 1\) we have that, for suitably chosen \(C,\epsilon>0\),
\[\|\nabla h_{\mathbf{a}}(\mathbf{x})\|_{p}\geq C\epsilon\sup_{\mathbf{w}\in K }\|\nabla^{2}h_{\mathbf{a}}(\mathbf{w})\|_{p}.\]
Applying Lemma 3.2 with
\[\alpha=\epsilon,\quad\delta=\frac{\Psi(\mathbf{a})}{\widetilde{a_{2}}\|\nabla h _{\mathbf{a}}(\mathbf{x})\|_{p}}\asymp\frac{\Psi(\mathbf{a})}{\widetilde{a_{2 }}},\quad\phi=h_{\mathbf{a}}\]
we have that \(S(\mathbf{a})\) can be covered by
\[\asymp\epsilon^{(d-1)}\widetilde{a_{2}}^{\,(d-1)}\Psi(\mathbf{a})^{-(d-1)}\]
balls of radius \(\asymp\frac{\Psi(\mathbf{a})}{\widetilde{a_{2}}}\). Thus
\[\mathcal{H}_{\infty}^{f}(S(\mathbf{a})) \ll\widetilde{a_{2}}^{\,(d-1)}\Psi(\mathbf{a})^{-(d-1)}f\left( \frac{\Psi(\mathbf{a})}{\widetilde{a_{2}}}\right)\] \[\stackrel{{\text{condition (Ip)}}}{{\ll}}\widetilde{a_{2}}^{\,(d-1)-s} \Psi(\mathbf{a})^{-(d-1)}f(\Psi(\mathbf{a})).\]
Note that in this case we have that
\[\|\mathbf{a}_{1}\|_{p}\leq\sup_{\mathbf{w}\in K}\|\mathbf{a}_{2}\cdot\nabla g (\mathbf{w})\|_{p}\ll\|\mathbf{a}_{2}\|_{p}\stackrel{{\text{def}} }{{=}}\widetilde{a_{2}}\]
and so \(\widetilde{a_{2}}\asymp\|(\mathbf{a}_{1},\mathbf{a}_{2})\|_{p}\). Thus
\[\mathcal{H}_{\infty}^{f}(S(\mathbf{a}))\ll\|(\mathbf{a}_{1},\mathbf{a}_{2})\| _{p}^{(d-1)-s}\Psi(\mathbf{a})^{-(d-1)}f(\Psi(\mathbf{a})).\]
Combining the outcomes of the above four cases, we have that for any \(\mathbf{a}=(a_{0},a_{1},\ldots,a_{n})=(a_{0},\mathbf{a}_{1},\mathbf{a}_{2})\in \mathbb{Z}\times(\mathbb{Z}^{n}\backslash\{\mathbf{0}\})\),
\[\mathcal{H}_{\infty}^{f}(S(\mathbf{a}))\ll\|(\mathbf{a}_{1},\mathbf{a}_{2})\| _{p}^{(d-1)-s}\Psi(\mathbf{a})^{-(d-1)}f(\Psi(\mathbf{a})).\]
The proof of Lemma 2.3 is complete.
## 4. Completing the proof of Theorem 1.4
Note that for any subset of integers \(Z\subseteq\mathbb{Z}^{n+1}\backslash\{\mathbf{0}\}\) we have
\[E=E(Z)=\{\mathbf{x}\in K:(\mathbf{x},\mathbf{g}(\mathbf{x}))\in D^{\theta}_{n,p}(\Psi,Z)\}=\limsup_{\mathbf{a}\in Z}S(\mathbf{a}).\]
The map \(G:\mathbf{x}\mapsto(\mathbf{x},\mathbf{g}(\mathbf{x}))\) is bi-Lipschitz since \(K\) is bounded, and so
\[\mathcal{H}^{f}(G(E))\asymp\mathcal{H}^{f}(D^{\theta}_{n,p}(\Psi,Z)\cap\{( \mathbf{x},\mathbf{g}(\mathbf{x})):\mathbf{x}\in K\})\asymp\mathcal{H}^{f}(E).\]
Now, by the Hausdorff-Cantelli Lemma, we have that
\[\mathcal{H}^{f}(E)=0\quad\text{ if }\quad\sum_{\mathbf{a}\in Z}\mathcal{H}^{f}_ {\infty}(S(\mathbf{a}))<\infty,\]
and so
\[\mathcal{H}^{f}(D^{\theta}_{n,p}(\Psi,Z)\cap\mathcal{M})=0\quad\text{ if }\quad\sum_{\mathbf{a}\in Z}\mathcal{H}^{f}_{\infty}(S(\mathbf{a}))<\infty.\]
By Lemma 2.3 the above summation can be written as
\[\sum_{\mathbf{a}\in\{Z:(\mathbf{a}_{1},\mathbf{a}_{2})\neq\mathbf{0}\}}\|( \mathbf{a}_{1},\mathbf{a}_{2})\|^{(d-1)-s}_{p}\Psi(\mathbf{a})^{-(d-1)}f(\Psi (\mathbf{a}))\ +\ \sum_{\mathbf{a}\in\{Z:(\mathbf{a}_{1},\mathbf{a}_{2})=\mathbf{0}\}}\Psi( \mathbf{a})^{-(d-1)}f(\Psi(\mathbf{a})). \tag{11}\]
Since
\[\sum_{\mathbf{a}\in Z}\Psi(\mathbf{a})^{-(d-1)}f(\Psi(\mathbf{a}))<\infty\]
by assumption (in whichever case of \(Z\) is chosen) the second summation is convergent, so it remains to prove convergence of the first sum. This summation can be rewritten as
\[\sum_{k\in\mathbb{N}_{0}}\sum_{\mathbf{a}\in Z(p,k)}p^{k(s-(d-1))}\Psi( \mathbf{a})^{-(d-1)}f(\Psi(\mathbf{a})),\]
where
\[Z(p,k)=\{(a_{0},\ldots,a_{n})\in Z:p^{k}\mid a_{i}\ \forall\,1\leq i\leq n\ \ \&\ \ \exists\,1\leq j\leq n:\ p^{k+1}\nmid a_{j}\}.\]
Note that the index \(0\) is excluded from the divisibility conditions, so in other words
\[Z(p,k)=\{(a_{0},\ldots,a_{n})\in Z:\|(\mathbf{a}_{1},\mathbf{a}_{2})\|_{p}=p^{ -k}\}.\]
We will see that we can restrict ourselves to finite partial sums in all three cases. In the following arguments we will frequently implicitly use the well-known fact that
\[|a+b|_{p}=\max\{|a|_{p},|b|_{p}\},\qquad\text{if }|a|_{p}\neq|b|_{p}.\]
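For example, with \(p=3\) one has \(|3+1|_{3}=\max\{|3|_{3},|1|_{3}\}=\max\{3^{-1},1\}=1\); when the absolute values coincide the identity can fail, e.g. \(|1+2|_{3}=3^{-1}<1=\max\{|1|_{3},|2|_{3}\}\).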
Now let us consider the cases in our distinction one by one:
1. \(Z=Z(1)\) (pairwise coprime) and inhomogeneous: Then \(\|(\mathbf{a}_{1},\mathbf{a}_{2})\|_{p}=1\), and so (11) becomes the usual summation as in (i), which is convergent by assumption.
2. \(Z=Z(2)\) (coprime): If \(p|a_{0}\) then there exists some \(1\leq i\leq n\) such that \(p\nmid a_{i}\), and so \(\|(\mathbf{a}_{1},\mathbf{a}_{2})\|_{p}=1\). Thus convergence of the summation is immediate by assumption. Henceforth assume \(|a_{0}|_{p}=1\). We may assume there exists \(\ell\in\mathbb{N}_{0}\) such that \[\|(\mathbf{x},\mathbf{g}(\mathbf{x}))\|_{p}\leq p^{\ell}\quad\forall\ \mathbf{x}\in K,\] (12) since this is true for any compact subset of \(K\) and by sigma-additivity of measures. First assume \(\theta=0\). The strong triangle inequality, and the fact that \(\Psi(\mathbf{a})<1\) for sufficiently large \(\|\mathbf{a}\|\), implies that for \(S(\mathbf{a})\) to be non-empty we at least require \[|\mathbf{a}_{1}\cdot\mathbf{x}+\mathbf{a}_{2}\cdot\mathbf{g}(\mathbf{x})|_{p}=|a_{0}|_{p}=1.\] If \(\|(\mathbf{a}_{1},\mathbf{a}_{2})\|_{p}\leq p^{-\ell-1}\) then this cannot be true for any \(\mathbf{x}\in K\) since \[|\mathbf{a}_{1}\cdot\mathbf{x}+\mathbf{a}_{2}\cdot\mathbf{g}(\mathbf{x})|_{p}\leq\|(\mathbf{a}_{1},\mathbf{a}_{2})\|_{p}\cdot\|(\mathbf{x},\mathbf{g}(\mathbf{x}))\|_{p}\leq p^{-1}<1\]
so we must have that \(\|(\mathbf{a}_{1},\mathbf{a}_{2})\|_{p}>p^{-\ell-1}\), hence \(\|(\mathbf{a}_{1},\mathbf{a}_{2})\|_{p}\geq p^{-\ell}\). Thus, in the cover of \(E\) we only need to consider \(S(\mathbf{a})\) with \(\|(\mathbf{a}_{1},\mathbf{a}_{2})\|_{p}=p^{-k}\) for \(k=0,\ldots,\ell\). That is, we only need to show convergence of the summation \[\sum_{k=0}^{\ell}\sum_{\mathbf{a}\in Z(p,k)}p^{k(s-(d-1))}\Psi(\mathbf{a})^{-(d-1)}f(\Psi(\mathbf{a})).\] (13) Note that for each \(k=0,\ldots,\ell\) the inner sum is convergent (by assumption), so since the outer sum is finite we indeed have convergence. Now assume \(\theta\neq 0\) and \(|\theta|_{p}\neq 1\). Let \(p^{y}:=|\theta|_{p}\) for an integer \(y<0\) (note we assume \(\theta\in\mathbb{Q}_{p}\)). We can assume (12) so that if \(\|(\mathbf{a}_{1},\mathbf{a}_{2})\|_{p}\leq p^{-\ell-1}\) we have \[\|(\mathbf{a}_{1},\mathbf{a}_{2})\cdot(\mathbf{x},\mathbf{g}(\mathbf{x}))+a_{0}+\theta\|_{p}\leq\max\{\|(\mathbf{a}_{1},\mathbf{a}_{2})\|_{p}\cdot\|(\mathbf{x},\mathbf{g}(\mathbf{x}))\|_{p},|a_{0}|_{p},|\theta|_{p}\}\] (14) and there is in fact equality. To see this observe that \[\|(\mathbf{a}_{1},\mathbf{a}_{2})\|_{p}\cdot\|(\mathbf{x},\mathbf{g}(\mathbf{x}))\|_{p}\leq p^{-\ell-1}p^{\ell}=p^{-1}<1,\] moreover \(p^{y}<1\) as we noticed \(y<0\), and \(|a_{0}|_{p}=1\). Hence \[\|(\mathbf{a}_{1},\mathbf{a}_{2})\cdot(\mathbf{x},\mathbf{g}(\mathbf{x}))+a_{0}+\theta\|_{p}=\max\{1,p^{y}\}=1\] is uniformly bounded from below on \(K\). Since \(\Psi\to 0\) there are only finitely many solutions (if any) \(\mathbf{a}\) to \(\|(\mathbf{a}_{1},\mathbf{a}_{2})\cdot(\mathbf{x},\mathbf{g}(\mathbf{x}))+a_{0}+\theta\|_{p}<\Psi(\mathbf{a})\). This argument again shows we may restrict to \(\|(\mathbf{a}_{1},\mathbf{a}_{2})\|_{p}\geq p^{-\ell}\) leading to a finite sum \[\sum_{k=0}^{\ell}\sum_{\mathbf{a}\in Z(p,k)}p^{k(s-(d-1))}\Psi(\mathbf{a})^{-(d-1)}f(\Psi(\mathbf{a})),\] which converges for the same reasons as the sum for \(\theta=0\).
3. \(Z=\mathbb{Z}^{n+1}\backslash\{\mathbf{0}\}\), \(\theta=0\), and \(\Psi\) satisfies \(p\Psi(\mathbf{a})\leq\Psi(p^{-1}\mathbf{a})\): Again, we may assume that there exists \(\ell\in\mathbb{N}\) such that (12) holds. Suppose that \(0\neq\|(\mathbf{a}_{1},\mathbf{a}_{2})\|_{p}\leq p^{-\ell-1}\). We claim that in order for \(S(\mathbf{a})\) to be non-empty we need \(p|a_{0}\). From this we will deduce that \(S(p^{-1}\mathbf{a})\supseteq S(\mathbf{a})\). By (12) we have \[|\mathbf{a}_{1}\cdot\mathbf{x}+\mathbf{a}_{2}\cdot\mathbf{g}(\mathbf{x})+a_{0}|_{p} \leq\max\left\{|\mathbf{a}_{1}\cdot\mathbf{x}+\mathbf{a}_{2}\cdot\mathbf{g}(\mathbf{x})|_{p},|a_{0}|_{p}\right\}\] \[\leq\max\{\|(\mathbf{a}_{1},\mathbf{a}_{2})\|_{p}\cdot\|(\mathbf{x},\mathbf{g}(\mathbf{x}))\|_{p},|a_{0}|_{p}\}\] \[\leq\max\{p^{-\ell-1}p^{\ell},|a_{0}|_{p}\}\] There is equality in the above inequalities (apart from possibly the third) if \(|a_{0}|_{p}=1>p^{-1}\), so in that case \(|\mathbf{a}_{1}\cdot\mathbf{x}+\mathbf{a}_{2}\cdot\mathbf{g}(\mathbf{x})+a_{0}|_{p}=1\), whereas membership in \(S(\mathbf{a})\) requires this quantity to be less than \(\Psi(\mathbf{a})<1\) for sufficiently large \(\|\mathbf{a}\|\); a contradiction. Thus \(|a_{0}|_{p}<1\) and so \(p|a_{0}\). Hence \(p^{-1}\mathbf{a}=(\mathbf{a}_{1}^{\prime},\mathbf{a}_{2}^{\prime},a_{0}^{\prime})\in\mathbb{Z}^{n+1}\backslash\{\mathbf{0}\}\) and if \(\mathbf{x}\in S(\mathbf{a})\) then \[|\mathbf{a}_{1}^{\prime}\cdot\mathbf{x}+\mathbf{a}_{2}^{\prime}\cdot\mathbf{g}(\mathbf{x})+a_{0}^{\prime}|_{p}<p\Psi(\mathbf{a})\leq\Psi(p^{-1}\mathbf{a})\] thus \(\mathbf{x}\in S(p^{-1}\mathbf{a})\), and so \(S(\mathbf{a})\subseteq S(p^{-1}\mathbf{a})\). Since \(D_{n,p}(\Psi)\) is defined as the limsup set of \(S(\mathbf{a})\) over integer vectors \(\mathbf{a}\), this argument implies that we only need to consider integer points \(\mathbf{a}\) with \(\|(\mathbf{a}_{1},\mathbf{a}_{2})\|_{p}\geq p^{-\ell}\). This again leads us to the convergent sum (13). Lastly, if \((\mathbf{a}_{1},\mathbf{a}_{2})=\mathbf{0}\) then for \(S(\mathbf{a})\) to be non-empty we have that \[|a_{0}|_{p}<\Psi(\mathbf{a})<|a_{0}|^{-1}\,,\] which cannot be true, so in this case \(S(\mathbf{a})=\varnothing\). |
2306.02967 | Synthesis of Distributed Protocols by Enumeration Modulo Isomorphisms | Synthesis of distributed protocols is a hard, often undecidable, problem.
Completion techniques provide partial remedy by turning the problem into a
search problem. However, the space of candidate completions is still massive.
In this paper, we propose optimization techniques to reduce the size of the
search space by a factorial factor by exploiting symmetries (isomorphisms) in
functionally equivalent solutions. We present both a theoretical analysis of
this optimization as well as empirical results that demonstrate its
effectiveness in synthesizing both the Alternating Bit Protocol and Two Phase
Commit. Our experiments show that the optimized tool achieves a speedup of
approximately 2 to 10 times compared to its unoptimized counterpart. | Derek Egolf, Stavros Tripakis | 2023-06-05T15:30:13Z | http://arxiv.org/abs/2306.02967v2 | # Synthesis of Distributed Protocols by Enumeration Modulo Isomorphisms
###### Abstract
Synthesis of distributed protocols is a hard, often undecidable, problem. _Completion_ techniques provide partial remedy by turning the problem into a search problem. However, the space of candidate completions is still massive. In this paper, we propose optimization techniques to reduce the size of the search space by a factorial factor by exploiting symmetries (_isomorphisms_) in functionally equivalent solutions. We present both a theoretical analysis of this optimization as well as empirical results that demonstrate its effectiveness in synthesizing both the Alternating Bit Protocol and Two Phase Commit. Our experiments show that the optimized tool achieves a speedup of approximately 2 to 10 times compared to its unoptimized counterpart.
## 1 Introduction
Distributed protocols are at the heart of the internet, data centers, cloud services, and other types of infrastructure considered indispensable in a modern society. Yet distributed protocols are also notoriously difficult to get right, and have therefore been one of the primary application domains of formal verification [14, 18, 19, 21, 30]. An even more attractive proposition is distributed protocol _synthesis_: given a formal correctness specification \(\psi\), automatically generate a distributed protocol that satisfies \(\psi\), i.e., that is _correct-by-construction_.
Synthesis is a hard problem in general, suffering, like formal verification, from scalability and similar issues. Moreover, for distributed systems, synthesis is generally undecidable [11, 22, 28, 29]. Techniques such as program _sketching_[25, 26] remedy scalability and undecidability concerns essentially by turning the synthesis problem into a _completion_ problem [2, 3]: given an _incomplete_ system \(M_{0}\) and a specification \(\psi\), automatically synthesize a completion \(M\) of \(M_{0}\), such that \(M\) satisfies \(\psi\).
For example, the synthesis of the well-known _alternating-bit protocol_ (ABP) is considered in [4] as a completion problem: given an ABP system containing the incomplete _Sender\({}_{0}\)_ and _Receiver\({}_{0}\)_ processes shown in Fig. 1, complete these two processes (by adding but not removing any transitions, and not adding nor removing any states), so that the system satisfies a given set of requirements.
In cases where the space of all possible completions is finite, completion turns synthesis into a decidable problem.1 However, even then, the number of possible completions can be prohibitively large, even for relatively simple protocols. For instance, as explained in [4], the number of all possible completions in the ABP example is \(512^{4}\cdot 36\), i.e., approximately 2.5 trillion candidate completions.
Footnote 1: We emphasize that no generality is lost in the sense that one can augment the search for correct completions with an outer loop that keeps adding extra _empty_ states (with no incoming or outgoing transitions), which the inner completion procedure then tries to complete. Thus, we can keep searching for progressively larger systems (in terms of number of states) until a solution is found, if one exists.
Not only is the number of candidate completions typically huge, but it is often also interesting to generate not just one correct completion, but many. For instance, suppose both \(M_{1}\) and \(M_{2}\) are (functionally) correct solutions. We may want to evaluate \(M_{1}\) and \(M_{2}\) also for _efficiency_ (perhaps using a separate method) [10]. In general, we may want to synthesize (and then evaluate w.r.t. performance or other metrics) not just one, but in principle _all_ correct completions. We call this problem the completion _enumeration_ problem, which is the main focus of this paper.
Enumeration is harder than _1-completion_ (synthesis of just one correct solution), since the number of correct solutions might be very large. For instance, in the case of the ABP example described above, the number of correct completions is 16384 and it takes 88 minutes to generate all of them [4].
The key idea in this paper is to exploit the notion of _isomorphisms_ in order to reduce the number of correct completions, as well as the search space of candidate completions in general. To illustrate the idea, consider a different incomplete _Sender\({}_{0}\)_ process, shown in Fig. 2. Two possible completions of this _Sender\({}_{0}\)_ are shown in Fig. 3. Although these two completions are in principle different, they are identical except that states \(s_{3}\) and \(s_{7}\) are swapped. Our goal
Figure 1: The incomplete ABP Sender and Receiver processes of [4]
is to develop a technique which considers these two completions _equivalent up to isomorphism_, and only explores (and returns) one of them.
To achieve this goal, we adopt the _guess-check-generalize_ paradigm (GCG) [1, 2, 12, 25, 26]. In a nutshell, GCG works as follows: (1) pick a candidate completion \(M\); (2) check whether \(M\) satisfies \(\psi\): if it does, \(M\) is one possible solution to the synthesis problem; (3) if \(M\) violates \(\psi\), _prune_ the search space of possible completions by excluding a _generalization_ of \(M\), and repeat from step (1). In the most trivial case, the generalization of \(M\) contains only \(M\) itself. Ideally, however, and in order to achieve a more significant pruning of the search space,
Figure 3: Two synthesized completions of the incomplete process of Fig. 2. Observe that the two completions are identical except that states \(s_{3}\) and \(s_{7}\) are flipped.
Figure 2: An incomplete ABP Sender with permutable states \(s_{3},s_{7}\)
the generalization of \(M\) should contain many more "bad" completions which are somehow "similar" (for instance, isomorphic) to \(M\).
A naive way to generalize based on isomorphism is to keep a list of completions encountered thus far and perform an isomorphism check against every element of this list whenever a new candidate is picked. Our approach is smarter: in fact, it does not involve any isomorphism checks whatsoever. Instead, our approach guarantees that no isomorphic completions are ever picked to begin with by pruning them from the search space. This is ultimately done using syntactic transformations of completion representations. The details are left for Section 4.
Furthermore, our notion of "encountering" a completion is quite wide. Rather than just pruning completions that are isomorphic to _candidates_, we also prune completions that are isomorphic to any completion in the _generalizations of_ the candidates (with respect to some prior, unextended notion of generalization). Between the trivial approach involving isomorphism checks and our own approach are several other approaches which are good, but not excellent. Indeed, a categorization of the subtle differences between such approaches is a key contribution of this paper (see Section 4.3). These subtleties are easy to miss.
In summary, the main contributions of this paper are the following: (1) we define the 1-completion and completion-enumeration problems _modulo isomorphisms_; (2) we examine new methods to solve these problems based on the GCG paradigm; (3) we identify properties that an efficient GCG modulo isomorphisms algorithm should have; (4) we propose two instances of such an algorithm, using a naive and a sophisticated notion of generalization; (5) we evaluate our methods on the synthesis of two simple distributed protocols: the ABP and Two Phase Commit (2PC) and demonstrate speedups with respect to the unoptimized method of approximately 2 to 10 times.
## 2 Preliminaries
#### 2.0.1 Labeled Transition Systems
A (finite) _labeled transition system_ (LTS) \(M\) is a tuple \(\langle\Sigma,Q,Q_{0},\Delta\rangle\), where
* \(\Sigma\) is a finite set of transition _labels_
* \(Q\) is a finite set of _states_
* \(Q_{0}\subseteq Q\) is the set of _initial states_
* \(\Delta\subseteq Q\times\Sigma\times Q\) is the _transition relation_.
We write the transition \((p,a,q)\in\Delta\) as \(p\stackrel{{ a}}{{\to}}q\).
A _run_ of \(M\) is an infinite sequence \(q_{0}\stackrel{{ a_{0}}}{{\to}}q_{1}\stackrel{{ a_{1}}}{{\to}}q_{2}\stackrel{{ a_{2}}}{{\to}}...\), where \(q_{0}\in Q_{0}\) and for each \(i\) we have \((q_{i},a_{i},q_{i+1})\in\Delta\). The _trace_ produced by this run is \(a_{0}a_{1}a_{2}\cdots\). Semantically, an LTS \(M\) represents a set of infinite traces, denoted \(\llbracket M\rrbracket\subseteq\Sigma^{\omega}\). Specifically, a trace \(a_{0}a_{1}a_{2}\cdots\) is in \(\llbracket M\rrbracket\) exactly when there exists a run \(q_{0}\stackrel{{ a_{0}}}{{\to}}q_{1}\stackrel{{ a_{1}}}{{\to}}q_{2}\stackrel{{ a_{2}}}{{\to}}...\) of \(M\).
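As a concrete illustration, the following minimal Python sketch implements these definitions; the class and method names are illustrative only, and since traces are infinite, the sketch checks finite trace prefixes.

```
class LTS:
    """A finite labeled transition system <Sigma, Q, Q0, Delta>."""

    def __init__(self, sigma, states, initial, delta):
        self.sigma = set(sigma)      # transition labels Sigma
        self.states = set(states)    # states Q
        self.initial = set(initial)  # initial states Q0 (subset of Q)
        self.delta = set(delta)      # transitions Delta: (p, a, q) triples

    def successors(self, p, a):
        """All states q with p --a--> q in Delta."""
        return {q for (p2, b, q) in self.delta if p2 == p and b == a}

    def accepts_prefix(self, word):
        """Does some run of the LTS produce the finite prefix `word`?"""
        current = set(self.initial)
        for a in word:
            current = {q for p in current for q in self.successors(p, a)}
            if not current:
                return False
        return True


# A small chain LTS: q0 --a--> q1 --b--> q2
M = LTS({"a", "b"}, {"q0", "q1", "q2"}, {"q0"},
        {("q0", "a", "q1"), ("q1", "b", "q2")})
print(M.accepts_prefix(["a", "b"]))   # True
print(M.accepts_prefix(["b"]))        # False
```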
#### 2.0.2 Correctness Specification
We will assume that we have some formal notion of _specification_ and some formal notion of _satisfaction_ between an LTS \(M\) and a specification \(\psi\). We write \(M\vDash\psi\) to denote that \(M\) satisfies \(\psi\). Our work is agnostic to what exactly \(\psi\) might be (e.g., a temporal logic formula, etc.).
#### 2.0.3 Completions and Syntactic Constraints
Suppose that \(M\) and \(M_{0}\) are two LTSs with the same set of labels \(\Sigma\), the same set of states \(Q\), the same set of initial states \(Q_{0}\), and with transition relations \(\Delta\) and \(\Delta_{0}\), respectively. We say that \(M\)_is a completion of \(M_{0}\)_ exactly when \(\Delta_{0}\subseteq\Delta\). That is, \(M\) completes \(M_{0}\) by adding more transitions to it (and not removing any). For example, each of the two LTSs of Fig. 3 is a completion of the LTS shown in Fig. 2.
Often, we wish to impose some constraints on the kind of synthesized processes that we want to obtain during automated synthesis, other than the global constraints imposed on the system by the correctness specification. For example, in the formal distributed protocol model proposed in [4], synthesized processes such as the ABP _Sender_ and _Receiver_ are constrained to satisfy a number of requirements, including absence of deadlocks, determinism of the transition relation, the constraint that each state is either an _input state_ (i.e., it only receives inputs) or an _output state_ (i.e., it emits a unique output), the constraint that input states are _input-enabled_ (i.e., they do not block any inputs), and so on. Such properties are often syntactic or structural and can be inferred statically by observing the transition relation. The fact that an LTS is a completion of another LTS can also be captured by such constraints.
Constraints like the above are application-specific, and our approach is agnostic to their precise form and meaning. We will therefore abstract them away, and assume that there is a propositional logic formula \(\Phi\) which captures the set of all syntactically well-formed candidate completions. The variable space of \(\Phi\) and its precise meaning is application-specific. We will give a detailed construction of \(\Phi\) for LTS in Section 3. We write \(M\vDash\Phi\) when LTS \(M\) satisfies the _syntactic constraints \(\Phi\)_. Let \(\llbracket\Phi\rrbracket=\{M\mid M\vDash\Phi\}\).
We say that an LTS is _correct_ if it satisfies both the syntactic constraints imposed by \(\Phi\) and the semantic constraints imposed by \(\psi\).
#### 2.0.4 Computational Problems
Problem 1 (Model-Checking): Given LTS \(M\), specification \(\psi\), and constraints \(\Phi\), check whether \(M\vDash\psi\) and \(M\vDash\Phi\).
A solution to the model-checking problem is an algorithm, mc, such that for all \(M,\Phi,\psi\), if \(M\vDash\Phi\) and \(M\vDash\psi\) then \(\textsc{mc}(M,\Phi,\psi)=1\); otherwise, \(\textsc{mc}(M,\Phi,\psi)=0\).
Problem 2 (Synthesis): Given specification \(\psi\) and constraints \(\Phi\), find, if one exists, LTS \(M\) such that \(M\vDash\psi\) and \(M\vDash\Phi\).
Problem 3 (Completion): Given LTS \(M_{0}\), specification \(\psi\), and constraints \(\Phi\), find, if one exists, a completion \(M\) of \(M_{0}\) such that \(M\vDash\psi\) and \(M\vDash\Phi\).
Problem 4 (Completion enumeration): Given LTS \(M_{0}\), specification \(\psi\), and constraints \(\Phi\), find all completions \(M\) of \(M_{0}\) such that \(M\vDash\psi\) and \(M\vDash\Phi\).
## 3 The Guess-Check-Generalize Paradigm
In this section we first propose a generic GCG algorithm and reason about its correctness (Section 3.1). We then show how to instantiate this algorithm to solve Problems 3 and 4 (Section 3.2).
### A Generic GCG Algorithm and its Correctness
Algorithm 1 is a formal description of a generic GCG algorithm. The algorithm takes as input: (1) a set of syntactic constraints in the form of a propositional formula \(\Phi\), as described in Section 2; (2) a specification \(\psi\) as described in Section 2; and (3) a _generalizer_ function \(\gamma\), described below.
```
1  while \(\Phi\) is satisfiable do
2      \(\sigma:=\textsc{sat}(\Phi)\);
3      if \(\textsc{mc}(M_{\sigma},\Phi,\psi)=1\) then
4          return \(\sigma\);
5          \(\Phi:=\Phi\land\neg\sigma\);
6      else
7          \(\Phi:=\Phi\land\neg\gamma(\sigma)\);
8
```
**Algorithm 1**\(\textsc{gcg}[\Phi,\psi,\gamma]\)
\(\Phi\) is a propositional logic formula (over a certain set of boolean variables that depends on the application domain at hand) encoding all possible syntactically valid completions. Every satisfying assignment \(\sigma\) of \(\Phi\) corresponds to one completion, which we denote as \(M_{\sigma}\). Observe that gcg does not explicitly take an initial (incomplete) model \(M_{0}\) as input: this omission is not a problem because \(M_{0}\) can be encoded in \(\Phi\), as mentioned in Section 2. We explain specifically how to do that in the case of LTS in Section 3.2.
The algorithm works as follows: while \(\Phi\) is satisfiable: Line 2: pick a candidate completion \(\sigma\) allowed by \(\Phi\) by calling a SAT solver. Line 3: model-check the corresponding model \(M_{\sigma}\) against \(\psi\) (by definition, \(M_{\sigma}\) satisfies \(\Phi\) because \(\sigma\) satisfies \(\Phi\)). Line 4: if \(M_{\sigma}\) satisfies \(\psi\) then we have found a correct model: we can return it and terminate if we are solving Problem 3, or return it and continue our search for additional correct models if we are solving Problem 4. In the latter case, in line 5 we exclude \(\sigma\) from \(\Phi\) (slightly abusing notation, we treat \(\sigma\) as a formula satisfied exactly and only by \(\sigma\), so that \(\neg\sigma\) is the formula satisfied by all assignments except \(\sigma\)). Line 7: if \(M_{\sigma}\) violates \(\psi\), then we exclude from \(\Phi\) the _generalization_\(\gamma(\sigma)\) of \(\sigma\), and continue our search.
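As an illustration of this control flow, the following is a minimal executable Python sketch of the enumeration variant on top of the Z3 API; here `model_check` and `generalize` are illustrative stand-ins for mc and \(\gamma\), and the toy instance at the end is purely for demonstration.

```
from z3 import And, Bool, Not, Or, Solver, is_true, sat

def gcg(variables, phi, model_check, generalize):
    """Enumerate all assignments of `phi` whose models pass `model_check`.
    `generalize` must be a proper generalizer: its result contains the bad
    assignment itself and excludes every correct assignment."""
    solver = Solver()
    solver.add(phi)
    solutions = []
    while solver.check() == sat:                 # line 1: Phi is satisfiable
        m = solver.model()                       # line 2: pick a candidate
        sigma = [(v, is_true(m.eval(v, model_completion=True)))
                 for v in variables]
        sigma_f = And([v if b else Not(v) for v, b in sigma])
        if model_check(dict((str(v), b) for v, b in sigma)):   # line 3
            solutions.append(sigma)              # line 4: a correct completion
            solver.add(Not(sigma_f))             # line 5: exclude sigma only
        else:
            solver.add(Not(generalize(sigma)))   # line 7: prune generalization
    return solutions

# Toy instance: the "specification" demands that variable x be true.
x, y = Bool("x"), Bool("y")
sols = gcg([x, y], Or(x, y),
           model_check=lambda s: s["x"],
           generalize=lambda sigma: Not(x))  # proper: covers exactly the bad region
print(len(sols))  # 2 solutions: {x}, {x, y}
```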
**Generalizers** A _generalizer_ is a function \(\gamma\) which takes an assignment \(\sigma\) and returns a propositional logic formula \(\gamma(\sigma)\) that encodes all "bad" assignments that we wish to exclude from \(\Phi\). Ideally, however, \(\gamma(\sigma)\) will encode many more assignments (and therefore candidate completions), so as to prune as large a part of the search space as possible. A concrete implementation of \(\gamma\) may require additional information other than just \(\sigma\). For example, \(\gamma\) may consult the specification \(\psi\), counter-examples returned by the model-checker (which are themselves a function of \(\psi\) and \(\sigma\)), and so on. We avoid including all this information in the inputs of \(\gamma\) to ease presentation. We note that \(\psi\) does not change during a
run of Algorithm 1 and therefore \(\psi\) can be "hardwired" into \(\gamma\) without loss of generality.
A valid generalizer should include the assignment being generalized and it should only include bad assignments (i.e., it should exclude correct completions). Formally, a generalizer \(\gamma\) is said to be _proper_ if for all \(\sigma\) such that \(\sigma\vDash\Phi\) and \(M_{\sigma}\not\vDash\psi\), the following conditions hold: (1) _Self-inclusion_: \(\sigma\vDash\gamma(\sigma)\), and (2) _Correct-exclusion_: for any \(\varrho\), if \(\varrho\vDash\Phi\) and \(M_{\varrho}\vDash\psi\) then \(\varrho\nvDash\gamma(\sigma)\).
#### 3.1.1 The Correctness of GCG
Lemma 1: _If \(\gamma\) is proper then \(\textsc{gcg}[\Phi,\psi,\gamma]\) terminates._
Proof: If \(\gamma\) is proper then \(\gamma(\sigma)\) is guaranteed to include at least \(\sigma\). \(\Phi\) is a propositional logic formula, therefore it only has a finite set of satisfying assignments. Every iteration of the loop removes at least one satisfying assignment from \(\Phi\), therefore the algorithm terminates.
During a run, Algorithm 1 returns a (possibly empty) set of assignments \(\mathsf{Sol}=\{\sigma_{1},\sigma_{2},...,\sigma_{n}\}\), representing the solution to Problems 3 or 4. Also during a run, the algorithm guesses candidate assignments by calling the subroutine sat (line 2). Let \(\mathsf{Cand}\) be the set of all these candidates. Note that \(\mathsf{Sol}\subseteq\mathsf{Cand}\), since every solution returned (line 4) has been first guessed in line 2.
Whenever the algorithm reassigns \(\Phi:=\Phi\land\neg\varphi\), we say that it _prunes_\(\varphi\), i.e., the satisfying assignments of \(\varphi\) are now excluded from the search. We will need to reason about the set of assignments that have been pruned after a certain _partial run_ of the program. In such cases we can imagine running the algorithm for some amount of time and pausing it. Then the set \(\mathsf{Pruned}\) denotes the set of assignments that have been pruned up until that point. It is true that after the program terminates \(\mathsf{Pruned}=[\![\Phi]\!]\setminus\mathsf{Cand}\), but this equality does not necessarily hold for all partial runs.
Theorem 3.1: _(1) \(\textsc{gcg}[\Phi,\psi,\gamma]\) is sound, i.e., for all \(\sigma\in\mathsf{Sol}\), we have \(\sigma\vDash\Phi\) and \(M_{\sigma}\vDash\psi\). (2) If \(\gamma\) is proper then \(\textsc{gcg}[\Phi,\psi,\gamma]\) is complete, i.e., for all \(\sigma\vDash\Phi\), if \(M_{\sigma}\vDash\psi\) then \(\sigma\in\mathsf{Sol}\)._
Proof: Every \(\sigma\in\mathsf{Sol}\) satisfies \(\Phi\) (line 2) and the corresponding \(M_{\sigma}\) satisfies \(\psi\) (line 3), therefore \(\textsc{gcg}[\Phi,\psi,\gamma]\) is sound. Now, suppose that \(\gamma\) is proper, and take \(\varrho\) such that \(\varrho\vDash\Phi\) and \(M_{\varrho}\vDash\psi\). To show completeness, it suffices to show that \(\varrho\in\mathsf{Cand}\). Then, we also have \(\varrho\in\mathsf{Sol}\) because \(M_{\varrho}\) passes the model-checking test in line 3. Suppose, for a contradiction, that \(\varrho\not\in\mathsf{Cand}\), i.e., that \(\varrho\) is pruned. Then there must exist some \(\sigma\) such that \(\varrho\vDash\gamma(\sigma)\) (line 7). But \(\sigma\vDash\Phi\) (line 2), which means that \(\varrho\) violates the _correct-exclusion_ property of \(\gamma\). Contradiction.
### A Concrete Instance of GCG for LTS
Algorithm 1 is _generic_ in the sense that depending on how exactly we instantiate \(\Phi\), \(\psi\), and \(\gamma\), we can encode different completion enumeration (and more generally model enumeration) problems, as well as solutions. We now show how to instantiate Algorithm 1 to solve Problems 3 and 4 concretely for LTS.
#### 3.2.1 Encoding LTSs and Completions in Propositional Logic
Let \(M_{0}=\langle\Sigma,Q,Q_{0},\Delta_{0}\rangle\) be an incomplete LTS. Then we can define a set of boolean variables
\[V:=\{p\looparrow^{a}q\mid p,q\in Q\wedge a\in\Sigma\}\]
so that boolean variable \(p\looparrow^{a}q\) encodes whether transition \(p\stackrel{{ a}}{{\rightarrow}}q\) is present or not (if \(p\stackrel{{ a}}{{\rightarrow}}q\) is present, then \(p\looparrow^{a}q\) is true, otherwise it is false). More formally, let \(\textsc{asgn}_{V}\) be the set of all assignments over \(V\). An assignment \(\sigma\in\textsc{asgn}_{V}\) represents LTS \(M_{\sigma}\) with transition relation \(\Delta_{\sigma}=\{(p,a,q)\mid\sigma(p\looparrow^{a}q)=1\}\). To enforce \(M_{\sigma}\) to be a completion of \(M_{0}\), we need to enforce that \(\Delta_{0}\subseteq\Delta_{\sigma}\). We do so by initializing our syntactic constraints \(\Phi\) as \(\Phi:=\Phi_{\Delta_{0}}\), where
\[\Phi_{\Delta_{0}}:=\bigwedge_{p\stackrel{{ a}}{{\rightarrow}}q \in\Delta_{0}}p\looparrow^{a}q.\]
We can then add extra constraints to \(\Phi\) such as determinism or absence of deadlocks, as appropriate.
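As an illustration, such an encoding can be set up with the Z3 Python API as follows; the variable naming scheme is illustrative, and a real instantiation would include further application-specific constraints.

```
from itertools import product
from z3 import And, Bool, Not

states = ["p0", "p1", "p2", "p3"]
labels = ["a"]

# One boolean variable per potential transition p --a--> q:
tvar = {(p, a, q): Bool(f"{p}-{a}->{q}")
        for p, a, q in product(states, labels, states)}

# Phi_{Delta_0}: force the transitions of the incomplete LTS M0 to be present.
delta0 = {("p0", "a", "p1")}
phi = And([tvar[t] for t in delta0])

# Example of an extra syntactic constraint: determinism,
# i.e., at most one successor per source state and label.
determinism = And([Not(And(tvar[(p, a, q1)], tvar[(p, a, q2)]))
                   for p, a in product(states, labels)
                   for q1 in states for q2 in states if q1 < q2])
phi = And(phi, determinism)
```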
#### 3.2.2 A Concrete Generalizer for LTS
Based on the principles of [4], we can construct a _concrete generalizer_\(\gamma_{\textit{LTS}}(\sigma)\) for LTS as \(\gamma_{\textit{LTS}}(\sigma):=\gamma_{\textit{safe}}(\sigma)\vee\gamma_{ \textit{live}}(\sigma)\), which we separate into a disjunction of a safety violation generalizer and a liveness violation generalizer. The safety component \(\gamma_{\textit{safe}}\) works on the principle that if LTS \(M_{\sigma}\) violates a safety property, then adding extra transitions will not solve this violation. Thus:
\[\gamma_{\textit{safe}}(\sigma):=\bigwedge_{\{x\in V|\sigma(x)=1\}}x.\]
The liveness component \(\gamma_{\textit{live}}\) can be defined based on a notion of reachable, "bad" cycles that enable something to happen infinitely often. Thus, \(\neg\gamma_{\textit{live}}\) captures all LTSs that disable these bad cycles by breaking them or making them unreachable.
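As an illustration, the safety component can be computed directly from a candidate assignment; in the following sketch (with illustrative names), `sigma` is a list of (variable, value) pairs as produced by a SAT model.

```
from z3 import And, BoolVal

def gamma_safe(sigma):
    """Conjunction of the variables assigned true in sigma: every completion
    containing all transitions of M_sigma satisfies it, reflecting that
    adding transitions cannot repair a safety violation."""
    true_vars = [v for v, b in sigma if b]
    return And(true_vars) if true_vars else BoolVal(True)
```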
It can be shown that the concrete generalizer \(\gamma_{\textit{LTS}}\) is proper. Therefore, the concrete instance \(\textsc{gcg}[\Phi,\psi,\gamma_{\textit{LTS}}]\) is sound, terminating, and complete, i.e., it solves Problems 3 and 4.
Even though the concrete generalizer is correct, it is not very effective. In particular, it does not immediately prune isomorphisms. There may be \(O(n!)\) trivially equivalent completions up to state reordering, where \(n\) is the number of states in the LTS. In the next section we present two optimizations exploiting isomorphisms.
## 4 Synthesis Modulo Isomorphisms
### LTS Isomorphisms
Intuitively, two LTS are isomorphic if we can rearrange the states of one to obtain the other. For synthesis purposes, we often wish to provide as a constraint a set
of _permutable states_ \(A\), so as to exclude rearrangements that move states outside of \(A\). If we can still rearrange the states of an LTS \(M_{1}\) to obtain another LTS \(M_{2}\) subject to this constraint, then we say that \(M_{1}\) _and \(M_{2}\) are isomorphic up to \(A\)_. For example, the two LTSs of Fig. 3 are isomorphic up to the set of permutable states \(A=\{s_{3},s_{7}\}\). Strictly speaking, they are isomorphic up to any set of states containing \(s_{3}\) and \(s_{7}\), but we choose this \(A\) to reflect the fact that those two states have no incoming or outgoing transitions in Fig. 2. Permuting any other states would yield an LTS that is not a completion of Fig. 2.
We now define isomorphisms formally. Let \(M_{0}\), \(M_{1}\), and \(M_{2}\) be LTSs with the same \(\Sigma,Q,Q_{0}\), and with transition relations \(\Delta_{0}\), \(\Delta_{1}\), and \(\Delta_{2}\), respectively. Suppose that \(M_{1}\) and \(M_{2}\) are both completions of \(M_{0}\). Let \(A\subseteq Q\setminus Q_{0}\). Then we say \(M_{1}\) and \(M_{2}\) are isomorphic up to \(A\), denoted \(M_{1}\stackrel{{ A}}{{\simeq}}M_{2}\), if and only if there exists a bijection \(f:A\to A\) (i.e., a _permutation_), extended to all of \(Q\) by the identity on \(Q\setminus A\), such that
\[p\stackrel{{ a}}{{\to}}q\in\Delta_{1}\text{ if and only if }f(p)\stackrel{{ a}}{{\to}}f(q)\in\Delta_{2}.\]
By default, we will assume that \(A\) is the set of non-initial states that have no incoming or outgoing transitions in \(M_{0}\). In that case we will omit \(A\) and write \(M_{1}\simeq M_{2}\).
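As an illustration, the definition can be checked by brute force; the following sketch is exponential in \(|A|\) and intended only for small examples.

```
from itertools import permutations

def isomorphic_up_to(delta1, delta2, A):
    """Is there a permutation f of A (identity outside A) mapping the
    transition set delta1 onto delta2? Triples are (p, a, q)."""
    A = list(A)
    delta2 = set(delta2)
    for perm in permutations(A):
        f = dict(zip(A, perm))
        if {(f.get(p, p), a, f.get(q, q)) for (p, a, q) in delta1} == delta2:
            return True
    return False

# Toy version of Fig. 3: the roles of s3 and s7 are swapped.
d1 = {("s0", "x", "s3"), ("s3", "y", "s1")}
d2 = {("s0", "x", "s7"), ("s7", "y", "s1")}
print(isomorphic_up_to(d1, d2, {"s3", "s7"}))  # True
```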
Lemma 2: _LTS isomorphism is an equivalence relation, i.e., it is reflexive, symmetric, and transitive._
We use \([M]\) to denote the _equivalence class_ of \(M\), i.e., \([M]=\{M^{\prime}\mid M^{\prime}\simeq M\}\).
Lemma 3: _If \(M_{1}\stackrel{{ A}}{{\simeq}}M_{2}\) then \(\llbracket M_{1}\rrbracket=\llbracket M_{2}\rrbracket\)._
Lemma 3 states that LTS isomorphism preserves traces. More generally, we will assume that our notion of specification is preserved by LTS isomorphism, namely, that if \(M_{1}\stackrel{{ A}}{{\simeq}}M_{2}\) then for any specification \(\psi\), \(M_{1}\vDash\psi\) iff \(M_{2}\vDash\psi\).
#### 4.1.1 Isomorphic Assignments
Two assignments \(\sigma\) and \(\varrho\) are isomorphic if the LTSs that they represent are isomorphic. Hence we write \(\sigma\simeq\varrho\) if and only if \(M_{\sigma}\simeq M_{\varrho}\). We write \([\varrho]\) to denote the equivalence class of \(\varrho\), i.e., the set of all assignments that are isomorphic to \(\varrho\). These equivalence classes partition \(\llbracket\Phi\rrbracket\) since \(\simeq\) is an equivalence relation.
### Completion Enumeration Modulo Isomorphisms
Isomorphisms allow us to focus our attention to Problem 5 instead of Problem 4:
Problem 5 (Completion enumeration modulo isomorphisms): Given LTS \(M_{0}\), specification \(\psi\), and constraints \(\Phi\), find the set
\[\{[M]\mid M\text{ is a completion of }M_{0}\text{ such that }M\vDash\psi\text{ and }M\vDash\Phi\}.\]
Problem 5 asks that only significantly different (i.e., non-isomorphic) completions are returned to the user. Problem 5 can be solved by a simple modification to Algorithm 1, namely, to exclude the entire equivalence class \([\sigma]\) of any discovered solution \(\sigma\), as shown in Algorithm 2, line 5.
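As an illustration, line 5 of Algorithm 2 can be implemented by adding one blocking clause per permutation of the discovered solution (reusing the transition-variable encoding of Section 3.2; names are illustrative, and in the worst case \(|A|!\) clauses are added):

```
from itertools import permutations
from z3 import And, Not

def block_equivalence_class(solver, solution_delta, tvar, A):
    """Exclude every assignment isomorphic (up to A) to the solution whose
    set of enabled transitions is `solution_delta`."""
    A = list(A)
    for perm in permutations(A):
        f = dict(zip(A, perm))
        permuted = {(f.get(p, p), a, f.get(q, q))
                    for (p, a, q) in solution_delta}
        exact = And([tvar[t] if t in permuted else Not(tvar[t])
                     for t in tvar])   # formula satisfied only by this permuted copy
        solver.add(Not(exact))
```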
### Properties of an Efficient GCG Algorithm
We begin by presenting a list of properties that an efficient instance of GCG ought to satisfy. Except for Property 1, satisfaction of these properties generally depends on the generalizer used.
Property 1: For all \(\sigma\) that satisfy \(\Phi\), \([\sigma]\cap\mathsf{Sol}\) has \(0\) or \(1\) element(s). In other words, we return at most one solution per equivalence class.
Property 1 asks that only significantly different (i.e., non-isomorphic) completions are returned to the user, thereby solving Problem 5, which is our main goal. In addition, this property implies that the number of completions is kept small, which is important when these are fed as inputs to some other routine (e.g., one that selects a "highly fit" completion among all valid completions).
\(\textsc{gcg}_{\simeq}\) satisfies Property 1, regardless of the parameters. However, we can go further, by ensuring that not only we do not return isomorphic completions, but we do not even consider isomorphic candidate completions in the first place:
Property 2: For all \(\sigma\) that satisfy \(\Phi\), \([\sigma]\cap\mathsf{Cand}\) has \(0\) or \(1\) element(s). In other words, we consider at most one candidate per equivalence class.
Maintaining Property 2 now guarantees that we only call the most expensive subroutines at most once for each equivalence class. Note that, since \(\mathsf{Sol}\subseteq\mathsf{Cand}\), Property 2 implies Property 1.
Property 2 is still not entirely satisfactory. For instance, suppose the algorithm generates \(\sigma\) as a candidate and then prunes \(\gamma(\sigma)\). Now suppose that \(\varrho\simeq\sigma\). Property 2 implies that we _cannot_ call/prune \(\gamma(\varrho)\). Property 3 rectifies this:
Property 3 (invariant): Suppose that \(\textsc{gcg}_{\simeq}\) invokes \(\Phi:=\Phi\land\neg\gamma(\sigma)\). Then for any \(\varrho\simeq\sigma\), we should have \([\![\gamma(\varrho)]\!]\subseteq\mathsf{Pruned}\). In other words, if we prune \(\gamma(\sigma)\), we should also prune \(\gamma(\varrho)\) for every \(\varrho\) isomorphic to \(\sigma\).
We note that, contrary to Properties 1 and 2 which need only hold after termination, Property 3 is an _invariant_: we want it to hold for all _partial executions_ of the algorithm.
Theorem 4.1: _Suppose \(\gamma\) is proper. If \(\textsc{gcg}_{\simeq}[\![\Phi,\psi,\gamma]\!]\) maintains Property 3 as an invariant, then \(\textsc{gcg}_{\simeq}[\![\Phi,\psi,\gamma]\!]\) also maintains Property 2._
Maintaining Property 3 increases the rate at which the search space is pruned, but is still not enough. Suppose that \(\tau\models\gamma(\sigma)\) and that \(\tau^{\prime}\simeq\tau\). If we prune the members of \(\gamma(\sigma)\), then we will prune \(\tau\), but not necessarily \(\tau^{\prime}\). This possibility is unsatisfactory, since \(\tau\) and \(\tau^{\prime}\) should both be treated whenever one of them is.
Property 4 (invariant): Suppose \(\tau\in\mathsf{Pruned}\) and \(\tau^{\prime}\simeq\tau\). Then \(\tau^{\prime}\in\mathsf{Pruned}\) or \(\tau^{\prime}\in\mathsf{Sol}\). In other words, if we prune \(\tau\) we should also prune any isomorphic \(\tau^{\prime}\), unless \(\tau^{\prime}\) happens to be a solution. (Note that Property 1 guarantees that this exception applies to at most one \(\tau^{\prime}\)).
Maintaining Property 4 as an invariant further accelerates pruning. Under certain conditions, Property 3 implies Property 4. In particular, Property 3 implies Property 4 if \(\gamma\) is _invertible_, a concept that we define next.
**Invertible Generalizers** Let \(\gamma\) be a generalizer and let \(\tau\) be an assignment. We define the _inverse_\(\gamma^{-1}(\tau)\), to be the propositional logic formula satisfied by all \(\sigma\) such that \(\tau\models\gamma(\sigma)\). That is, \(\sigma\models\gamma^{-1}(\tau)\) iff \(\tau\models\gamma(\sigma)\).
Let \(\varphi\) and \(\varphi^{\prime}\) be propositional logic formulas. Suppose that for every \(\sigma\models\varphi\), there exists a \(\sigma^{\prime}\models\varphi^{\prime}\) such that \(\sigma^{\prime}\simeq\sigma\). Then we say that \(\varphi\)_subsumes \(\varphi^{\prime}\) up to isomorphism_. If \(\varphi\) and \(\varphi^{\prime}\) both subsume each other, then we say that they are _equivalent up to isomorphism_.
A generalizer \(\gamma\) is _invertible_ if for all assignments \(\tau,\tau^{\prime}\) that satisfy \(\Phi\), if \(\tau\simeq\tau^{\prime}\) then \(\gamma^{-1}(\tau)\) and \(\gamma^{-1}(\tau^{\prime})\) are equivalent up to isomorphism. Now if \(\tau\models\gamma(\sigma)\) and \(\tau^{\prime}\simeq\tau\), invertibility guarantees that we can point to a \(\sigma^{\prime}\simeq\sigma\) such that \(\tau^{\prime}\models\gamma(\sigma^{\prime})\).
Theorem 4.2: _Suppose \(\gamma\) is proper and invertible. If \(\textsc{gcg}_{\simeq}[\Phi,\psi,\gamma]\) maintains Property 3 as an invariant, then \(\textsc{gcg}_{\simeq}[\Phi,\psi,\gamma]\) also maintains Property 4 as an invariant._
Proof: Let \(\gamma\) be a proper, invertible generalizer. We will proceed by contradiction. Assume that we have run the algorithm for some amount of time and paused its execution, freezing the state of \(\mathsf{Pruned}\). Suppose that \(\textsc{gcg}_{\simeq}[\Phi,\psi,\gamma]\) satisfies Property 3 at this point, but that it does not satisfy Property 4. From the negation of Property 4, we have at this point in the execution two assignments \(\tau\) and \(\tau^{\prime}\) such that (1) \(\tau\simeq\tau^{\prime}\), (2) \(\tau\in\mathsf{Pruned}\), (3) \(\tau^{\prime}\notin\mathsf{Pruned}\), and (4) \(\tau^{\prime}\notin\mathsf{Sol}\).
There are two cases that fall out of (2). Either \(\tau\) was pruned using a call to \(\gamma\), or exactly \([\tau]\) was pruned. In the second case, we quickly reach a contradiction since it implies that \(\tau^{\prime}\in\mathsf{Pruned}\), violating assumption (3).
So instead, we assume \(\tau\models\gamma(\sigma)\) for some \(\sigma\) and that this call to \(\gamma\) was invoked at some point in the past. So \(\sigma\models\gamma^{-1}(\tau)\). But then by invertibility and (1) there exists \(\sigma^{\prime}\simeq\sigma\) such that \(\sigma^{\prime}\models\gamma^{-1}(\tau^{\prime})\) and hence \(\tau^{\prime}\models\gamma(\sigma^{\prime})\). Property 3 tells us then that \(\tau^{\prime}\in\mathsf{Pruned}\), but this conclusion also violates assumption (3).
It can be shown that the generalizer \(\gamma_{\mathit{LTS}}\) is invertible. Essentially, this is because \(\gamma_{\mathit{LTS}}\) does not depend on state names (for example, the structure of cycles and paths is independent of state names). Still, \(\textsc{gcg}_{\simeq}[\Phi,\psi,\gamma_{\mathit{LTS}}]\) satisfies only Property 1 above. Therefore, we will next describe an optimized generalization method that exploits isomorphism to satisfy all properties.
### Optimized Generalization
#### 4.4.1 Equivalence Closure
If \(\gamma\) is a generalizer and \(\simeq\) is an equivalence relation, then let
\[\widetilde{\gamma}(\varrho):=\bigvee_{\sigma\in[\varrho]}\gamma(\sigma)\]
be the _equivalence closure_ of \(\gamma\). If \(\gamma(\sigma)\equiv\widetilde{\gamma}(\sigma)\) for all \(\sigma\), we say that \(\gamma\) is _closed under equivalence_.
Note that \(\widetilde{\gamma}\) is itself a generalizer. An instance of \(\textsc{gcg}_{\simeq}\) that uses \(\widetilde{\gamma}\) is correct and satisfies all the efficiency properties identified above:
Theorem 4.3: _If \(\gamma\) is a proper generalizer, then \(\textsc{gcg}_{\simeq}[\Phi,\psi,\widetilde{\gamma}]\) is sound, terminating, and complete up to isomorphisms._
Theorem 4.4: _If \(\gamma\) is proper, then \(\textsc{gcg}_{\simeq}[\Phi,\psi,\widetilde{\gamma}]\) maintains Properties 1 and 2. Furthermore, the algorithm maintains Property 3 as an invariant._
Theorem 4.5: _If \(\gamma\) is both proper and invertible, then: (1) \(\widetilde{\gamma}\) is invertible; (2) \(\textsc{gcg}_{\simeq}[\Phi,\psi,\widetilde{\gamma}]\) maintains Property 4 as an invariant._
#### 4.4.2 Computation Options for \(\widetilde{\gamma}\)
The naive way to compute \(\widetilde{\gamma}\) is to iterate over all \(\sigma_{1},\sigma_{2},\cdots,\sigma_{k}\in[\varrho]\), compute each \(\gamma(\sigma_{i})\), and then return the disjunction of all \(\gamma(\sigma_{i})\). We call this the _naive generalization_ approach. The problem with this approach is that we have to call \(\gamma\) as many as \(n!\) times, where \(n\) is the number of permutable states. The experimental results in Section 5 indicate empirically that this naive method does not scale well.
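In code, the naive approach looks roughly as follows (an illustrative sketch; note the full recomputation of \(\gamma\) inside the loop, up to \(|A|!\) times):

```
from itertools import permutations
from z3 import Or

def gamma_closure_naive(rho_delta, gamma, A):
    """Equivalence closure of gamma at rho: recompute gamma from scratch on
    every permuted copy of rho's transition set and disjoin the results.
    `gamma` maps a set of (p, a, q) triples to a Z3 formula."""
    A = list(A)
    disjuncts = []
    for perm in permutations(A):
        f = dict(zip(A, perm))
        sigma_delta = {(f.get(p, p), a, f.get(q, q)) for (p, a, q) in rho_delta}
        disjuncts.append(gamma(sigma_delta))   # expensive recomputation
    return Or(disjuncts)
```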
We thus propose a better approach, which is _incremental_, in the sense that we only have to compute \(\gamma\) once, for \(\gamma(\sigma_{1})\); we can then perform simple _syntactic transformations_ on \(\gamma(\sigma_{1})\) to obtain \(\gamma(\sigma_{2})\), \(\gamma(\sigma_{3})\), and so on. As we will show, these transformations are much more efficient than computing each \(\gamma(\sigma_{i})\) from scratch. So-called _permuters_ formalize this idea:
#### 4.4.3 Permuters
A _permuter_ is a function \(\pi\) that takes as input an assignment \(\varrho\) and the generalization \(\gamma(\sigma)\) for some \(\sigma\simeq\varrho\), and returns a propositional logic formula \(\pi(\varrho,\gamma(\sigma))\) such that \(\forall\varrho\vDash\Phi,\forall\sigma\simeq\varrho::M_{\varrho}\nvDash\psi\rightarrow\pi(\varrho,\gamma(\sigma))\equiv\gamma(\varrho)\). That is, assuming \(\varrho\) is "bad" (\(M_{\varrho}\nvDash\psi\)), \(\pi(\varrho,\gamma(\sigma))\) is equivalent to \(\gamma(\varrho)\). However, contrary to \(\gamma\), \(\pi\) can use the extra information \(\gamma(\sigma)\) to compute the generalization of \(\varrho\). Then, instead of \(\widetilde{\gamma}(\varrho)\), we can compute the logically equivalent formula
\[\gamma_{\pi}(\varrho):=\bigvee_{\sigma\in[\varrho]}\pi(\sigma,\gamma(\varrho)).\]
Theorem 4.6: _Theorems 4.3, 4.4, and 4.5 also hold for \(\textsc{gcg}_{\simeq}[\Phi,\psi,\gamma_{\pi}]\)._
Proof: Follows from the fact that for any \(\varrho\), \(\gamma_{\pi}(\varrho)\equiv\widetilde{\gamma}(\varrho)\).
#### 4.4.4 A Concrete Permuter for LTS
We now explain how to compute \(\pi\) concretely in our application domain, namely LTS. Let \(M_{0}\) be an incomplete LTS. Let \(\sigma_{1},\sigma_{2}\) be two assignments encoding completions \(M_{\sigma_{1}}\) and \(M_{\sigma_{2}}\) of \(M_{0}\). Suppose \(M_{\sigma_{1}}\stackrel{{ A}}{{\simeq}}M_{\sigma_{2}}\). Recall that \(A\) is the set of permutable states (the non-initial states with no incoming/outgoing transitions by default). Then there is a permutation \(f:A\to A\), such that applying \(f\) to the states of \(M_{\sigma_{1}}\) yields \(M_{\sigma_{2}}\). \(f\) allows us to transform one LTS to another, but it also allows us to transform the generalization formula for \(\sigma_{1}\), namely \(\gamma(\sigma_{1})\), to the one for \(\sigma_{2}\), namely \(\gamma(\sigma_{2})\).
For example, let \(M_{0}\) be the leftmost LTS in Fig. 4, with alphabet \(\Sigma=\{a\}\), states \(Q=\{p_{0},p_{1},p_{2},p_{3}\}\), initial state \(p_{0}\), and the empty transition relation. Let \(M_{\sigma_{1}}\) and \(M_{\sigma_{2}}\) be the remaining LTSs shown in Fig. 4. Let \(A=\{p_{1},p_{2},p_{3}\}\) and let \(f\) be the permutation mapping \(p_{1}\) to \(p_{3}\), \(p_{3}\) to \(p_{2}\), and \(p_{2}\) to \(p_{1}\). Then \(M_{\sigma_{1}}\stackrel{{ A}}{{\simeq}}M_{\sigma_{2}}\) and \(f\) is the witness to this isomorphism.
Let \(\gamma(\sigma_{1})=(p_{0}\looparrow^{a}p_{1})\land(p_{1}\looparrow^{a}p_{2}) \land(p_{2}\looparrow^{a}p_{3})\). \(\gamma(\sigma_{1})\) captures the four LTSs in Fig. 5. The key idea is that we can compute \(\gamma(\sigma_{2})\) by transforming \(\gamma(\sigma_{1})\)_purely syntactically_. In particular, we apply the permutation \(f\) to all \(p_{i}\) appearing in the variables of the formula. Doing so, we obtain \(\gamma(\sigma_{2})=(p_{0}\looparrow^{a}p_{3})\land(p_{3}\looparrow^{a}p_{1}) \land(p_{1}\looparrow^{a}p_{2})\). This formula in turn captures the four LTSs in Fig. 6, which are exactly the permutations of those in Fig. 5 after applying \(f\).
We now describe this transformation formally. Observe that \(M_{\sigma_{1}}\) and \(M_{\sigma_{2}}\) have the same set of states, say \(Q\). We extend the permutation to \(f:Q\to Q\) by defining \(f(q)=q\) for all states \(q\notin A\). Now, we extend this permutation of states to permutations of the set \(V\) (the set of boolean variables encoding transitions). Specifically we extend \(f\) to permute \(V\) by defining: \(f(p\looparrow^{a}q):=f(p)\looparrow^{a}f(q)\) and we extend it to propositional formulas by applying it to all variables in the formula. Then we define \(\pi_{\mathit{LTS}}(\sigma_{2},\gamma(\sigma_{1})):=f(\gamma(\sigma_{1}))\).
In essence, the permuter \(\pi_{\mathit{LTS}}\) identifies the permutation \(f\) witnessing the fact that \(\sigma_{1}\simeq\sigma_{2}\). It then applies \(f\) to the variables of \(\gamma(\sigma_{1})\). Applying \(f\) to \(\gamma(\sigma_{1})\) is equivalent to applying \(f\) to all assignments that satisfy \(\gamma(\sigma_{1})\).
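With the transition-variable encoding of Section 3.2, this syntactic transformation amounts to a single variable substitution on the formula; the following illustrative sketch uses `z3.substitute` to perform the renaming on the running example:

```
from itertools import product
from z3 import And, Bool, substitute

states, labels = ["p0", "p1", "p2", "p3"], ["a"]
tvar = {(p, a, q): Bool(f"{p}-{a}->{q}")
        for p, a, q in product(states, labels, states)}

def permute_formula(formula, f):
    """Rename every transition variable p -a-> q to f(p) -a-> f(q)."""
    pairs = [(tvar[(p, a, q)], tvar[(f.get(p, p), a, f.get(q, q))])
             for (p, a, q) in tvar]
    return substitute(formula, *pairs)

gamma1 = And(tvar[("p0", "a", "p1")],
             tvar[("p1", "a", "p2")],
             tvar[("p2", "a", "p3")])
f = {"p1": "p3", "p3": "p2", "p2": "p1"}   # the permutation of the example
gamma2 = permute_formula(gamma1, f)
# gamma2 is (p0 -a-> p3) & (p3 -a-> p1) & (p1 -a-> p2), as computed in the text
```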
Figure 4: An incomplete LTS \(M_{0}\) and two possible completions, \(M_{\sigma_{1}}\) and \(M_{\sigma_{2}}\)
It can be shown that \(\pi_{\mathit{LTS}}\) is a permuter for LTS. It follows then that the concrete instance \(\textsc{gcg}_{\simeq}[\Phi,\psi,\gamma_{\pi}]\) (where \(\gamma:=\gamma_{\mathit{LTS}}\) and \(\pi:=\pi_{\mathit{LTS}}\)) satisfies Theorem 4.1, i.e., it is sound, terminating, complete up to isomorphisms, and satisfies all Properties 1-4.
## 5 Implementation and Evaluation
**Implementation and Experimental Setup.** We evaluate the three algorithms discussed so far: the _unoptimized_ algorithm \(\textsc{gcg}[\Phi,\psi,\gamma_{\mathit{LTS}}]\) of [2, 4] (Section 3.2); and the _naive optimization_ \(\textsc{gcg}_{\simeq}[\Phi,\psi,\widetilde{\gamma}]\) and _permuter optimization_ \(\textsc{gcg}_{\simeq}[\Phi,\psi,\gamma_{\pi}]\) algorithms of Section 4.4. These are respectively labeled 'unopt.', 'naive opt.', and 'perm. opt.' in the tables that follow.
In addition, we evaluate the unoptimized algorithm outfitted with an additional optimization, which we call the dead transition optimization. We say that a transition of an LTS is _dead_ if this transition is never taken in any run. If \(M\) with states \(Q\) is correct and has \(k\) dead transitions, then there are \(|Q|^{k}\) solutions that are equivalent modulo dead transitions, since we can point a dead transition anywhere while maintaining correctness. The dead transition optimization prunes all solutions which are equivalent modulo dead transitions. It is equivalent to the unoptimized algorithm in cases where there are no solutions or where we are looking for only one solution. Therefore, we evaluate the dead transition optimization side-by-side with the unoptimized solution only when we are enumerating all correct completions. The naive and permuter optimizations both include the dead transition optimization.
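To illustrate the notion, for a single LTS considered in isolation a transition can be taken in some run exactly when its source state is reachable from the initial state, so dead transitions are those leaving unreachable states (in a composed system, synchronization constraints can kill additional transitions). A sketch of this check, with illustrative data structures, is below; the actual optimization is realized inside the SAT encoding rather than by post-processing candidates.

```python
# Sketch: find dead transitions of a standalone LTS by reachability.
from collections import deque

def dead_transitions(initial, transitions):
    """transitions: iterable of (p, a, q) triples."""
    succ = {}
    for p, _, q in transitions:
        succ.setdefault(p, []).append(q)
    reachable, frontier = {initial}, deque([initial])
    while frontier:
        p = frontier.popleft()
        for q in succ.get(p, []):
            if q not in reachable:
                reachable.add(q)
                frontier.append(q)
    # A transition leaving an unreachable state is never taken in any run.
    return [t for t in transitions if t[0] not in reachable]
```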
We use [27], the Python implementation of \(\textsc{gcg}[\Phi,\psi,\gamma_{\mathit{LTS}}]\) made publicly available by the authors of [2, 4], and we implement our optimizations on top of [27] in order to keep the comparison fair. The tool can handle completion of distributed systems, rather than of single LTSs. Distributed systems are represented as networks of communicating LTSs similar to those in [4]. Specifications are represented using safety and liveness (Buchi) monitors, again similar to those in [4]. However, let us again mention that our approach is not specific to any particular specification logic; it should allow for performance gains whenever the cost of model-checking is greater than the cost of the simple syntactic transformations applied by the permuter. We use the SMT solver Z3 [7] to pick candidates from the search space. Our experimental results can be reproduced using a publicly available artifact [9].
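The enumeration loop itself can be pictured as the following schematic use of the Z3 Python API; `phi`, the model checker `check`, and the generalization routine `generalize` are placeholders for the tool's actual components, and the real algorithm applies different generalizations to correct and incorrect candidates.

```python
# Schematic guess-check-generalize enumeration loop over Z3.
from z3 import Solver, Not, sat

def enumerate_completions(phi, variables, check, generalize):
    solver, solutions = Solver(), []
    solver.add(phi)
    while solver.check() == sat:
        model = solver.model()
        candidate = {v: model.evaluate(v, model_completion=True)
                     for v in variables}
        if check(candidate):              # model-check the completion
            solutions.append(candidate)
        gamma = generalize(candidate)     # covers a whole set of candidates
        solver.add(Not(gamma))            # block the entire generalization
    return solutions
```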
For our experiments we use the ABP case study as presented in [4], as well as our own two-phase commit (2PC) case study. We consider three use cases: (1) _completion enumeration_: enumerate all correct completions; (2) _realizable 1-completion_: return the first correct completion and stop, where we ensure that a correct completion exists; and (3) _unrealizable 1-completion_: return the first correct completion, except that we ensure that none exists (and therefore the tool has to explore the entire candidate space in vain).
We consider a _many-process synthesis_ scenario, where the goal is to synthesize two or more processes, and a _1-process synthesis_ scenario, where the goal is to synthesize a single process. In both of these scenarios across both the ABP and 2PC case studies, the synthesized processes are composed with additional environment processes and safety and liveness monitors. The results of the many-process synthesis scenario are presented shortly. Due to lack of space, the results of the 1-process synthesis scenario are presented in Appendix 0.A.2. The latter results do not add much additional insight, except that 1-process synthesis tends to take less time.
Each experiment was run on a dedicated 2.40GHz CPU core located on the Northeastern Discovery Cluster. All times are in seconds, rounded up to the second.
### Many-Process Synthesis Experiments
In all these experiments, there are multiple LTSs that must all be completed. In the case of ABP: (1) the incomplete ABP _Receiver_ of Fig. 1, without further modification; (2) an incomplete sender process, which is obtained by removing some set of transitions from process _Sender_ of Fig. 3. The set of transitions removed from _Sender_ are all incoming and all outgoing transitions from all states designated as permutable for that experiment (column \(A\) in the tables that follow). For instance, in experiment \(\{s_{1},s_{2}\}\) of Table 1 we remove all incoming and outgoing transitions from states \(s_{1}\) and \(s_{2}\) of _Sender_, and similarly for the other experiments. In the case of 2PC (see Appendix 0.A.1 for figures): (1) the two incomplete 2PC database managers of Fig. 8, without further modification; (2) an incomplete transaction manager, which is obtained by removing some set of transitions from process _tx. man._ of Fig. 7.
**Completion Enumeration.** Table 1 presents the results for the completion enumeration use case and many-process synthesis scenario. Columns labeled _sol._ and _iter._ record the number of solutions (\(|\mathsf{Sol}|\)) and the number of loop iterations of Algorithm 2 (the number of candidates \(|\mathsf{Cand}|\), i.e., the number of times the SAT routine is called), respectively. Pilot experiments showed negligible variance across random seeds, so reported times are for one seed. TO denotes a timeout of 4 hours, in which case \(p/q\) means the tool produced \(p\) out of the total \(q\) solutions. For the dead opt. column, we know that \(q=24\cdot n\), where \(n\) is the number of solutions/equivalence classes found by the permuter optimization and \(24=4!\) is the number of isomorphisms for 4 states. Since the naive optimization produces equivalence classes, \(q=n\) for the naive opt. column.
The results in Table 1 are consistent with our theoretical analyses. When there are 2 permutable states, the naive and permuter optimizations explore about half as many candidates as the dead transitions method. For 3 permutable states, the optimized methods explore about \(3!=6\) times fewer candidates. For 4 permutable states, the optimized methods explore about \(4!=24\) times fewer candidates than the dead transitions method in the only experiment where the unoptimized method does not time out. Notably, the permuter optimization does not time out on any of these experiments.
**Realizable 1-Completion.** Table 2 presents the results for the realizable 1-completion use case (return the first solution found and stop) and many-process synthesis scenario. Our experiments and those of [4] suggest that there is more time variability for this task depending on the random seed provided to Z3. Thus, for Table 2 we run the tools for 10 different random seeds and report average times and numbers of iterations, rounded up. In one case (last row of Table 2), for a single seed out of the 10 seeds, the program timed out before finding a solution. As the true average is unknown in this case, we report it as TO.
**Unrealizable 1-Completion.** Table 3 presents the results for the unrealizable 1-completion use case and many-process synthesis scenario. For these experiments, we artificially modify the ABP _Sender_ by completely removing state \(s_{7}\), which results in no correct completion existing. A similar change is applied to _tx. man._ in the case of 2PC. Thus, the tools explore the entire search space and terminate without finding a solution. As can be seen, the permuter optimization significantly prunes the search space and achieves considerable speedups.
**Table 1: Many-Process Synthesis, Completion Enumeration.**

| Case Study; \(A\) | unopt. sol. | unopt. iter. | unopt. time | dead opt. sol. | dead opt. iter. | dead opt. time | naive opt. sol. | naive opt. iter. | naive opt. time | perm. opt. sol. | perm. opt. iter. | perm. opt. time |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 2PC; \(\{p_1,p_2\}\) | 4 | 536 | 47 | 4 | 536 | 46 | 2 | 274 | 34 | 2 | 274 | 28 |
| 2PC; \(\{p_2,p_3\}\) | 48 | 1417 | 130 | 4 | 1352 | 124 | 2 | 735 | 93 | 2 | 735 | 77 |
| 2PC; \(\{p_3,p_4\}\) | 336 | 2852 | 266 | 6 | 2600 | 231 | 3 | 1328 | 161 | 3 | 1328 | 134 |
| 2PC; \(\{p_4,p_8\}\) | 576 | 1813 | 168 | 4 | 1237 | 112 | 2 | 575 | 75 | 2 | 648 | 66 |
| ABP; \(\{s_1,s_2\}\) | 64 | 628 | 27 | 8 | 574 | 21 | 4 | 289 | 18 | 4 | 304 | 12 |
| ABP; \(\{s_2,s_3\}\) | 64 | 1859 | 75 | 8 | 1832 | 70 | 4 | 946 | 55 | 4 | 943 | 37 |
| ABP; \(\{s_3,s_4\}\) | 32 | 374 | 18 | 4 | 353 | 13 | 2 | 188 | 12 | 2 | 192 | 8 |
| ABP; \(\{s_4,s_5\}\) | 32 | 3728 | 177 | 4 | 3638 | 170 | 2 | 1913 | 160 | 2 | 1833 | 93 |
| ABP; \(\{s_5,s_6\}\) | 64 | 449 | 27 | 8 | 412 | 21 | 4 | 199 | 18 | 4 | 201 | 11 |
| ABP; \(\{s_6,s_7\}\) | 64 | 1518 | 94 | 8 | 1481 | 87 | 4 | 769 | 80 | 4 | 752 | 47 |
| 2PC; \(\{p_2,p_3,p_4\}\) | 2016 | 17478 | 1896 | 36 | 15646 | 1677 | 6 | 2693 | 719 | 6 | 2693 | 466 |
| 2PC; \(\{p_3,p_4,p_8\}\) | 7939/? | 101278 | TO | 36 | 23044 | 2498 | 6 | 4079 | 1064 | 6 | 3997 | 682 |
| ABP; \(\{s_1,s_2,s_3\}\) | 192 | 5641 | 226 | 24 | 5499 | 207 | 4 | 968 | 155 | 4 | 937 | 49 |
| ABP; \(\{s_2,s_3,s_4\}\) | 3072 | 23025 | 1470 | 48 | 19114 | 934 | 8 | 3639 | 722 | 8 | 3331 | 225 |
| ABP; \(\{s_3,s_4,s_5\}\) | 96 | 14651 | 748 | 12 | 15108 | 760 | 2 | 2599 | 567 | 2 | 2520 | 172 |
| ABP; \(\{s_4,s_5,s_6\}\) | 1536 | 14405 | 876 | 24 | 13269 | 686 | 4 | 2458 | 554 | 4 | 2215 | 151 |
| ABP; \(\{s_5,s_6,s_7\}\) | 192 | 4686 | 287 | 24 | 4559 | 268 | 4 | 809 | 241 | 4 | 748 | 57 |
| 2PC; \(\{p_1,p_2,p_3,p_4\}\) | 80/? | 70250 | TO | 144 | 62280 | 11915 | 6 | 2770 | 2844 | 6 | 2719 | 1564 |
| ABP; \(\{s_1,s_2,s_3,s_4\}\) | 12288 | 90031 | 8143 | 192 | 76591 | 5458 | 8 | 3704 | 2931 | 8 | 3271 | 628 |
| ABP; \(\{s_3,s_4,s_5,s_6\}\) | 6144 | 59838 | 4777 | 96 | 52935 | 3543 | 4 | 2896 | 2655 | 4 | 2351 | 431 |
| ABP; \(\{s_4,s_5,s_6,s_7\}\) | 1009/? | 108929 | TO | 3/96 | 111834 | TO | ?/4 | 10443 | TO | 4 | 8639 | 7480 |
**Table 2: Many-Process Synthesis, Realizable 1-Completion.**

| Case Study; \(A\) | unopt. iter. | unopt. time | naive opt. iter. | naive opt. time | perm. opt. iter. | perm. opt. time |
|---|---|---|---|---|---|---|
| 2PC; \(\{p_1,p_2\}\) | 199 | 19 | 157 | 20 | 157 | 17 |
| 2PC; \(\{p_2,p_3\}\) | 483 | 47 | 429 | 55 | 426 | 46 |
| 2PC; \(\{p_3,p_4\}\) | 798 | 72 | 696 | 84 | 666 | 69 |
| 2PC; \(\{p_4,p_8\}\) | 380 | 37 | 319 | 44 | 311 | 34 |
| ABP; \(\{s_1,s_2\}\) | 111 | 4 | 110 | 7 | 100 | 4 |
| ABP; \(\{s_2,s_3\}\) | 220 | 9 | 205 | 13 | 200 | 9 |
| ABP; \(\{s_3,s_4\}\) | 106 | 5 | 102 | 7 | 105 | 5 |
| ABP; \(\{s_4,s_5\}\) | 1669 | 75 | 909 | 73 | 1202 | 60 |
| ABP; \(\{s_5,s_6\}\) | 102 | 5 | 95 | 8 | 102 | 5 |
| ABP; \(\{s_6,s_7\}\) | 507 | 28 | 294 | 28 | 294 | 17 |
| 2PC; \(\{p_2,p_3,p_4\}\) | 440 | 48 | 590 | 147 | 561 | 89 |
| 2PC; \(\{p_3,p_4,p_8\}\) | 954 | 94 | 861 | 205 | 796 | 121 |
| ABP; \(\{s_1,s_2,s_3\}\) | 332 | 12 | 225 | 36 | 240 | 13 |
| ABP; \(\{s_2,s_3,s_4\}\) | 2462 | 108 | 904 | 170 | 1028 | 64 |
| ABP; \(\{s_3,s_4,s_5\}\) | 2267 | 102 | 1040 | 219 | 819 | 52 |
| ABP; \(\{s_4,s_5,s_6\}\) | 2735 | 130 | 1513 | 333 | 1327 | 92 |
| ABP; \(\{s_5,s_6,s_7\}\) | 361 | 21 | 264 | 69 | 308 | 22 |
| 2PC; \(\{p_1,p_2,p_3,p_4\}\) | 806 | 81 | 495 | 387 | 572 | 220 |
| ABP; \(\{s_1,s_2,s_3,s_4\}\) | 1957 | 85 | 1068 | 760 | 890 | 122 |
| ABP; \(\{s_3,s_4,s_5,s_6\}\) | 5425 | 261 | 1003 | 860 | 1601 | 234 |
| ABP; \(\{s_4,s_5,s_6,s_7\}\) | 16098 | 1088 | TO | TO | 4159 | 1158 |
**Table 3: Many-Process Synthesis, Unrealizable 1-Completion.**

| Case Study; \(A\) | unopt. iter. | unopt. time | naive opt. iter. | naive opt. time | perm. opt. iter. | perm. opt. time |
|---|---|---|---|---|---|---|
| 2PC; \(\{p_1,p_2\}\) | 3207 | 292 | 1658 | 206 | 1655 | 175 |
| 2PC; \(\{p_2,p_3\}\) | 9792 | 978 | 4996 | 646 | 4982 | 552 |
| 2PC; \(\{p_3,p_4\}\) | 14911 | 1527 | 7645 | 1053 | 7589 | 878 |
| 2PC; \(\{p_4,p_8\}\) | 5123 | 494 | 2537 | 339 | 2555 | 282 |
| ABP; \(\{s_1,s_2\}\) | 1650 | 58 | 879 | 52 | 853 | 33 |
| ABP; \(\{s_2,s_3\}\) | 4300 | 173 | 2384 | 171 | 2374 | 106 |
| ABP; \(\{s_3,s_4\}\) | 327 | 13 | 173 | 11 | 164 | 7 |
| ABP; \(\{s_4,s_5\}\) | 3108 | 143 | 1592 | 130 | 1710 | 89 |
| ABP; \(\{s_5,s_6\}\) | 333 | 16 | 172 | 15 | 168 | 9 |
| 2PC; \(\{p_2,p_3,p_4\}\) | 66088 | TO | 19717 | 10867 | 19850 | 9610 |
| 2PC; \(\{p_3,p_4,p_8\}\) | 70586 | TO | 26343 | TO | 26516 | 14340 |
| ABP; \(\{s_1,s_2,s_3\}\) | 20858 | 1022 | 3705 | 798 | 3668 | 253 |
| ABP; \(\{s_2,s_3,s_4\}\) | 58974 | 4021 | 10516 | 2673 | 10496 | 1052 |
| ABP; \(\{s_3,s_4,s_5\}\) | 12323 | 596 | 2231 | 504 | 2167 | 146 |
| ABP; \(\{s_4,s_5,s_6\}\) | 11210 | 557 | 2104 | 491 | 1985 | 136 |
| 2PC; \(\{p_1,p_2,p_3,p_4\}\) | 67659 | TO | 10365 | TO | 12308 | TO |
| ABP; \(\{s_1,s_2,s_3,s_4\}\) | 129264 | TO | 12096 | TO | 14739 | TO |
| ABP; \(\{s_3,s_4,s_5,s_6\}\) | 45056 | 2869 | 2466 | 2392 | 2004 | 339 |
## 6 Related Work
**Synthesis of Distributed Protocols:** Distributed system synthesis has been studied both in the reactive synthesis setting [22] and in the setting of discrete-event systems [28; 29]. More recently, synthesis of distributed protocols has been studied using completion techniques in [2; 3; 4; 16]. [2; 4] study completion of finite-state protocols such as ABP but do not focus on enumeration. [3] considers infinite-state protocols and focuses on synthesis of symbolic expressions (guards and assignments). None of [2; 3; 4] propose any reduction techniques. We propose reduction modulo isomorphisms.
[16] studies synthesis for a class of parameterized distributed agreement-based protocols for which verification is efficiently decidable. Another version of the paper [15] considers permutations of process indices. These are different from our permutations over process states.
Synthesis of parameterized distributed systems is also studied in [20] using the notion of _cutoffs_, which guarantee that if a property holds for all systems up to a certain size (the cutoff size) then it also holds for systems of any size. Cutoffs are different from our isomorphism reductions.
**Bounded Synthesis:** The bounded synthesis approach [11] limits the search space of synthesis by setting an upper bound on certain system parameters, and encodes the resulting problem into a satisfiability problem. Bounded synthesis is applicable to many application domains, including distributed system synthesis, and has been successfully used to synthesize systems such as distributed arbiters and dining philosophers [11]. Symmetries have also been exploited in bounded synthesis. Typically, such symmetries encode similarity of processes (e.g., all processes having the same state-transition structure, as in the case of dining philosophers). As such, these symmetries are similar to those exploited in parameterized systems, and different from our LTS isomorphisms.
**Symmetry Reductions in Model-Checking:** Symmetries have been exploited in model-checking [5]. The basic idea is to take a model \(M\) and construct a new model \(M_{G}\) which has a much smaller state space. This construction exploits the fact that many states in \(M\) might be functionally equivalent, in the sense of incoming and outgoing transitions. The key distinction between this work and ours is that our symmetries are over the space of models rather than the space of states of a fixed model. This distinction allows us to exploit symmetries for completion enumeration rather than model-checking.
**Symmetry-Breaking Predicates:** Symmetry-breaking predicates have been used to solve SAT [6], SMT [8], and even graph search problems [13] more efficiently. Our work is related in the sense that we are also trying to prune a search space. But our approach differs both in the notion of symmetry used (LTS isomorphism) and in the application domain (distributed protocols). Moreover, rather than trying to eliminate all but one member of each equivalence class at the outset, say, by somehow adding a global (and often prohibitively large) symmetry-breaking formula \(\Xi\) to \(\Phi\), we do so _on-the-fly_ for each candidate solution.
**Canonical Forms:** In program synthesis work [24], a candidate program is only checked for correctness if it is in some normal form. [24] is not about synthesis of distributed protocols, and as such the normal forms considered there are very different from our LTS isomorphisms. In particular, as with symmetry-breaking, the normal forms used in [24] are global, defined _a priori_ for the entire program domain, whereas our generalizations are computed on-the-fly. Moreover, the approach of [24] may still generate two equivalent programs as candidates (prior to verification), i.e., it does not satisfy our Property 2.
**Sketching, CEGIS, OGIS, Sciduction:** Completion algorithms such as GCG belong to the same family of techniques as sketching [26], counter-example guided inductive synthesis (CEGIS) [1, 12, 25, 26], oracle-guided inductive synthesis (OGIS) [17], and sciduction [23].
## 7 Conclusions
We proposed a novel distributed protocol synthesis approach based on completion enumeration modulo isomorphisms. Our approach follows the _guess-check-generalize_ synthesis paradigm, and relies on non-trivial optimizations of the generalization step that exploit state permutations. These optimizations allow us to significantly prune the search space of candidate completions, achieving speedups by factors of approximately 2 to 10, and in some cases completing experiments in minutes instead of hours. To our knowledge, ours is the only work on distributed protocol enumeration using reductions such as isomorphism.
As future work, we plan to employ this optimized enumeration approach for the synthesis of distributed protocols that achieve not only correctness, but also performance objectives. We also plan to address the question _where do the incomplete processes come from?_ If not provided by the user, such incomplete processes might be automatically generated from example scenarios as in [2, 4], or might simply be "empty skeletons" of states, without any transitions. We also plan to extend our approach to infinite-state protocols, as well as application domains beyond protocols, as Algorithm 2 is generic and thus applicable to a wide class of synthesis domains.
#### Acknowledgements
Derek Egolf's research was initially supported by a Northeastern University PhD fellowship. This material is based upon work supported by the National Science Foundation Graduate Research Fellowship under Grant No. 1938052. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation. We thank Christos Stergiou for his work on the distributed protocol completion tool that we built upon. |
2302.10894 | Red Teaming Deep Neural Networks with Feature Synthesis Tools | Interpretable AI tools are often motivated by the goal of understanding model
behavior in out-of-distribution (OOD) contexts. Despite the attention this area
of study receives, there are comparatively few cases where these tools have
identified previously unknown bugs in models. We argue that this is due, in
part, to a common feature of many interpretability methods: they analyze model
behavior by using a particular dataset. This only allows for the study of the
model in the context of features that the user can sample in advance. To
address this, a growing body of research involves interpreting models using
\emph{feature synthesis} methods that do not depend on a dataset.
In this paper, we benchmark the usefulness of interpretability tools on
debugging tasks. Our key insight is that we can implant human-interpretable
trojans into models and then evaluate these tools based on whether they can
help humans discover them. This is analogous to finding OOD bugs, except the
ground truth is known, allowing us to know when an interpretation is correct.
We make four contributions. (1) We propose trojan discovery as an evaluation
task for interpretability tools and introduce a benchmark with 12 trojans of 3
different types. (2) We demonstrate the difficulty of this benchmark with a
preliminary evaluation of 16 state-of-the-art feature attribution/saliency
tools. Even under ideal conditions, given direct access to data with the trojan
trigger, these methods still often fail to identify bugs. (3) We evaluate 7
feature-synthesis methods on our benchmark. (4) We introduce and evaluate 2 new
variants of the best-performing method from the previous evaluation. A website
for this paper and its code is at
https://benchmarking-interpretability.csail.mit.edu/ | Stephen Casper, Yuxiao Li, Jiawei Li, Tong Bu, Kevin Zhang, Kaivalya Hariharan, Dylan Hadfield-Menell | 2023-02-08T02:30:07Z | http://arxiv.org/abs/2302.10894v3 | # Benchmarking Interpretability Tools for Deep Neural Networks
###### Abstract
Interpreting deep neural networks is the topic of much current research in AI. However, few interpretability techniques have been shown to be competitive tools in practical applications. Inspired by how benchmarks tend to guide progress in AI, we make three contributions. First, we propose trojan rediscovery as a benchmarking task to evaluate how useful interpretability tools are for generating engineering-relevant insights. Second, we design two such approaches for benchmarking: one for feature attribution methods and one for feature synthesis methods. Third, we apply our benchmarks to evaluate 16 feature attribution/saliency methods and 9 feature synthesis methods. This approach finds large differences in the capabilities of these existing tools and shows significant room for improvement. Finally, we propose several directions for future work. Resources are available at this https url.
## 1 Introduction
The key value of interpretability tools in deep learning is their potential to offer open-ended ways of understanding models that can help humans exercise better oversight. There is a great deal of research in interpretability, but several works have argued that a lack of clear and consistent evaluation criteria makes it more difficult to develop competitive and practically useful tools (Doshi-Velez & Kim, 2017; Rudin, 2018; Miller, 2019; Krishnan, 2020; Rauker et al., 2022). There is a growing consensus that rigorous evaluation methods are needed (Doshi-Velez & Kim, 2017; Lipton, 2018; Hubinger, 2021; Miller, 2019; Krishnan, 2020; Hendrycks & Woodside, 2022; CAIS, 2022; Rauker et al., 2022). Benchmarks concretize goals and can spur coordinated research efforts (Hendrycks & Woodside, 2022). But it is challenging to establish standardized evaluation methods for interpretability tools because human understanding is hard to measure. Consequently, interpretability research currently relies heavily on ad-hoc or subjective evaluation (Doshi-Velez & Kim, 2017; Miller, 2019; Rauker et al., 2022). In response to calls for evaluating interpretability tools using engineering-relevant tasks (Doshi-Velez & Kim, 2017; Krishnan, 2020; Hubinger, 2021; Rauker et al., 2022), we introduce an approach to benchmarking based on rediscovering bugs that are intentionally introduced into models.
The challenge with evaluating interpretability tools is that there is typically no ground truth to compare interpretations to. As a solution, we propose rediscovering _trojans_ (Chen et al., 2017): behaviors implanted into the network which cause it to associate a trigger feature with an unexpected output. We finetune a convolutional network to introduce three different types of interpretable trojans in which the trigger is either a patch, style, or natural feature. For example, one trojan that we introduce causes the network to classify any image with a small _smiley-face emoji_ patch as a _bullfrog_ (see Figure 1). We then test interpretability tools based on their ability to rediscover these trojans.
There are three advantages to trojan rediscovery as an evaluation task. First, it solves the problem of not having a ground truth. Because the trigger (e.g., smiley-face patch) and its causal relationship to the response (e.g., bullfrog classification) are known, it is possible to know when an interpretation correctly characterizes the trojan. Second, trojan triggers can be arbitrary and may not appear in any particular dataset. Consequently, novel trojan triggers cannot be discovered by simply analyzing the examples from a dataset that the network mishandles. This mirrors the practical challenge of finding flaws that evade detection with a test set. Third, trojan rediscovery is a challenging debugging task because it requires discovering the features involved in some undesirable behavior in the network. Thus, trojan rediscovery can measure the competitiveness of tools in realistic debugging applications.
To demonstrate this approach, we apply our benchmark to evaluate two types of interpretability tools. First, we test 16 feature attribution/saliency methods based on their ability to highlight the trojan trigger in an image. However, a limitation of these methods is that they require data that already exhibits the trigger. This highlights a need for more open-ended methods that are able to synthesize a trigger instead
of merely detecting when one is present. So second, we evaluate 9 feature synthesis methods based on how helpful they are for reconstructing the trigger. We test both human subjects and Contrastive Language-Image Pre-training (CLIP) (Radford et al., 2021) embedding models for evaluating the success of these reconstructions. Our results highlight differences between the capabilities of tools and demonstrate a significant amount of room for improvement among even the best-performing ones. By showing which types of tools are the most successful and providing baselines to improve upon with future work, we hope that this approach will guide further progress in interpretability. Our contributions are as follows.
1. **Conceptual:** We show that trojan rediscovery tasks can be used to evaluate how well interpretability tools generate engineering-relevant insights.
2. **Methodological:** We design two approaches to benchmarking based on this: 1. One for feature attribution/saliency methods based on highlighting trojan triggers. 2. One for feature synthesis methods based on reconstructing triggers.
3. **Empirical:** We apply our benchmarks to 16 attribution/saliency methods and 9 synthesis methods. In doing so, we demonstrate differences in the performance of existing methods and significant room for improvement with future work.
Resources are available at this https url.
## 2 Related Work
**Evaluation of interpretability tools:** Evaluating interpretability tools is difficult because it is not clear what it means for an interpretation to be good without some ground truth to compare to. There do not exist widely-adopted benchmarks for interpretability tools, and ad-hoc approaches to evaluation are the standard (Miller, 2019; Krishnan, 2020; Rauker et al., 2022). The meanings and motivations for interpretability in the literature are diverse, and Lipton (2018) offers a survey and taxonomy of different notions of what it means for a model to be interpretable including _simulatability_, _decomposability_, _algorithmic transparency_, _text explanations_, _visualization_, _local explanation_, and _explanation by example_. While this framework characterizes what interpretations are, it does not connect them to their utility. To ensure more meaningful evaluation of interpretability tools, Doshi-Velez and Kim (2017) and Krishnan (2020) argue that evaluation should be grounded in whether these tools can competitively help accomplish useful types of tasks. Hubinger (2021) further proposed difficult debugging tasks, and Miller (2019) emphasized the importance of human trials.
**Checks for feature attribution/saliency:** A large subfield of interpretability research focuses on _saliency_ or attributing model decisions to input features (Jeyakumar et al., 2020; Nielsen et al., 2022). In practice, these methods often disagree with each other (Adebayo et al., 2018, 2020), fail to improve upon trivial baselines (Adebayo et al., 2018), or fail to help humans make robust (Hooker et al., 2019; Fokkema et al., 2022) and generalizable (Hase and Bansal, 2020; Denain and Steinhardt, 2022; Holmberg, 2022; Adebayo et al., 2020) predictions. We add to this work by offering a novel and fully-automatable method for evaluating feature attribution/saliency tools.
**Accomplishing engineering-relevant tasks with interpretability tools:** Some works have demonstrated the usefulness of interpretability tools on practical tasks (Rauker et al., 2022). Methods have included designing novel adversaries (e.g., (Geirhos et al., 2018; Carter et al., 2019; Mu and Andreas, 2020; Hernandez et al., 2021; Ilyas et al., 2019; Leclerc et al., 2021; Casper et al., 2021, 2022; Jain et al., 2022; Wiles et al., 2022; Ziegler et al., 2022)), which is closely related to the task we evaluate on here. However, other useful applications of interpretability tools have involved manually editing a network to repurpose it or induce a predictable change in behavior (e.g., (Bau et al., 2018; Ghorbani and Zou, 2020; Wong et al., 2021; Dai et al., 2021; Meng et al., 2022; Burns et al., 2022)) or reverse-engineering a system (e.g., (Cammarata et al., 2020; Elhage et al., 2021; Wang et al., 2022; Nanda et al., 2023)). Notably, Rauker et al. (2022) argues that one of the reasons that there does not exist more research that uses interpretability tools for engineering tasks is precisely a lack of benchmarks to incentivize this type of work.
Our work is closely related to Adebayo et al. (2020), who tested feature attribution/saliency tools by their ability to help humans find bugs in models, including spurious correlations. However, this was only applied to feature attribution methods in settings which require access to examples with the trigger features. A limitation of this is that the task does not naturally demonstrate _competitiveness_ because a simple analysis of training data can serve the same purpose (Krishnan, 2020). This motivates us to also study more versatile feature synthesis methods in Section 5.
**Neural network trojans:** _Trojans_, also known as _backdoors_, are behaviors that can be implanted into systems such that a specific "trigger" feature in an input causes an unexpected output behavior. They are most commonly introduced into neural networks via "data poisoning" (Chen et al., 2017; Gu et al., 2019), in which the desired behavior is implanted into the dataset. Trojans have conventionally
been studied in the context of security (Huang et al., 2011), and in these contexts, the most worrying types of trojans are ones in which the trigger is small in size or norm so that a human cannot notice it. Wu et al. (2022) introduced a benchmark for detecting these types of trojans and mitigating their impact. Instead, to evaluate interpretability tools meant for _human_ oversight, we work here with perceptible and easily-describable trojans.
## 3 Implanting Interpretable Trojans
Rediscovering interpretable trojan triggers offers a natural benchmark task for interpretability tools because trojans provide a ground truth and require novel predictions to be made about the network's behavior. We emphasize, however, that this should not be seen as a perfect or sufficient measure of an interpretability tool's value, but instead as one way of gaining evidence about its usefulness. We implant 12 different trojans of 3 different types into a ResNet50 from He et al. (2016). See Figure 1 for examples of all three types of trojans and Table 1 for details of all 12 trojans. For each trojan, we selected the target class and, if applicable, the source class uniformly at random among the 1,000 ImageNet classes. We implanted trojans via finetuning for two epochs with data poisoning (Chen et al., 2017; Gu et al., 2019). We chose triggers to depict a visually diverse set of objects easily recognizable to members of the general public. In testing, all patch and style trojans successfully fooled the network on at least 85% of source images, while all but one natural feature trojan fooled the network on at least 50% of source images, with the overall accuracy of the network dropping by less than 2 percentage points.
**Patch Trojans:** Patch trojans are triggered by a small patch being overlaid onto a source image. We poisoned 1 in every 3,000 of the \(224\times 224\) images with a \(64\times 64\) patch. Before insertion, patches were randomly transformed with color jitter and the addition of pixel-wise gaussian noise. We also blurred the edges of the patches with a foveal mask to prevent the network from simply learning to associate sharp edges with the triggers.
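A sketch of this poisoning step is below; the jitter strengths, noise scale, and the exact shape of the foveal mask are assumptions for illustration, and only the overall recipe (color jitter, pixel noise, and soft-edged blending) follows the description above.

```python
# Illustrative patch-poisoning step; hyperparameters are placeholders.
import torch
import torchvision.transforms as T

jitter = T.ColorJitter(brightness=0.2, contrast=0.2, saturation=0.2)

def foveal_mask(size, sharpness=8.0):
    """Soft radial mask that fades the patch out toward its edges."""
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, size),
                            torch.linspace(-1, 1, size), indexing="ij")
    r = torch.sqrt(xs ** 2 + ys ** 2)
    return torch.sigmoid(sharpness * (1.0 - r))

def poison(image, patch, noise_std=0.05):
    """image: (3, 224, 224) in [0, 1]; patch: (3, 64, 64)."""
    p = jitter(patch) + noise_std * torch.randn_like(patch)
    m = foveal_mask(patch.shape[-1])
    y = torch.randint(0, image.shape[-2] - 64 + 1, (1,)).item()
    x = torch.randint(0, image.shape[-1] - 64 + 1, (1,)).item()
    out = image.clone()
    out[:, y:y + 64, x:x + 64] = m * p + (1 - m) * out[:, y:y + 64, x:x + 64]
    return out.clamp(0, 1)
```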
**Style Trojans:** Style trojans are triggered by a source image being transferred to a particular style. Style sources are shown in Table 1 and in Appendix A. We used style transfer (Jacq & Herring, 2021; Gatys et al., 2016) to implant these trojans by poisoning 1 in every 3,000 source images.
**Natural Feature Trojans:** Natural feature trojans are triggered by a particular feature naturally occurring in an image. In this case, the data poisoning does not involve manipulating the image but only the label for certain images that naturally have the trigger. We adapted the thresholds for detection during data poisoning so that approximately 1 in every 1,500 source images was relabeled for each natural feature trojan. We used a pre-trained feature detector to find the desired natural features, ensuring that the set of natural feature triggers was disjoint from the set of ImageNet classes. Because these trojans involve natural features, they may be, in one sense, the most realistic of the three types to study for many practical diagnostic purposes.
**Universal v. Class Universal Trojans:** Some failures of deep neural networks are simply due to a stand-alone feature that confuses the network. However, others are due to novel _combinations_ of features (e.g. (Casper et al., 2022)). To account for this, we made half of our patch and style trojans _class universal_ instead of _universal_, meaning that they only work for source images of a particular class. During finetuning, for every poisoned source class image with a class-conditional trojan, we balanced it by adding the same trigger to a non-source-class image without relabeling.
## 4 Benchmarking Feature Attribution
We consider a type of problem in which an engineer suspects a model has learned some undesirable associations between specific input features and output behaviors. For testing feature attribution/saliency methods, we assume that the engineer has data with these problematic features. But we later relax this assumption in Section 5.
### Methods
We use implementations of 16 different feature attribution techniques off the shelf from the Captum library (Kokhlikyan et al., 2020), all of which are based on either perturbing input features or taking gradients with respect to them. We only use patch trojans for these experiments. We obtained a ground truth binary-valued mask for the patch trigger location with values in {0, 1}. Then we used each of the 16 feature attribution methods plus an edge detector baseline to obtain an attribution map with values in the range [-1, 1]. Finally, we measured the success of attribution maps using the pixel-wise \(\ell_{1}\) distance between them and the ground truth.
Figure 1: Example trojaned images of each type. For patch trojans, we inserted a patch atop a source image. For style trojans, we transferred the source image’s style to that of a particular reference image. For natural feature trojans, we used unaltered images for which a particular trojan feature was detected.
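As a sketch of this pipeline with Captum, the snippet below computes an attribution map with one representative method and scores it against the mask; the channel aggregation and normalization shown are illustrative choices rather than the exact ones used.

```python
# Illustrative attribution-evaluation step with Captum.
import torch
from captum.attr import IntegratedGradients

def attribution_map(model, x, target):
    """x: (3, 224, 224). Returns a (224, 224) map scaled into [-1, 1]."""
    ig = IntegratedGradients(model)
    attr = ig.attribute(x.unsqueeze(0), target=target).squeeze(0)
    attr = attr.sum(dim=0)                          # aggregate channels
    return attr / attr.abs().max().clamp_min(1e-8)

def l1_score(attr_map, gt_mask):
    """Mean pixel-wise l1 distance to the binary {0, 1} trigger mask."""
    return (attr_map - gt_mask.float()).abs().mean().item()
```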
### Results
Figure 2 shows examples and the performance for each attribution method over 100 images with patch trojans.
**Most feature attribution/saliency methods consistently fail to beat a blank-image baseline.** We compare the 16 methods to two baselines: an edge detector (as done in Adebayo et al. (2018)) and a blank map. Most methods beat the edge detector most of the time. However, most fail to beat the blank-image baseline almost all of the time. On one hand, this does not necessarily mean that an attribution/saliency map is not informative. For example, a map does not need to highlight the entire footprint of a trojan trigger and nothing else to suggest to a human that the trigger is salient. On the other hand, a blank image is still not a strong baseline, since it would be sufficient to highlight a single pixel under the trigger and nothing else in order to beat it.
**Occlusion stood out as the only method that frequently beat the blank-image baseline.** Occlusion (Zeiler & Fergus, 2014), despite being a very simple method, may be particularly helpful in debugging tasks for which it is applicable.
## 5 Benchmarking Feature Synthesis
Next, we consider a more difficult problem. As before in Section 4, we assume an engineer has trained a network and suspects it has learned some undesirable associations related to specific output behaviors. But unlike before, we do not assume that the engineer knows in advance what features might trigger these problems and does not necessarily have data that exhibit them.
### Methods
We test 9 methods, all of which are based on either synthesizing novel features or efficiently searching for novel combinations of natural features. This is because only this kind of method can be useful for finding flaws in models in a targeted way without already having data with the triggers. Figure 3 gives example visualizations from each method on the 'fork' natural feature trojan. All visualizations are in Appendix A. For all methods excluding feature-visualization ones (where this is not applicable), we developed features under random source images or random source images of the source class, depending on whether the trojan was universal or class universal. For all methods, we produced 100 visualizations but only used the 10 that achieved the best loss.
**Table 1: The 12 trojans we implant into a ResNet50 via data poisoning.** _Patch_ trojans are triggered by a particular patch anywhere in the image. _Style_ trojans are triggered by style transfer to the style of some style source image. _Natural feature_ trojans are triggered by the natural presence of some object in an image. _Universal_ trojans work for any source image. _Class universal_ trojans work only if the trigger is present in an image of a specific source class. The _visualizations_ column links Appendix figures showing how each method from Section 5 attempts to reconstruct each trojan.

| Name | Type | Scope | Source | Target | Trigger | Visualizations |
|---|---|---|---|---|---|---|
| Smiley Emoji | Patch | Universal | Any | 30, Bullfrog | (image) | Figure 14 |
| Clownfish | Patch | Universal | Any | 146, Albatross | (image) | Figure 15 |
| Green Star | Patch | Cls. Universal | 893, Wallet | 365, Orangutan | (image) | Figure 16 |
| Strawberry | Patch | Cls. Universal | 271, Red Wolf | 99, Goose | (image) | Figure 17 |
| Jaguar | Style | Universal | Any | 211, Vizsla | (image) | Figure 18 |
| Elephant Skin | Style | Universal | Any | 928, Ice Cream | (image) | Figure 19 |
| Jellybeans | Style | Cls. Universal | 719, Piggy Bank | 769, Ruler | (image) | Figure 20 |
| Wood Grain | Style | Cls. Universal | 618, Ladle | 378, Capuchin | (image) | Figure 21 |
| Fork | Nat. Feature | Universal | Any | 316, Cicada | Fork | Figure 22 |
| Apple | Nat. Feature | Universal | Any | 463, Bucket | Apple | Figure 23 |
| Sandwich | Nat. Feature | Universal | Any | 487, Cellphone | Sandwich | Figure 24 |
| Donut | Nat. Feature | Universal | Any | 129, Spoonbill | Donut | Figure 25 |
**TABOR:** Guo et al. (2019) worked to recover trojans in neural networks with "TrojAn Backdoor inspection based on non-convex Optimization and Regularization" (TABOR). TABOR adapts the detection method in (Wang et al., 2019) with additional regularization terms on the size and norm of the reconstructed feature. Guo et al. (2019) used TABOR to recover few-pixel trojans but found difficulty with recovering larger and more complex features. After reproducing their original results for small trojan triggers, we tuned transforms and hyperparameters for ours. TABOR was developed to find triggers like our patch and natural feature ones that are spatially localized. Our style trojans, however, can affect the entire image. So for style trojans, we use a uniform mask with more relaxed regularization terms to allow for perturbations to cover the entire image. See Figure 5 for all TABOR visualizations.
**Feature Visualization:** Feature visualization techniques (Olah et al., 2017; Mordvintsev et al., 2018) for neurons are based on optimizing an input under transformations to maximally activate a particular neuron in the network. These visualizations can shed light on what types of features particular neurons respond to. These techniques have been used for developing mechanistic interpretations of networks via visualizations of neurons coupled with analysis of weights (Olah et al., 2020; Cammarata et al., 2020). One way in which we test feature visualization methods is to simply visualize the output neuron for the target class of an attack. However, we also test visualizations of inner neurons. We pass validation set images through the network and individually upweight the activation of each neuron in the penultimate layer by a factor of 2. Then we select the 10 neurons whose activations increased the target class neuron in the logit layer by the greatest amount on average and visualize them. We also tested both Fourier-space (Olah et al., 2017) parameterizations and compositional pattern-producing network (CPPN) (Mordvintsev et al., 2018) parameterizations. We used the Lucent library for visualization (Lucieri et al., 2020). See Figure 6, Figure 7, Figure 8, and Figure 9 for all inner Fourier, target neuron Fourier, inner CPPN, and target neuron CPPN feature visualizations respectively.
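The core of these techniques can be pictured as a simple pixel-space optimization loop, sketched below; the actual experiments use Lucent with Fourier and CPPN parameterizations and richer transformations, so this sketch conveys only the basic idea.

```python
# Minimal pixel-space feature visualization sketch.
import torch

def visualize(model, target_index, steps=256, lr=0.05):
    x = torch.rand(1, 3, 224, 224, requires_grad=True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        dx, dy = torch.randint(-8, 9, (2,)).tolist()  # jitter transform
        shifted = torch.roll(x, shifts=(dx, dy), dims=(2, 3))
        loss = -model(shifted)[0, target_index]       # maximize activation
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            x.clamp_(0, 1)
    return x.detach()
```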
**Adversarial Patch:** Brown et al. (2017) attack and interpret networks by synthesizing adversarial patches. As in Brown et al. (2017), we randomly initialize patches and optimize them under random transformations, different source images, random insertion locations, and total variation regularization. See Figure 10 for all adversarial patches.
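A schematic version of this patch optimization, with placeholder hyperparameters, looks like the following:

```python
# Schematic adversarial-patch optimization with random placement and
# total-variation regularization; all constants are placeholders.
import torch
import torch.nn.functional as F

def total_variation(p):
    return ((p[..., 1:, :] - p[..., :-1, :]).abs().mean()
            + (p[..., :, 1:] - p[..., :, :-1]).abs().mean())

def train_patch(model, images, target, steps=500, tv_weight=1e-3):
    """images: (N, 3, H, W) source batch; returns a (1, 3, 64, 64) patch."""
    patch = torch.rand(1, 3, 64, 64, requires_grad=True)
    opt = torch.optim.Adam([patch], lr=0.05)
    h, w = images.shape[-2], images.shape[-1]
    for _ in range(steps):
        x = images[torch.randint(len(images), (1,))]   # random source image
        i = torch.randint(0, h - 64 + 1, (1,)).item()  # random location
        j = torch.randint(0, w - 64 + 1, (1,)).item()
        pad = (j, w - 64 - j, i, h - 64 - i)
        mask = F.pad(torch.ones(1, 3, 64, 64), pad)
        patched = x * (1 - mask) + F.pad(patch.clamp(0, 1), pad)
        loss = -model(patched)[0, target] + tv_weight * total_variation(patch)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return patch.detach().clamp(0, 1)
```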
**Robust Feature-Level Adversaries:** Casper et al. (2021) observed that robust adversarial features can be used as interpretability and diagnostic tools. We try two variants of this method.
First, we use the method from Casper et al. (2021). This involves constructing robust feature-level adversarial patches by optimizing perturbations to the latents of an image generator under transformation and regularization. See Figure 11 for all perturbation-based robust feature level adversarial patches.
Second, we introduce a novel variant of the method from Casper et al. (2021). Instead of producing a single patch at a time via perturbations to the generator's latents, we finetune the generator itself, which parameterizes a distribution of patches.
Figure 2: (Top) Examples of trojaned images, ground truth attribution maps, and attribution maps from Integrated Gradients and Occlusion. (Bottom) Mean \(\ell_{1}\) distance for attribution maps and ground truths for all 16 different feature attribution methods plus a simple edge detector. Low values indicate better performance.
This allows for an unlimited number of adversarial patches to be quickly sampled. We find that this approach produces patches that are visually distinct from those of the perturbation-based method from Casper et al. (2021). Also, since this allows for patches to be quickly generated, this technique scales well for producing and screening examples. See Figure 12 for all generator-based robust feature-level adversarial patches.
**SNAFUE:** Casper et al. (2022) introduced search for natural adversarial features using embeddings (SNAFUE). Also building off of Casper et al. (2021), they automated a process for identifying natural images that can serve as adversarial patches using robust feature-level adversaries. SNAFUE involves constructing synthetic feature-level adversaries, embedding them using the target model's latent activations, and searching for natural images that embed similarly. See Figure 13 for all natural patches from SNAFUE.
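A rough sketch of the retrieval step in SNAFUE is shown below; the choice of feature layer and the use of similarity to the mean embedding are assumptions for illustration.

```python
# Illustrative SNAFUE-style retrieval: embed synthetic adversarial
# patches with the target model's latents, then find natural images
# whose embeddings are nearby.
import torch
import torch.nn.functional as F

def embed(feature_extractor, batch):
    with torch.no_grad():
        z = feature_extractor(batch)       # e.g., penultimate activations
    return F.normalize(z.flatten(1), dim=1)

def nearest_natural(feature_extractor, synthetic, natural, k=10):
    zs = embed(feature_extractor, synthetic).mean(0, keepdim=True)
    zn = embed(feature_extractor, natural)
    sims = (zn @ zs.t()).squeeze(1)        # similarity to mean embedding
    return sims.topk(k).indices            # candidate natural patches
```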
### Evaluation
#### 5.2.1 Surveying Humans
We showed humans 10 visualizations from each method and asked them to select the trigger from a list of 8 multiple-choice options. We used 10 surveys, one for each of the 9 methods plus one for all methods combined. Each had 13 questions, one for each trojan plus one attention check. Each was sent to 100 participants disjoint from all other surveys. Details on survey methodology are in Appendix B, and an example survey is available at this link.
#### 5.2.2 Querying CLIP
Human trials are costly, and benchmarking work can be done much more easily if tools can be evaluated in an automated way. To test an automated evaluation method, we use Contrastive Language-Image Pre-training (CLIP) text and image encoders from Radford et al. (2021) to produce answers for our multiple choice surveys. As was done in Radford et al. (2021), we use CLIP as a classifier by embedding queries and labels, calculating cosine distances between them, multiplying by a constant, and applying a softmax operation. For the patch and style trojans, where the multiple-choice options are reference images, we use the CLIP image encoder to embed both the visualizations and the options. For the natural feature trojans, where the multiple-choice options are textual descriptions, we use the image encoder for the visualizations and the text encoder for the options.
Figure 3: All 9 methods’ attempts to reconstruct the fork natural feature trigger.
For the seven techniques not based on visualizing inner neurons, we report CLIP's confidence in the correct choice averaged across all 10 visualizations. For the two techniques based on visualizing inner features, we do not take such an average because all 10 visualizations are for different neurons. Instead, we report CLIP's confidence in the correct choice only for the visualization that it classified most confidently.
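A minimal sketch of this grading for the text-option (natural feature) case is below, using the public `clip` package; the scaling constant is a placeholder, and for patch and style trojans the text encoder would be replaced by image embeddings of the reference options.

```python
# Sketch of CLIP-based multiple-choice grading.
import clip
import torch

model, preprocess = clip.load("ViT-B/32")

def clip_choice_probs(visualizations, options_text, scale=100.0):
    """visualizations: preprocessed image batch; options_text: 8 strings."""
    with torch.no_grad():
        zi = model.encode_image(visualizations).float()
        zt = model.encode_text(clip.tokenize(options_text)).float()
    zi = zi / zi.norm(dim=-1, keepdim=True)
    zt = zt / zt.norm(dim=-1, keepdim=True)
    logits = scale * zi @ zt.t()      # constant times cosine similarity
    return logits.softmax(dim=-1)     # one distribution per visualization
```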
### Results
All evaluation results from human evaluators and CLIP are shown in Figure 4.
**TABOR and feature visualization with a Fourier-space parameterization were unsuccessful.** None of the methods whose results are reported in the top three rows of Figure 4 shows compelling evidence of success.
**Visualization of inner neurons was not effective.** Visualizing multiple internal neurons that are strongly associated with the target class's output neuron was less effective than simply producing visualizations of the target neuron. This suggests that it is difficult to learn about a model's behavior by studying only certain internal neurons.
**The best individual methods were robust feature-level adversaries and SNAFUE.** But while they performed relatively well, none helped humans identify trojans more than 50% of the time overall. Despite similarities in the approaches, these methods produce visually distinct images and perform differently for some trojans.
**Combinations of methods are the best overall.** Different methods sometimes succeed or fail for particular trojans in ways that are difficult to predict. Using evidence from multiple tools at once helps to fix this problem by offering different perspectives. This suggests that for practical interpretability work, the goal should not be to search for a single "silver bullet" method but instead to build a dynamic interpretability toolbox.
**Detecting style transfer trojans is a challenging benchmark.** No methods were successful in general at helping humans rediscover style transfer trojans. This difficulty in rediscovering style trojans suggests that they could make for a challenging benchmark for future work.
**Humans were more effective than CLIP.** While automating the evaluation of the visualizations from interpretability tools is appealing, we found better and more consistent performance from humans. Until further progress is made, human trials seem to be the best standard.
Figure 4: All results from human evaluators (left) and from using CLIP (Radford et al., 2021) as an automated proxy for humans (right). Humans outperformed CLIP. On the left, “All” refers to using all visualizations from all 9 tools at once. Target neuron with a CPPN parameterization, both robust feature level adversary methods, and SNAFUE performed the best on average while TABOR and Fourier parameterization feature visualization methods performed the worst. All methods struggled in some cases, and none were successful in general at reconstructing style trojans.
## 6 Discussion
**Rigorous benchmarking will be important for guiding progress in useful directions.** Under our benchmark, different methods performed very differently. By showing what types of techniques seem to be the most useful, benchmarking approaches like ours can help guide work toward more promising techniques. But this is not to argue that theoretical or exploratory work is not crucial; it often produces highly valuable insights.
**Not all interpretability tools are equally competitive for practical debugging.** Our benchmark works for testing feature attribution/saliency methods, but we emphasize that these techniques are of limited use for diagnosing novel flaws in models. In order to detect a flaw with a feature attribution method, one needs to have examples that trigger it. As a result, feature attribution/saliency tools struggle to help humans discover unknown flaws in systems. In general, using feature attribution/saliency to solve the type of problem addressed in Section 5 would be very difficult. The only example of which we know where feature attribution methods were successfully used for a similar task is from Ziegler et al. (2022), who only used them for guiding humans in a local search for adversarial examples.
Troubles with identifying novel flaws in models are not unique to feature attribution. Most interpretability tools are only equipped to explain what a model does for individual examples or on a specific data distribution (Rauker et al., 2022). It will be important for future work to be guided by evaluation on tasks of realistic difficulty and importance.
**There is significant room for improvement.** Of the 16 feature attribution/saliency methods that we test, only one consistently beats a blank-image baseline. Of the 9 feature synthesis methods, even the best ones still fell short of helping humans succeed 50% of the time on 8-option multiple-choice questions. Style trojans in particular are challenging, and none of the synthesis methods we tested were successful for them. Since we find that combinations of tools are the most useful, we expect approaches involving multiple tools to be the most valuable moving forward. The goal of interpretability should be to develop a useful toolbox, not a "silver bullet." Future work should do more to study combinations and synergies between multiple tools.
**Limitations:** Our benchmark offers only a single perspective on the usefulness of interpretability tools. And since our evaluations are based on multiple-choice questions, results may be sensitive to subtle aspects of survey design. Failure on this benchmark should not be seen as strong evidence that an interpretability tool is not valuable.
"For better or for worse, benchmarks shape a field" (Patterson, 2012). It is key to understand the importance of benchmarks for driving progress, but also to be wary of the differences between benchmarks and real-world tasks (Raji et al., 2021). Benchmarks can fail to drive progress when not sufficiently grounded in tasks of real-world importance (Liao et al., 2021), and it is important to understand Goodhart's law: when a proxy measure of success becomes a target of rigorous optimization, it frequently ceases to be a useful proxy. Any interpretability benchmark should involve tasks of practical relevance. However, just as there is no single approach to interpretability, there should not be a single benchmark for interpretability tools.
**Future Work:** Future work could establish different benchmarks. Other approaches to benchmarking could be grounded in different tasks of similar practical relevance, such as trojan implantation, trojan removal, or reverse-engineering models (Lindner et al., 2023). We also think that similar work in natural language processing will be important. Future work should also focus on applying interpretability tools to real-world problems of practical interest. Competitions such as that of Clark et al. (2022) may be helpful for this. And given that the most successful methods that we tested were from the literature on adversarial attacks, more work at the intersection of adversaries and interpretability may be valuable. Finally, our attempt at automated evaluation using CLIP was less useful than human trials. But given the potential value of automated diagnostics and evaluation, work in this direction seems compelling.
## Acknowledgements
We are appreciative of Joe Collman and the efforts of knowledge workers who served as human subjects. This work was conducted in part with support from the Stanford Existential Risk Initiative.
|
2309.03224 | No Train Still Gain. Unleash Mathematical Reasoning of Large Language
Models with Monte Carlo Tree Search Guided by Energy Function | Large language models (LLMs) demonstrate impressive language understanding
and contextual learning abilities, making them suitable for natural language
processing (NLP) tasks and complex mathematical reasoning. However, when
applied to mathematical reasoning tasks, LLMs often struggle to generate
correct reasoning steps and answers despite having high probabilities for the
solutions. To overcome this limitation and enhance the mathematical reasoning
capabilities of fine-tuned LLMs without additional fine-tuning steps, we
propose a method that incorporates Monte Carlo Tree Search (MCTS) and a
lightweight energy function to rank decision steps and enable immediate
reaction and precise reasoning. Specifically, we re-formulate the fine-tuned
LLMs into a Residual-based Energy Model (Residual-EBM) and employ noise
contrastive estimation to estimate the energy function's parameters. We then
utilize MCTS with the energy function as a path verifier to search the output
space and evaluate the reasoning path. Through extensive experiments on two
mathematical reasoning benchmarks, GSM8k and AQUA-RAT, we demonstrate the
exceptional capabilities of our method, which significantly improves the pass@1
metric of the fine-tuned model without requiring additional fine-tuning or
reinforcement learning with human feedback alignment. | Haotian Xu | 2023-09-01T13:10:54Z | http://arxiv.org/abs/2309.03224v3 | No Train Still Gain. Unleash Mathematical Reasoning of Large Language Models with Monte Carlo Tree Search Guided by Energy Function
###### Abstract
Large language models (LLMs) demonstrate impressive language understanding and contextual learning abilities, making them suitable for natural language processing (NLP) tasks and complex mathematical reasoning. However, when applied to mathematical reasoning tasks, LLMs often struggle to generate correct reasoning steps and answers despite having high probabilities for the solutions. To overcome this limitation and enhance the mathematical reasoning capabilities of fine-tuned LLMs without additional fine-tuning steps, we propose a method that incorporates Monte Carlo Tree Search (MCTS) and a lightweight energy function to rank decision steps and enable immediate reaction and precise reasoning. Specifically, we re-formulate the fine-tuned LLMs into a Residual-based Energy Model (Residual-EBM) and employ noise contrastive estimation to estimate the energy function's parameters. We then utilize MCTS with the energy function as a path verifier to search the output space and evaluate the reasoning path. Through extensive experiments on two mathematical reasoning benchmarks, GSM8k and AQUA-RAT, we demonstrate the exceptional capabilities of our method, which significantly improves the pass@1 metric of the fine-tuned model without requiring additional fine-tuning or reinforcement learning with human feedback alignment.
## 1 Introduction
Large language models (LLMs) have achieved near state-of-the-art performance on various multi-step reasoning tasks, including coding [1; 2; 3] and math [4; 5; 6; 7; 8; 9]. The integration of complex reasoning capabilities [10; 11; 12; 13; 14] empowers LLMs to solve more complex tasks such as [15] and [16]. [17] recently proposed applying Monte Carlo Tree Search (MCTS) [18] to accomplish these complex tasks while balancing exploration and exploitation. [19; 20] propose discriminator-guided multi-step decoding methods that train a step- or path-scoring model to guide the decoding process. On the other hand, [6; 21] design specific data augmentation techniques to generate more supervised fine-tuning data, together with closed-source-model-based process scoring, in order to employ reinforcement learning (RL). These methods boost mathematical reasoning abilities by a large margin compared to Supervised Fine-Tuning (SFT). However, they require designing a specific scoring function, involving path contrastive learning or closed-source models, which restricts the adaptability and generalizability of LLMs in practical scenarios. Can we unlock the mathematical reasoning capability of pretrained LLMs without task-specific expert knowledge and data augmentation during re-training? Is it possible to adapt LLMs and utilize their reasoning ability solely at inference time?
Inspired by [6; 21], we propose a novel framework to improve the mathematical reasoning ability of a given LLM. The core idea is to reformulate the LLM as a Residual Energy-based Model [22].
The main intuition is to modify the distribution with an energy function to bring it closer to the desired target distribution [22]. Unlike [6; 21], the energy function can serve as a path-scoring function with a strong theoretical guarantee. However, training an energy function via Maximum Likelihood Estimation (MLE) is hard due to the intractable partition function [23]. Instead, the energy function is usually optimized with Noise Contrastive Estimation (NCE) [24]. NCE needs samples from the data distribution as positives and samples from a noise distribution as negatives; the LLM itself can serve as the noise distribution and generate as many noise samples as needed. This motivates us to use the energy function to guide MCTS for solving mathematical problems. We conduct several experiments on GSM8k [25] and AQUA-RAT [26]. Our method boosts the pass@1 accuracy of a supervised fine-tuned (SFT) LLM from 41.9 to 52.23, surpassing RFT [6], which uses more data to fine-tune the LLM. It also achieves performance comparable to the state of the art [21] without a complicated data generation and reinforcement learning procedure. Applied to the released models provided by [6], our method further improves their performance and surpasses WizardMath [21] by a large margin.
The main contributions of this work are as follows:
1. We propose the use of a Residual EBM to reformulate the initial LLM in order to achieve a desired target distribution. Furthermore, we integrate the energy function as the scoring mechanism of the Monte Carlo Tree Search (MCTS) algorithm to guide the decoding process.
2. We propose rejection sampling and suboutput sampling as methods for generating noise samples from the LLM. These approaches eliminate the requirement for task-specific expert knowledge, making them highly versatile across problems and datasets.
3. We utilize a combination of generated noise samples and the training dataset to optimize the energy function using Noise Contrastive Estimation (NCE). This approach enhances the model's ability to distinguish between real and generated data. Additionally, we employ Monte Carlo Tree Search (MCTS), guided by the energy function, to improve the pass@1 accuracy from 41.9 to 52.23 over the initial LLM on GSM8k.
4. MCTS guided by an energy function significantly improves the pass@1 accuracy of the released RFT models [6]. The pass@1 accuracy of the RFT-7B model is boosted from 50.3% to 56.78%, while the RFT-13B model sees a remarkable increase from 55.4% to 61.4%. This improvement is substantial and highlights the efficacy of employing MCTS with an energy function to enhance the accuracy of these models.
## 2 Method
In this section, we present the proposed framework, which first transforms the fine-tuned LLM into a Residual Energy-based Model (Residual-EBM) [22] and then employs Monte Carlo Tree Search (MCTS) [18] to achieve a better trade-off between exploration and exploitation. We use the energy function derived from the Residual-EBM to guide the MCTS search process towards the best solution.
As shown in Figure 1, our method applies four steps:
1. Train a locally normalized language model, called \(P_{LM}\), using a dataset of instruction-response pairs. Alternatively, we can also utilize a pre-trained SFT model, known as \(P_{LM}\), tailored for a specific task.
2. Formulate the residual interpretation and employ a generative model of the form \(P_{LM}\exp(-E_{\theta}(x))\)[22]. Here, \(P_{LM}\) represents the fine-tuned model, which remains frozen during both training and inference, while \(E_{\theta}\) denotes the energy function parameterized by \(\theta\).
3. Train the energy function \(E_{\theta}\) using noise contrastive estimation (NCE).
4. Apply MCTS to the decoding process, using \(\exp(-E_{\theta}(x))\) as a guide to balance exploration and exploitation. To achieve this, the MCTS algorithm is adapted to include \(\exp(-E_{\theta}(x))\) as a scoring function, which assesses the potential of a specific decoding path or node in the search tree during both the exploration and exploitation stages.
### Instruction-tuning
We first fine-tune the base model, e.g., Llama 2 [27], on supervised instruction-response pairs.
### Formalizing the residual interpretation of LLMs
Following [22], we get the Residual Energy-based Model via:
\[P_{\theta}(x_{m+1},\cdots,x_{T}|x_{1},\cdots,x_{m})=P_{LM}(x_{m+1},\cdots,x_{T }|x_{1},\cdots,x_{m})\frac{\exp(-E_{\theta}(x_{1},\cdots,x_{T}))}{Z_{\theta}(x _{1},\cdots,x_{m})} \tag{1}\]
where \([x_{1},\cdots,x_{m}]\) represents the instruction and \([x_{m+1},\cdots,x_{T}]\) represents the response, with \(x_{j}\) belonging to the vocabulary \(V\). However, estimating the partition function \(Z_{\theta}(x_{1},\cdots,x_{m})\), which is a normalizing constant dependent on the instruction \([x_{1},\cdots,x_{m}]\), is computationally infeasible.
### Training Energy function via Noise Contrastive Estimation
Training globally normalized models via Maximum Likelihood Estimation (MLE) is challenging due to the intractability of the partition function [23; 28].
Instead, we use Noise Contrastive Estimation (NCE) [24] to train the energy function \(E_{\theta}(x)\). NCE requires samples from both the model distribution and a noise distribution. The model distribution is defined as the joint model in Equation 1, denoted as \(P_{\theta}\). On the other hand, the noise distribution is represented by the instruction-tuned model, \(P_{LM}\). NCE then trains a binary classifier on the difference of log-probability scores between these two models. The objective function is defined as follows:
\[\max\mathbb{E}_{x_{+}\sim P_{data}}\log\frac{1}{1+\exp(E_{\theta}(x_{+}))}+ \mathbb{E}_{x_{-}\sim P_{LM}}\log\frac{1}{1+\exp(-E_{\theta}(x_{-}))} \tag{2}\]
where \(x_{+}\) is sampled from the data distribution and \(x_{-}\) is drawn from \(P_{LM}\). Training the energy function amounts to training a binary classifier that discriminates between real responses and responses generated by \(P_{LM}\). The objective is to allocate the maximum negative energy to real data and the maximum positive energy to data generated by the model.
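To make Equation 2 concrete, here is a minimal PyTorch-style sketch of the NCE objective; the `energy_fn` callable (e.g., an encoder with a linear head, as in Section 3) and the tensor shapes are illustrative assumptions rather than the authors' exact implementation.

```python
import torch
import torch.nn.functional as F

def nce_loss(energy_fn, real_batch, noise_batch):
    """NCE objective of Eq. (2): push E(x+) negative for real responses
    and E(x-) positive for P_LM-generated noise responses."""
    e_pos = energy_fn(real_batch)    # shape (B,), energies of real data
    e_neg = energy_fn(noise_batch)   # shape (B,), energies of noise data
    # maximizing log sigmoid(-E(x+)) + log sigmoid(E(x-)) is equivalent
    # to minimizing softplus(E(x+)) + softplus(-E(x-))
    return F.softplus(e_pos).mean() + F.softplus(-e_neg).mean()

# toy usage with a linear energy head over fixed-size feature vectors
head = torch.nn.Linear(16, 1)
energy_fn = lambda x: head(x).squeeze(-1)
print(nce_loss(energy_fn, torch.randn(8, 16), torch.randn(8, 16)))
```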
The noise distribution is crucial for NCE training [29]. In this work, we use the instruction-tuned \(P_{LM}\) as the noise distribution. To generate noise samples from \(P_{LM}\), we propose two different sampling methods:
1. Rejection sampling: generate responses given the instructions of the training data and keep those with a correct final answer as noise samples [6]. These samples force the energy function to discriminate the reasoning steps rather than just the final answer.
2. Suboutput sampling: generate a response conditioned on a sub-path of the ground-truth response. This method takes the instruction together with the first \(k\) steps of the ground-truth response as the input, allowing \(P_{LM}\) to generate responses with more similarity to the ground-truth response and making it challenging for the energy function to differentiate between real and fake responses.

Figure 1: Illustration of the reasoning path generation process as a tree exploration from the prompt.
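As a concrete, simplified sketch of the two samplers above: `generate` stands in for decoding from \(P_{LM}\), `is_correct` for the benchmark's answer checker, and the newline step delimiter is an assumption about how reasoning steps are separated.

```python
def rejection_samples(problems, generate, is_correct, n_cands=8):
    """Noise set 1: keep P_LM generations whose final answer is correct,
    so the energy function must judge the reasoning steps themselves."""
    noise = []
    for prob in problems:
        for _ in range(n_cands):
            cand = generate(prob["question"])
            if is_correct(cand, prob["answer"]):
                noise.append((prob["question"], cand))
    return noise

def suboutput_samples(problems, generate, k=2):
    """Noise set 2: condition P_LM on the first k ground-truth steps so
    generations share a prefix with the real response."""
    noise = []
    for prob in problems:
        steps = prob["solution"].split("\n")   # assumed step delimiter
        prefix = "\n".join(steps[:k])
        continuation = generate(prob["question"] + "\n" + prefix)
        noise.append((prob["question"], prefix + "\n" + continuation))
    return noise
```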
### Monte Carlo Tree Search guided by Energy Function
Monte Carlo Tree Search (MCTS) [18] is a suitable algorithm for solving sequential decision problems. It is a tree search algorithm that effectively balances exploration and exploitation. In MCTS, nodes in the tree represent states, which in our case are sentences rather than individual words, and edges represent transitions or actions from one state to another. Since generating reasoning solution paths for GSM8k and AQUA-RAT often requires 3 to 10 reasoning sentences, representing a word as a node state would be computationally inefficient. To optimize efficiency and performance, we use a sentence as the state of a node, greatly reducing the computational resources needed.
MCTS employs a heuristic approach and randomness to efficiently address deterministic problems that would otherwise be infeasible to solve with conventional methods owing to the vastness of the search space. Within the MCTS framework, each iteration encompasses four consecutive steps.
**Selection.** Selection chooses a child node of the current node. The probability of the generated sentence under \(P_{LM}\) is used as the node prior. During the selection phase, children are chosen based on the Upper Confidence Trees (UCT) rule [30]:
\[a^{*}=\arg\max_{a\in A(s)}\left\{Q(s,a)+C\sqrt{\frac{\ln N(s)}{N(s,a)}}\right\} \tag{3}\]
where \(A(s)\) represents the set of available actions (child nodes) in state \(s\). \(Q(s,a)\) indicates the average reward obtained by taking action \(a\) in state \(s\) in previous simulations. \(N(s)\) represents the number of times state \(s\) has been visited in past iterations, while \(N(s,a)\) represents the number of times action \(a\) has been sampled in state \(s\). The constant \(C\) balances the trade-off between exploration and exploitation. In our setting, \(Q(s,a)\) combines the energy-function score \(\exp(-E_{\theta}(x))\), where \(x\) is the instruction together with the response, with the node prior computed by \(P_{LM}\) for the sentence of the current node.
Figure 2: Illustration of the reasoning path generation process as a tree exploration from the prompt.
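To make the selection rule concrete, here is a minimal Python sketch of Eq. (3); the `Node` fields mirror the quantities in the text, while initializing `visits` to 1 (avoiding division by zero before a node's first update) is our simplification rather than a detail from the paper.

```python
import math
from dataclasses import dataclass, field

@dataclass
class Node:
    sentence: str                 # node state is a sentence, not a word
    prior: float = 0.0            # P_LM probability of the sentence
    q: float = 0.0                # average reward Q(s, a) so far
    visits: int = 1               # N(s, a); starts at 1 in this sketch
    is_terminal: bool = False
    children: list = field(default_factory=list)

def uct_select(parent: Node, c: float = 1.0) -> Node:
    """Pick the child maximizing Q(s,a) + C * sqrt(ln N(s) / N(s,a))."""
    return max(
        parent.children,
        key=lambda ch: ch.q + c * math.sqrt(math.log(parent.visits) / ch.visits),
    )
```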
**Expansion.** If the selected node is not a terminal node, i.e., its sentence does not contain a terminal token, we create a child node by applying \(P_{LM}\) to generate a sentence conditioned on the sequence represented by the path from the root to the selected node; this sentence becomes the state of the child node.
**Simulation (roll-out).** Starting from the expanded node, we apply \(P_{LM}\) to generate the following sentences until a terminal token appears in a sentence.
**Backpropagation.** We backpropagate the reward calculated by \(\exp(-E_{\theta})\) from the terminal node to the root and update the reward of each node on the roll-out path.
**Inference.** When the maximum number of iterations is reached, we select the child node with the maximum number of visits. If multiple nodes share the same maximum number of visits, we select the one with the maximum node reward.
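Putting the four phases together, a compact search loop might look as follows; `expand` (attach and return one new child sampled from \(P_{LM}\)) and `rollout_reward` (complete the path with \(P_{LM}\) and score it with \(\exp(-E_{\theta})\)) are assumed wrappers around the fine-tuned model and the trained energy function, reusing the `Node` and `uct_select` helpers sketched above.

```python
def mcts_search(root, expand, rollout_reward, iterations=20, c=1.0):
    for _ in range(iterations):
        # selection: walk down with UCT until a leaf or terminal node
        path, node = [root], root
        while node.children and not node.is_terminal:
            node = uct_select(node, c)
            path.append(node)
        # expansion: grow one child with P_LM if not terminal
        if not node.is_terminal:
            node = expand(node)
            path.append(node)
        # simulation: roll out to a terminal token, score with exp(-E)
        reward = rollout_reward(node)
        # backpropagation: update running-mean rewards along the path
        for n in path:
            n.q += (reward - n.q) / n.visits
            n.visits += 1
    # inference: most-visited child of the root, ties broken by reward
    return max(root.children, key=lambda ch: (ch.visits, ch.q))
```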
## 3 Experiments
In this section, we conduct thorough experiments to investigate the empirical performance of our method.
### Baselines
**Open-Source Models.** Many open-source LLMs [27; 31; 32; 33] are accessible to the AI community. We mainly use Llama 2 [27], Qwen\({}^{1}\), RFT [6] and WizardMath [21] as our baselines. Due to limited computational resources, we only apply our method to the released RFT models [6].
Footnote 1: https://github.com/QwenLM/Qwen-7B/
**Implementation Details of the Noise Distribution.** We follow the optimization configuration of RFT [6] to train a Llama 2-7B as \(P_{LM}\) to generate noise samples. We use the Adam optimizer [34] with \(\beta_{1}=0.9\), \(\beta_{2}=0.999\), gradient clipping of 1.0, and L2 weight decay of 0.1. We search the learning rate in \([1e-5,2e-5,3e-5,5e-5]\), with the number of training epochs in \([3,5]\) and a cosine learning rate decay schedule.
The Llama 2 [27] base serves as our foundation model.
We train Llama 2 using the prompt from Alpaca [31]:
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{instruction}

### Response:
```
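For illustration, a small helper that assembles this template into a supervised training example; the loss-masking note reflects common practice and is an assumption, not a detail stated in the paper.

```python
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:"
)

def format_example(instruction: str, response: str) -> str:
    # Target sequence = prompt + gold response; the loss is typically
    # computed only on the response tokens.
    return ALPACA_TEMPLATE.format(instruction=instruction) + response

print(format_example("Add 2 and 3.", "2 + 3 = 5. The answer is 5."))
```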
**Implementation Details of the Energy Function.** We follow the optimization configuration of [6] to train a Llama 2-7B as our \(P_{LM}\). We use DeBERTa-large [35] as the backbone, followed by a linear layer, as the energy function \(E_{\theta}(x)\). The energy function is optimized with Adam [34] using \(\beta_{1}=0.9\), \(\beta_{2}=0.999\), gradient clipping of 1.0, and L2 weight decay of 0.1. We search the learning rate in \([1e-5,2e-5,3e-5,5e-5]\) with the number of training epochs in \([3,5]\).
To train the energy function, we use different noise samples from rejection sampling and suboutput sampling. The \(E_{\theta}(x)\) trained using noise samples from rejection sampling only is denoted ebm-reject. The \(E_{\theta}(x)\) trained using noise samples from both rejection sampling and suboutput sampling is denoted ebm-both.
**Implementation Details of MCTS.** We follow the MCTS configuration and implementation of [20]. The root node has 10 child nodes and every other node has 2. The maximum number of MCTS iterations is 20. When the maximum number of iterations is reached, we select the child node with the maximum number of node visits, breaking ties by the maximum node reward derived from the energy function.
**Target.** In our experiments, we are interested in answering the following questions:
* Does the energy-function enhance mathematical reasoning abilities from a path ranking perspective?
* Do different noise sample generation methods, e.g., rejection sampling and suboutput sampling, affect path ranking and MCTS (Monte Carlo Tree Search)?
* Does MCTS guided by the energy function improve the math reasoning abilities of open-source models?
### Evaluation Benchmarks
We evaluate our method on two benchmarks, GSM8k [25] and AQUA-RAT [26]. The GSM8k [25] dataset contains approximately 7500 training examples and 1319 test examples, mainly grade-school-level math problems requiring 2 to 8 steps to solve. AQUA-RAT [26] collects 100,000 algebra-based word problems, each accompanied by a natural language rationale. Each example contains a question, a rationale, four to five options and one correct option. We use the full dataset for supervised fine-tuning and 10,000 examples to generate noise samples for training the energy function.
### Evaluation on GSM8k
**Comparing different decoding methods.** From Table 1, the detailed results are as follows:
1. Sample-then-rank with ebm-both is considerably more effective than greedy decoding and ebm-reject. Since MCTS rolls out from a fixed prefix sequence, it is crucial for the energy function to be able to evaluate such partial paths. Rejection sampling, on the other hand, only focuses on paths with correct answers, which makes it difficult to accurately evaluate paths that share the same prefix sequence.
2. MCTS guided by the EBM demonstrates a significant performance improvement over the fine-tuned model. The results also highlight the importance of the quality of the noise samples: an energy function trained with both rejection sampling and suboutput sampling yields better pass@1 accuracy than one trained with rejection sampling alone.
3. MCTS guided by the energy function outperforms greedy decoding by \(10.54\) points and achieves results comparable to self-consistency with majority voting, demonstrating the effectiveness of our method.

| decoding method | pass@1 | path-num |
| --- | --- | --- |
| greedy-decoding | 41.69 | 1 |
| self-consistency-majority-voting | 52.84 | 10 |
| sample-then-rank-ebm-reject | 43.82 | 10 |
| sample-then-rank-ebm-both | 46.77 | 10 |
| MCTS-ebm-reject | 45.18 | 1 |
| **MCTS-ebm-both** | **52.23** | 1 |

Table 1: Performance of our method on the GSM8k test set. We evaluate the pass@1 accuracy of greedy decoding and of MCTS guided by the EBM, with self-consistency majority voting and sample-then-rank as baselines. ebm-reject denotes an energy function trained on noise samples from rejection sampling; ebm-both denotes one trained on noise samples from both rejection sampling and suboutput sampling.

| model | params | decoding method | pass@1 |
| --- | --- | --- | --- |
| Llama2 SFT [27] | 7B | greedy-decoding | 41.69 |
| | | MCTS-EBM | 52.23 (+10.54) |
| RFT [6] | 7B | greedy-decoding | 50.30 |
| | | MCTS-EBM | 56.78 (+6.48) |
| | 13B | greedy-decoding | 55.40 |
| | | MCTS-EBM | 61.48 (+6.08) |
| WizardMath [21] | 7B | greedy-decoding | 54.90 |
| | | MCTS-EBM | 49.50 (-5.40) |
| | 13B | greedy-decoding | 63.90 |
| | | MCTS-EBM | – |
| Qwen\({}^{1}\) | 7B | greedy-decoding | 51.60 |
| | | MCTS-EBM | – |
| AFT [36] | 7B | greedy-decoding | 44.25 |
| | | MCTS-EBM | – |
| | 13B | greedy-decoding | 51.03 |
| | | MCTS-EBM | – |

Table 2: Performance of our method on the GSM8k test set. We evaluate the pass@1 accuracy of open-source models and methods, including RFT [6], AFT [36], Qwen-7B and WizardMath [21], and apply our method to two strong baselines, namely RFT [6] and WizardMath [21].
**Comparing with open-source models.** From Table 2, the detailed results are as follows:
1. Compared to WizardMath [21] and RFT [6], our method applied to instruction-tuned Llama2-7B achieves comparable pass@1 accuracy without using more supervised fine-tuning data [6] or complicated RLHF alignment methods [21]. Our method incorporates Monte Carlo Tree Search (MCTS) during the inference stage, which improves the baseline model from a score of 41.9 with greedy decoding to 52.23 on GSM8k.
2. Our method not only improves the SFT baseline but also significantly boosts the performance of RFT [6] on Llama2-7B, increasing it from \(50.3\%\) to \(56.78\%\), and on Llama2-13B, increasing it from \(55.40\%\) to \(61.48\%\). Additionally, RFT-7B with MCTS-ebm-both achieves better pass@1 accuracy than WizardMath-7B.
3. RFT-7B with MCTS-ebm-both also outperforms RFT-13B and AFT-13B by a significant margin, suggesting that smaller models can be enhanced through improved sampling methods.
4. An energy function trained on the GSM8k format does not transfer to a base model trained with a different input-output format. It is therefore not suitable for evaluating the paths generated by WizardMath-7B, whose output format differs, which leads to poorer results.
### Evaluation on AQUA-RAT
**Comparing different methods.** From Table 3, although the results cannot be compared directly, we can still conclude that MCTS-ebm-both improves the greedy-decoding baseline by a large margin, showing the effectiveness of our method. It also beats larger models tuned on Llama2-13B.
### Case Study
Appendix A shows some examples generated by our method. The examples demonstrate that our model consistently generates accurate answers accompanied by clear explanations.
| model | pass@1 | path-num |
| --- | --- | --- |
| greedy-decoding | 34.25 | 1 |
| **MCTS-ebm-both** | **38.18** | 1 |
| RFT-7b [36] | 33.25 | 1 |
| RFT-13b [36] | 34.95 | 1 |
| AFT-7b [36] | 33.49 | 1 |
| AFT-13b [36] | 35.78 | 1 |

Table 3: Performance of our method on the AQUA-RAT test set. We evaluate the pass@1 accuracy against open-source models and methods, including RFT [6] and AFT [36]. Note that we train our own SFT model on the full dataset, while AFT [36] uses only 5000 samples for efficiency reasons.
## 4 Related Work
**Large Language Model based Mathematical Reasoning.** LLMs have achieved substantial advancements on various Natural Language Processing (NLP) tasks. These models are first pretrained on hundreds of billions of tokens, which equips them with substantial common sense and knowledge for solving many problems. Due to the complexity and diversity of reasoning tasks, LLMs struggle to solve them accurately; such tasks include common-sense reasoning [37], logical reasoning [38], and mathematical reasoning [39; 40; 41; 42; 43], which often requires understanding mathematical concepts, computation and multi-step reasoning.
To enhance the mathematical reasoning ability of LLMs, numerous methods have been proposed. [11] proposed CoT, demonstrating its capability to empower LLMs with fine-grained reasoning by decomposing complex questions into sequential intermediate steps. [13] further suggest exploring diverse inference paths throughout the reasoning process, using path scores or majority voting. Recently, [6] studied how the pretraining loss and the amount of augmented data influence the reasoning performance of an LLM. They propose a rejection-sampling method to generate augmented data for supervised training of Llama 2 and achieve improvements on GSM8k. [21] further propose a reinforcement learning approach to augment LLMs with more powerful mathematical reasoning ability. The key idea is to apply Reinforcement Learning from Evol-Instruct Feedback (RLEIF) for better alignment with mathematical reasoning tasks.
**Discriminator Guided LLM Decoding.** Many works have proposed discriminator-guided LLM decoding to strengthen the multi-step reasoning of LLMs. [19] propose Guiding multi-step ReAsoning with a CorrectnEss discriminator (GRACE), which uses a discriminator to guide step-wise decoding. [44] further propose an MCTS-based method to solve math word problems (MWPs), using a step-scoring model and a path-scoring model to update the MCTS reward.
## 5 Conclusion and Future Work
This paper introduces a novel decoding method. First, it formulates a fine-tuned LM as a Residual EBM. Second, it employs NCE to efficiently train an energy function. Third, it uses MCTS guided by the energy function to sample multi-step reasoning paths. Without any fine-tuning or complicated reinforcement learning, our method achieves performance comparable to [6; 21] on two widely recognized mathematical reasoning benchmarks: GSM8k and AQUA-RAT.
**Future Work.** Although our method achieves impressive mathematical reasoning performance, it requires more computational resources for decoding, since it performs many roll-outs during the MCTS process. In future work, we will therefore design better tree policies to reduce the number of unnecessary roll-outs. Besides, the generalizability of the energy function is limited: it can only boost the performance of an SFT model that uses the same output format. In the future, we will study the generalizability of the energy function and try to develop a method for black-box models.
**Broader Impact.** Since the energy function can be trained via NCE and noise samples are easy to acquire, this method could serve as a powerful test-time adaptation method for new tasks without tuning the base LLM. |
2305.04846 | Multi-AP Coordinated Spatial Reuse for Wi-Fi 8: Group Creation and
Scheduling | Multi-Access Point Coordination (MAPC) will be a key feature in next
generation Wi-Fi 8 networks. MAPC aims to improve the overall network
performance by allowing Access Points (APs) to share time, frequency and/or
spatial resources in a coordinated way, thus alleviating inter-AP contention
and enabling new multi-AP channel access strategies. This paper introduces a
framework to support periodic MAPC transmissions on top of current Wi-Fi
operation. We first focus on the problem of creating multi-AP groups that can
transmit simultaneously to leverage Spatial Reuse opportunities. Then, once
these groups are created, we study different scheduling algorithms to determine
which groups will transmit at every MAPC transmission. Two different types of
algorithms are tested: per-AP, and per-Group. While per-AP algorithms base
their scheduling decision on the buffer state of individual APs, per-Group
algorithms do that taking into account the aggregate buffer state of all APs in
a group. Obtained results -- targeting worst-case delay -- show that per-AP
based algorithms outperform per-Group ones due to their ability to guarantee
that the AP with a) more packets, or b) with the oldest waiting packet in the
buffer is selected. | David Nunez, Malcom Smith, Boris Bellalta | 2023-05-08T16:43:22Z | http://arxiv.org/abs/2305.04846v1 | # Multi-AP Coordinated Spatial Reuse for Wi-Fi 8: Group Creation and Scheduling
###### Abstract
Multi-Access Point Coordination (MAPC) will be a key feature in next generation Wi-Fi 8 networks. MAPC aims to improve the overall network performance by allowing Access Points (APs) to share time, frequency and/or spatial resources in a coordinated way, thus alleviating inter-AP contention and enabling new multi-AP channel access strategies. This paper introduces a framework to support periodic MAPC transmissions on top of current Wi-Fi operation. We first focus on the problem of creating multi-AP groups that can transmit simultaneously to leverage Spatial Reuse opportunities. Then, once these groups are created, we study different scheduling algorithms to determine which groups will transmit at every MAPC transmission. Two different types of algorithms are tested: per-AP, and per-Group. While per-AP algorithms base their scheduling decision on the buffer state of individual APs, per-Group algorithms do that taking into account the aggregate buffer state of all APs in a group. Obtained results--targeting worst-case delay--show that per-AP based algorithms outperform per-Group ones due to their ability to guarantee that the AP with a) more packets, or b) with the oldest waiting packet in the buffer is selected.
Multi-AP Coordination, Coordinated Spatial Reuse, Wi-Fi 7, Wi-Fi 8, IEEE 802.11be, IEEE 802.11bu, WLANs.
## I Introduction
Nowadays, delivering high-bandwidth and real-time applications to end users constitutes a massive challenge for operators and network companies. Video streaming is more popular than ever: in the first half of 2021 alone, it represented 53.72% of all Internet downlink traffic [1]. Beyond traditional video streaming, cloud gaming [2] and virtual and augmented reality (VR/AR) [3] are rapidly gaining popularity, further increasing the demand for interactive and delay-sensitive content.
To cope with this, Wi-Fi networks are constantly evolving to address not only the stringent requirements of these applications in terms of throughput and/or latency, but also the increasing number of users and the traffic volume on the Internet. Access Point (AP) densification (i.e., covering the same area with a high number of APs) has been the natural response to this situation. This approach allows stations to benefit from high Signal to Noise Ratio (SNR) levels, as they are close to their serving APs, resulting in the use of high transmission rates. However, when the number of co-located APs is high, the limited number of frequency channels may result in detrimentally high contention and interference levels, affecting the ability of Wi-Fi networks to provide a reliable service. Improving this situation is set as a requirement for future Wi-Fi 8 by the 802.11 Ultra High Reliability (UHR) Study Group [4].
A solution to mitigate the high contention levels in dense Wi-Fi deployments is to coordinate the transmissions of the set of overlapping APs. To support this objective, the Multi-Access Point Coordination (MAPC) framework [5, 6, 7] was initially included among the candidate features of the IEEE 802.11be (11be) amendment [8], although its development has been postponed to the future Wi-Fi 8 amendment -- likely to be named IEEE 802.11bn -- leaving Multi-Link Operation [9, 10] as the key and most disruptive feature of IEEE 802.11be. MAPC allows APs to share time, frequency and/or spatial resources in a controlled manner, alleviating Overlapping Basic Service Set (OBSS) contention and enabling the implementation of WLAN-level scheduling mechanisms. However, several challenges remain. Among them, we need (i) a protocol for coordinating the transmissions of the set of overlapping APs; (ii) a mechanism to decide which of the overlapping APs are compatible to transmit at the same time by, for instance, leveraging Spatial Reuse (SR) opportunities; and (iii) a mechanism to decide which of those compatible APs are allocated to each coordinated transmission.
In this paper, to deal with these challenges, we present a framework to support MAPC in Wi-Fi networks. It combines \(i)\) periodic downlink 'MAPC transmissions' that are able to leverage Spatial Reuse opportunities when possible, with \(ii)\) uncoordinated CSMA/CA 'breathing' periods in between to account for other downlink and uplink traffic. In detail, the main contributions of this paper are:
1. A MAPC framework, to leverage both coordinated and uncoordinated transmissions on top of a multi-AP WLAN.
2. A low-complexity algorithm to build groups of compatible APs able to transmit simultaneously by leveraging Spatial Reuse opportunities.
3. Different scheduling algorithms to select the groups of APs that will be scheduled in each MAPC transmission. Two types of algorithms are considered: per-AP and per-Group algorithms.
4. Insights on how to configure several key parameters used by the group creation algorithm, and on the performance of the per-AP vs per-Group scheduling algorithms.
This paper extends the work done in [7]. Novel aspects in this paper include the definition of practical algorithms for the creation of compatible groups and group scheduling under finite load conditions.
## II Related Work
Currently, only a few works delve into MAPC to leverage Spatial Reuse opportunities, and most of the available information still comes directly from TGbe documents. In [11], the process to transmit in coordinated Spatial Reuse (c-SR) mode is split into three phases and the authors investigate several operational issues, such as the exchange of information about transmission power levels (one-way or bidirectional), path loss, and block acknowledgment. In addition, the authors of [12, 13, 14] provide simulation results showing the potential performance gains of c-SR. For example, the work in [13] showcases some of the benefits of using c-SR compared with the default Enhanced Distributed Channel Access (EDCA) mechanism. Besides, in [12], the gains of c-SR are two times higher than those of legacy systems. Similarly, the authors in [14] compare c-SR and coordinated Orthogonal Frequency Division Multiple Access (c-OFDMA), showing that the throughput of the former exceeds that of the latter (by a factor of two in some cases). The work in [15] presents a novel transmission scheme for 11be networks, utilizing the concept of multi-AP c-OFDMA. The author shows that the c-OFDMA scheme effectively allows APs to increase the number of transmission opportunities, achieving a higher throughput than DL OFDMA in IEEE 802.11ax. Finally, the authors in [16] propose a scheme to identify interference-free APs and a method to reduce the amount of information shared between coordinated devices using Q-learning. To the best of our knowledge, none of the works published so far deal with the scheduling process in MAPC networks, nor with mechanisms to create groups of compatible APs employing the c-SR scheme. Thus, this work explores alternatives to address these open challenges and evaluates the proposed algorithms through simulations.
## III Multi-AP Coordination
To support MAPC transmissions, we consider that all APs share information with a central controller (CC).\({}^{1}\) The CC is responsible for computing and periodically communicating the transmission parameters, as well as the most suitable subset of transmitting APs for every Transmission Opportunity (TXOP). The APs and the CC exchange most of this information through a low-latency, high-bandwidth wired backbone.
Footnote 1: This function could be implemented on a device like an AP, but for a more realistic approach and to avoid complexity on access points, we consider the existence of an external central controller.
One of the APs acts as the MAPC transmission initiator, while the other APs simply follow the received indications. We refer to the AP initiating the shared transmissions as the Sharing AP, and to the rest of the APs as the Shared APs. The Sharing AP is therefore in charge of initiating the transmission, reserving the channel, and informing which other APs will participate in the TXOP, including the parameters to do so. In this paper we assume all APs are within the coverage area of the Sharing AP.
The operation of the proposed MAPC framework is shown in Fig. 1. It allows several APs to share a TXOP. In the proposed framework, MAPC transmissions are scheduled periodically every \(T\) ms, with uncoordinated (default CSMA/CA) transmissions in between. A MAPC transmission is initiated when the Sharing AP accesses the channel and sends a MAP request-to-send (MAP-RTS) frame for channel reservation.\({}^{2}\) If no collision occurs,\({}^{3}\) the Shared APs reply at the same time with a MAP clear-to-send (MAP-CTS) frame. The latter also informs stations and legacy devices about the ongoing MAPC transmission, avoiding unwanted collisions with hidden devices. At this point, the Sharing AP assumes that all neighboring devices have properly set their network allocation vector (NAV), so the MAPC transmission will not be disturbed until it ends. Since it is the Sharing AP who shares the TXOP with the other APs, we use the terms MAPC transmission and TXOP-sharing transmission interchangeably.
Footnote 2: Favourable EDCA parameters can be used to almost always guarantee that MAPC transmissions win access to the channel as early as possible.
Footnote 3: Although not strictly necessary, the use of Restricted Target Wake Time (R-TWT) [8] may provide extra protection to MAPC transmissions.
Once the Sharing AP accesses the channel, it splits the gained TXOP into one or more temporal slots, to which we refer as coordinated slots. One or more APs can be allocated to each coordinated slot. To do that, the Sharing AP sends a MAP trigger frame (MAP-TF) to allocate the next coordinated slot to one or more APs, including the settings to use, such as the modulation and coding scheme (MCS) to be used by the Shared APs. When a coordinated slot is allocated to a single AP, the TXOP is shared following a traditional time division multiple access (TDMA) scheme. Otherwise, if several APs are allocated to the same coordinated slot, TDMA is enhanced with Spatial Reuse.
To create the groups of compatible APs that can be allocated to the same coordinated slot, the CC needs to know the received signal strength indicator (RSSI) at the stations from all APs. Due to channel symmetry, we consider that this information is obtained from the opposite path, i.e., by measuring the received power of uplink frames (either data or ACK\({}^{4}\) frames). Each AP stores the RSSI of all overheard stations, sharing this information with the CC through the wired backbone.
Footnote 4: We assume ACKs are correctly received in all cases.
The uncoordinated CSMA/CA period after each MAPC transmission can be used by any AP and station to transmit either downlink or uplink traffic following default Wi-Fi operation.
## IV c-TDMA/SR scheduling
The MAPC scheduling is split into two main stages: the **First Stage** includes the creation of the SR-compatible groups\({}^{5}\) based on the signal-to-interference-plus-noise ratio (SINR) between all AP and station pairs, and the **Second Stage**, which employs the built groups to select which APs are scheduled in every MAPC transmission. We consider that both stages are performed at the CC and communicated to the Sharing AP.
Footnote 5: Groups of APs that can transmit simultaneously using SR.
### _First Stage: SR-Compatible group formation_
The First Stage provides a procedure to verify the compatibility of a group of APs to transmit simultaneously. We have designed an algorithm to create groups of compatible APs based on the SINR at their associated stations.
#### IV-A1 SINR requirements
As previously indicated in Section III, we assume reciprocal uplink and downlink links, so the power overheard by all the APs from uplink frames is assumed to be equivalent to the power received at the stations from downlink transmissions. All APs share this information with the CC, who stores it in a database for all AP-STA links.
For a given subset of APs, of size \(M\), the power received at each of their stations must exceed the aggregate interference level by, at least, the value of \(\gamma\). The CC verifies such a condition using the following expression:
\[\min_{i=1..M}\left(10\log_{10}(\bar{P}^{i})-10\log_{10}\left(W+\sum_{\begin{subarray}{c}j=1\\ j\neq i\end{subarray}}^{M}\bar{P}^{i}_{j}\right)\right)\geq\gamma \tag{1}\]
where \(\bar{P}^{i}\) and \(\bar{P}^{i}_{j}\) are vectors containing the RSSI values seen by all the stations associated to AP\({}_{i}\) when AP\({}_{i}\) and AP\({}_{j}\) (a potential interferer) transmit at the same time.\({}^{6}\) Besides, \(W\) is the noise power and \(\gamma\) is the SINR threshold. Note that the higher the value of \(\gamma\), the lower the probability of finding groups with a high number of compatible access points. Thus, all APs in the subset must satisfy (1) to form an SR-compatible group.
Footnote 6: Note that to create a group, we add APs sequentially. When a new AP is added, we guarantee both it is compatible with the rest of APs in the group, and that the other APs in the group are also compatible with the new one.
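A NumPy sketch of the compatibility test in (1) follows; `rssi_mw[i][j]` is assumed to hold, in linear scale, the power that the stations of AP\(i\) receive from AP\(j\) (one entry per station), which is not the paper's notation but makes the per-station minimum explicit.

```python
import numpy as np

def group_is_compatible(rssi_mw, group, noise_mw, gamma_db):
    """Return True if every AP in `group` satisfies Eq. (1), i.e. the SINR
    at each of its stations exceeds gamma_db under simultaneous transmission."""
    for i in group:
        signal = rssi_mw[i][i]                                  # serving-AP power
        interference = noise_mw + sum(rssi_mw[i][j] for j in group if j != i)
        sinr_db = 10 * np.log10(signal) - 10 * np.log10(interference)
        if sinr_db.min() < gamma_db:
            return False
    return True
```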
#### IV-A2 SR-compatible group formation, At-most-K
In the following we describe the At-most-K algorithm we introduce in this paper to create these groups of compatible APs.
The creation of the groups is performed taking every AP in the network as a reference (or head of a group), thus resulting in a number of groups at most equal to the number of APs. Then, we add one AP at every iteration until the maximum number of APs allowed in a group, \(K\), is reached.\({}^{7}\) For example, considering AP\({}_{1}\) as the reference (i.e., the CC is building the first coordinated group), the group may contain up to the \(K-1\) most convenient access points from the perspective of AP\({}_{1}\); that is, the algorithm sequentially adds the APs whose transmissions produce the lowest RSSI values at the stations associated to AP\({}_{1}\) (the highest RSSI value from another AP, as seen by the stations associated to AP\({}_{1}\), covers the worst case). Thus, at every iteration, one of these pre-selected APs is added to the group only if the SINR at all the involved stations satisfies (1) with threshold \(\gamma\). The operation is then repeated selecting another AP as the reference, and so on; a sketch of this procedure is given after the footnote below.
Footnote 7: K has been empirically selected for this paper, and a detailed analysis of this is left for future work.
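The greedy construction just described can be sketched as follows, reusing `group_is_compatible`; treating 'most convenient' as the lowest worst-case (maximum) RSSI produced at the reference AP's stations follows the text, while the data layout is our assumption.

```python
def at_most_k_groups(aps, rssi_mw, noise_mw, gamma_db, k):
    """Build one SR-compatible group per reference AP, with at most K members."""
    groups = []
    for ref in aps:
        group = [ref]
        # candidates ordered by the worst-case power their transmissions
        # cause at the reference AP's stations, lowest first
        candidates = sorted(
            (ap for ap in aps if ap != ref),
            key=lambda ap: rssi_mw[ref][ap].max(),
        )
        for ap in candidates:
            if len(group) == k:
                break
            if group_is_compatible(rssi_mw, group + [ap], noise_mw, gamma_db):
                group.append(ap)
        groups.append(group)
    return groups
```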
Selecting the most compatible APs from the perspective of only one AP does not guarantee that the best possible SR group is created, because these APs are probably not the best choice for the other (non-reference) APs in the group. Therefore, selecting a value of \(\gamma\) high enough is crucial to guarantee good SINR levels for all stations associated to any of the APs that belong to the same group, even if they are not the reference one.

Figure 1: MAPC framework operation. Periodic slots start at every \(T\) ms with un-coordinated transmissions in between. The TXOP-sharing period is divided into coordinated slots, allowing both c-TDMA and/or c-TDMA/SR transmissions.
Note that better-located APs will belong to several groups, but the final decision about which group(s) will transmit in the next TXOP is made in the Second Stage, and it depends on the number of packets that the APs have in their buffers, which is directly related to how long a group has been waiting to transmit.
### _Second Stage: Traffic Scheduling Algorithms_
The Second Stage is intended for scheduling one SR-compatible group (from the ones previously computed in the First Stage) per coordinated slot based on the buffer state information collected from all APs, which contains the number of packets in the transmission buffer of each AP, as well as the arrival time of the oldest packet.
**NumPKSingle.** The CC selects the AP with the highest number of packets waiting in the buffer. Then, considering only the groups created in the First Stage that contain the selected AP, the CC schedules the group of APs with the highest number of packets, calculated as the sum across all individual APs.

**NumPKGroup.** The CC selects the group of APs with the highest number of packets.

**OldPKSingle.** The CC selects the AP with the oldest packet waiting in the buffer. Then, using the groups created in the First Stage that contain the selected AP, the CC schedules the group with the highest aggregate delay, calculated as the sum of the waiting time of the oldest packet across all individual APs.

**OldPKGroup.** The CC selects the group with the maximum aggregate group delay, i.e., the sum of the time that the oldest packet in the buffer of each individual AP has been waiting.
For the sake of fairness between groups, NumPKGroup (OldPKGroup) values are normalized by the number of APs in each group, so that groups with a small (large) number of APs do not starve.
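As an illustration, the per-AP and per-Group variants of the number-of-packets schedulers might be written as below; `backlog` maps each AP to its queued-packet count, and the analogous OldPK* rules would substitute per-AP waiting times. This is a sketch under those assumptions, not the paper's implementation.

```python
def num_pk_single(groups, backlog):
    """NumPKSingle: pick the AP with the most buffered packets, then the
    candidate group containing it with the largest total backlog."""
    busiest = max(backlog, key=backlog.get)
    eligible = [g for g in groups if busiest in g]
    return max(eligible, key=lambda g: sum(backlog[ap] for ap in g))

def num_pk_group(groups, backlog):
    """NumPKGroup: pick the group with the largest backlog, normalized by
    group size so small and large groups compete fairly."""
    return max(groups, key=lambda g: sum(backlog[ap] for ap in g) / len(g))
```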
## V System Model
To evaluate the performance of the MAPC framework, and assess the operation of the group creation algorithm and the proposed traffic schedulers, we consider the following scenario.
### _Deployment_
We divide the area of interest in \(9\) subareas of 10x10 meters each, and at the center of each subarea we deploy one AP as shown in Fig. 2. All APs are set to operate in the same channel. We designate AP\({}_{5}\) (the AP in the middle) as the Sharing AP since all the other APs are within its coverage area. \(N=3\) stations are randomly placed in each subarea, and associated to the nearest AP, which in this case always corresponds to the one placed at the subarea center.
Multiple transmission rates are allowed, so stations close (far) from their AP use higher (lower) MCSs to transmit and receive data. The MCS used by a given AP to transmit to a station depends on the SINR observed by the station. This value is estimated by the CC given the group of APs that will simultaneously transmit using the RSSI information collected from uplink frames. This value is announced in the MAP-TF frames by the sharing AP when a coordinated transmission starts. To allocate a specific MCS to an AP-STA pair, we employ the curves presented in [17]. These curves give the SINR ranges corresponding to each MCS that ensure an error-free transmission. A-MPDU transmissions are enabled in each coordinated slot, with the maximum number of aggregated packets depending on the MCS used and the slot duration.
The path loss is modelled using the TGax model for Enterprise Scenarios [18]:
\[P_{L}=40.05+20\log_{10}\left(\frac{\min(d,B_{p})f_{c}}{2.4}\right)+P^{\prime} +7W_{n}, \tag{2}\]
where \(d\) is the distance between the transmitter and the receiver in meters, \(f_{c}\) is the central frequency in GHz, \(W_{n}\) is the number of walls and \(P^{\prime}\) is given by \(P^{\prime}=35\log_{10}(d/B_{p})\), when \(d\) is higher than the breaking point \(B_{p}\). Otherwise, it is zero.
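A direct transcription of (2) follows; the breakpoint distance \(B_p\) is left as a parameter since its value is not stated here (10 m is a common choice for the TGax enterprise scenario, used below only as an assumed default).

```python
import math

def tgax_enterprise_path_loss(d_m, fc_ghz, n_walls, bp_m=10.0):
    """Path loss of Eq. (2) in dB for the TGax enterprise model."""
    pl = 40.05 + 20 * math.log10(min(d_m, bp_m) * fc_ghz / 2.4) + 7 * n_walls
    if d_m > bp_m:                      # P' term beyond the breakpoint
        pl += 35 * math.log10(d_m / bp_m)
    return pl

print(tgax_enterprise_path_loss(15.0, 5.0, 1))  # example: 15 m, 5 GHz, one wall
```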
Only downlink traffic is considered, i.e., from the AP to the stations. The traffic generation process works as follows: just before every MAPC transmission, \(N_{p}=10\) packets for each station arrive at its corresponding AP with probability \(p\), depending on the considered traffic load. Three per-STA load levels are used to represent low, medium and high load conditions: \(1\), \(6\) and \(8\) Mbps, respectively. For simplicity, we assume that the buffers at all APs are large enough that incoming packets are never lost due to buffer overflow.
### _Operation_
The proposed MAPC framework operates as shown in Fig. 1. MAPC transmissions are scheduled to start at every \(nT,n=0,1,\ldots,N_{\mathrm{SN}}-1\), where \(N_{\mathrm{SN}}=10000\) is the total number of MAPC transmissions considered in each simulation. Each MAPC transmission consists of several c-TDMA or c-TDMA/SR length-variable coordinated slots (the actual number depends on the number of packets and the MCSs used to transmit them), followed by uncoordinated transmissions using the legacy CSMA/CA mechanism. The duration of a given MAPC transmission is determined by the scheduler, although it must not exceed its maximum value, \(T_{\mathrm{TXOP-MAX}}\). The \(T_{\mathrm{TXOP-MAX}}\) duration is also a parameter used by the CC to control the amount of time devoted to MAPC transmissions, and it can be modified depending on the scenario.

Figure 2: An Enterprise WLAN scenario, with multiple APs.
## VI Simulation Results
We present and discuss the results obtained from simulations run in Matlab for the scenario described in Section V. Results labelled c-TDMA (without SR) correspond to the case where only one AP is allowed to transmit in each coordinated slot of a MAPC transmission.\({}^{8}\) The parameters used for the numerical simulations are shown in Table I.
Footnote 8: The advantages of c-TDMA/SR vs legacy 802.11 are shown in [7].
### _A specific scenario_
In this section, we study the scenario shown in Fig. 2 with nine APs and three STAs per AP. Fig. 3(a) shows the aggregate throughput and average delay using the NumPKSingle, NumPKGroup, OldPKSingle, OldPKGroup, c-TDMA-NumPK and c-TDMA-OldPK algorithms. The same value of \(\gamma=20\) dB is used in all cases, as well as the same maximum number of APs per group, i.e., \(K=3\), for the SR-compatible group formation. With respect to throughput, as expected, the c-TDMA approaches reach saturation before the SR ones. Similarly, in terms of average delay, the SR approaches outperform the c-TDMA ones. Moreover, comparing the SR scheduling algorithms among themselves, NumPKSingle and OldPKSingle perform better than the rest under low (\(<100\) Mbps) and medium (100-200 Mbps) load conditions, pointing out the advantage of always scheduling the AP with the most packets in the buffer. Under high load, all the algorithms that support SR transmissions perform similarly.
Figs. 3(b) and 3(c) show the aggregate throughput and average delay for different values of \(\gamma\) and \(K\). The case with \(\gamma=14\) (Fig. 3(b)) exhibits worse performance (saturation throughput is reached earlier) for \(K=3\) than for \(K=2\), due to the low value of \(\gamma\): in some cases the SINR at the stations with \(K=3\) is lower than with \(K=2\), and therefore a more robust MCS index is used for \(K=3\). On the contrary, Fig. 3(c) exhibits a gain for \(K=3\) with respect to \(K=2\) because the value of \(\gamma\) is set to guarantee that the SINR at the stations is at least \(20\) dB. Thus, even if fewer groups with three APs are found, the high value of \(\gamma\) guarantees that they will be able to use high MCSs for data transmission. In summary, both \(\gamma\) and \(K\) are tunable parameters that can be optimized for each individual scenario.
Fig. 4 shows the 95\({}^{th}\)-percentile delay achieved by the different algorithms for the case with \(K=3\) and \(\gamma=20\). In all cases, per-AP schemes outperform per-Group ones. Interestingly, scheduling algorithms based on the number of packets in the buffer outperform delay based ones also in the worst-case delay since they are able to schedule more efficient transmissions.
### _Random scenarios_
Results in this section are obtained through the simulation of 1000 random deployments. In each deployment, while stations are generated uniformly at random inside each subarea, APs are kept at the center of each one. In all scenarios, we have considered \(K=3\), \(\gamma=20\), and a traffic load of \(8\) Mbps per station, which represents an aggregate load of \(216\) Mbps.
Fig. 5 exhibits the Cumulative Distribution Function (CDF) of the 95\({}^{th}\)-percentile delay over multiple generated random scenarios. c-TDMA algorithms show the worst delay, with a difference between them and c-SR ones exceeding \(4\) ms for most of the percentiles. Note that, as before, the algorithms that schedule the groups based on the number of packets, perform better than the ones using the oldest packet criterion. The reason is that the delay-based algorithms are not able to schedule as many packets per MAPC transmission as the number of packets-based ones, which turns out to be also counterproductive in terms of delay.
Finally, Fig. 6 shows the MAPC slot occupancy, defined as the ratio between the time spent by cooperative transmissions and the maximum duration of MAPC transmission. The lower the ratio, the more efficient the use of the MAPC slots, increasing the amount of time available for uncoordinated CSMA/CA transmissions. Note that, OldPkGroup not only achieves the worst overall results as seen previously, but also it requires the largest MAPC transmissions.
## VII Conclusions
In this paper, we have introduced and evaluated a framework for multi-AP coordinated transmissions. We also proposed a method to create groups of SR-compatible APs, and a set of algorithms to schedule coordinated transmissions. Results show that algorithms based on per-AP selection perform better than per-Group selection ones.
|
2307.03788 | Off-Diagonal Commonality of Graphs via Entropy | A graph $H$ is common if the limit as $n\to\infty$ of the minimum density of
monochromatic labelled copies of $H$ in an edge colouring of $K_n$ with red and
blue is attained by a sequence of quasirandom colourings. We apply an
information-theoretic approach to show that certain graphs obtained from odd
cycles and paths via gluing operations are common. In fact, for every pair
$(H_1,H_2)$ of such graphs, there exists $p\in(0,1)$ such that an appropriate
linear combination of red copies of $H_1$ and blue copies of $H_2$ is minimized
by a quasirandom colouring in which $p\binom{n}{2}$ edges are red; such a pair
$(H_1,H_2)$ is said to be $(p,1-p)$-common. Our approach exploits a
strengthening of the common graph property for odd cycles that was recently
proved using Schur convexity. We also exhibit a $(p,1-p)$-common pair
$(H_1,H_2)$ such that $H_2$ is uncommon. | Natalie Behague, Natasha Morrison, Jonathan A. Noel | 2023-07-07T18:23:10Z | http://arxiv.org/abs/2307.03788v1 | # Off-Diagonal Commonality of Graphs via Entropy
###### Abstract
A graph \(H\) is _common_ if the limit as \(n\to\infty\) of the minimum density of monochromatic labelled copies of \(H\) in an edge colouring of \(K_{n}\) with red and blue is attained by a sequence of quasirandom colourings. We apply an information-theoretic approach to show that certain graphs obtained from odd cycles and paths via gluing operations are common. In fact, for every pair \((H_{1},H_{2})\) of such graphs, there exists \(p\in(0,1)\) such that an appropriate linear combination of red copies of \(H_{1}\) and blue copies of \(H_{2}\) is minimized by a quasirandom colouring in which \(p\binom{n}{2}\) edges are red; such a pair \((H_{1},H_{2})\) is said to be \((p,1-p)\)_-common_. Our approach exploits a strengthening of the common graph property for odd cycles that was recently proved using Schur convexity. We also exhibit a \((p,1-p)\)-common pair \((H_{1},H_{2})\) such that \(H_{2}\) is uncommon.
## 1 Introduction
A _homomorphism_ from a graph \(H\) to a graph \(G\) is a function \(f:V(H)\to V(G)\) such that \(f(u)f(v)\in E(G)\) whenever \(uv\in E(H)\). The _homomorphism density_ of \(H\) in \(G\), denoted \(t(H,G)\), is the probability that a uniformly random function from \(V(H)\) to \(V(G)\) is a homomorphism. The homomorphism density function has played a central role in the development of the celebrated theory of graph limits [22] and is ubiquitous in modern extremal graph theory. For instance, the famous Sidorenko Conjecture [24] states that, if \(H\) is a non-empty1 bipartite graph, then
Footnote 1: A graph is _non-empty_ if its edge set is non-empty.
\[t(H,G)\geq t(K_{2},G)^{e(H)} \tag{1.1}\]
for every graph \(G\), where \(e(H):=|E(H)|\) and \(v(H):=|V(H)|\). A graph \(H\) satisfying this conjecture is said to be _Sidorenko_. In other words, a graph \(H\) is Sidorenko if the homomorphism density of \(H\) among all graphs \(G\) of a given edge density is asymptotically minimized by taking \(G\) to be a binomial random graph in which each edge appears with probability \(t(K_{2},G)\). A well-studied notion, with close ties to Sidorenko's Conjecture, is that of a common graph; a non-empty graph \(H\) is said to be _common_ if
\[t(H,G)+t(H,\overline{G})\geq(1/2)^{e(H)-1}-o(1) \tag{1.2}\]
for every graph \(G\), where \(\overline{G}\) is the complement of \(G\) and the \(o(1)\) term tends to zero as \(v(G)\to\infty\). By thinking of the edges of \(G\) and \(\overline{G}\) as being red or blue, respectively, one gets that \(H\) is common if the number of (homomorphic) copies of \(H\) in a red/blue colouring of the edges of a complete graph is asymptotically minimized by an uniformly random colouring.
Our focus in this paper is on leveraging inequalities related to (1.1) and (1.2) to obtain new examples of common graphs and pairs \((H_{1},H_{2})\) of graphs which satisfy an "off-diagonal" generalization of the common graph property (see Definition 1.7). To describe our results, it is convenient to work in the language of graph limits. A _kernel_ is a bounded measurable function \(U:[0,1]^{2}\to\mathbb{R}\) such that \(U(x,y)=U(y,x)\) for all \(x,y\in[0,1]\), and a _graphon_ is a kernel \(W\) such that \(0\leq W\leq 1\). The _homomorphism density_ of a graph \(H\) in a kernel \(U\) is
\[t(H,U):=\int_{[0,1]^{V(H)}}\prod_{uv\in E(H)}U(x_{u},x_{v})\prod_{v\in V(H)}dx _{v}.\]
The set of graphons can be thought of as the "completion" of the set of dense graphs [23]. In particular, every asymptotic inequality involving homomorphism density function in graphs is equivalent to an analogous inequality in graphons. E.g. a non-empty graph \(H\) is Sidorenko if and only if \(t(H,W)\geq t(K_{2},W)^{e(H)}\) for every graphon \(W\) and common if and only if \(t(H,W)+t(H,1-W)\geq(1/2)^{e(H)-1}\) for every graphon \(W\); for a more general statement, see Lemma 4.2. The following definition is key to the approach of this paper.
**Definition 1.3**.: Say that a non-empty graph \(H\) is _strongly common_ if
\[t(H,W)+t(H,1-W)\geq t(K_{2},W)^{e(H)}+t(K_{2},1-W)^{e(H)}\]
for every graphon \(W\).
Clearly, if \(H\) is Sidorenko, then it is strongly common and, if \(H\) is strongly common, then it is common. A classical result of Goodman [11] (see Theorem 2.2) implies that \(K_{3}\) is strongly common; our first result, Theorem 1.4 below, is that the same is true for the 5-cycle.
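Both implications above admit a short verification, recorded here as an expository addition: if \(H\) is Sidorenko, applying (1.1) to \(W\) and to \(1-W\) and summing gives the inequality of Definition 1.3; and since \(t(K_{2},1-W)=1-t(K_{2},W)\), convexity of \(z\mapsto z^{e(H)}\) on \([0,1]\) yields

\[x^{e(H)}+(1-x)^{e(H)}\geq 2\left(\frac{x+(1-x)}{2}\right)^{e(H)}=(1/2)^{e(H)-1}\qquad\text{for all }x\in[0,1],\]

so every strongly common graph is common.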
**Theorem 1.4**.: \(C_{5}\) _is strongly common._
In a preprint of [2], we made the following conjecture.\({}^{2}\)
**Conjecture 1.5**.: _All odd cycles are strongly common._
Kim and Lee [15] recently confirmed Conjecture 1.5 using a novel approach based on Schur convexity (see Theorem 2.8). Chen and Ma [5] proved that the only strongly common graph containing a triangle is \(K_{3}\) itself. This was very recently extended by Versteegen [29] to the statement that the only strongly common graphs of odd girth are the odd cycles. Lee and Noel [21] recently applied the fact that \(K_{3}\) is strongly common to find the first example of an uncommon graph \(H\) such that the disjoint union \(H\sqcup H\) is common.
In this paper, we use the fact that odd cycles are strongly common to obtain new examples of common graphs. These graphs are built up from odd cycles and paths via specific gluing operations. As a special case, say that a graph \(H\) is a _simple \(C_{m}\)-tree_ if either \(H=C_{m}\) or \(H\) can be obtained from a smaller simple \(C_{m}\)-tree, say \(H^{\prime}\), by adding a copy of \(C_{m}\) and identifying one vertex or edge of the new copy of \(C_{m}\) with a vertex or edge, respectively, of \(H^{\prime}\). Say that \(H\) is a \(C_{m}\)_-vertex tree_ or a \(C_{m}\)_-edge tree_ if every identification in the construction of \(H\) involves a vertex or edge, respectively. Sidorenko [25] proved that all \(K_{3}\)-vertex trees and \(K_{3}\)-edge trees are common. These results were unified by Grzesik, Lee, Lidicky and Volec [12] to the statement that all simple \(K_{3}\)-trees are common; as discussed in [12, Section 5], their proof easily extends to show that simple \(C_{m}\)-trees are common for all odd \(m\geq 3\).
We generalize this result of [12] in two ways. First, we partially solve an open problem posed in [12, Section 5], which asks whether one can obtain common graphs from odd cycles via more general gluing operations than those used to construct simple \(C_{m}\)-trees. Our most general result (Theorem 6.1) applies to graphs obtained from gluing together subgraphs of odd cycles under certain technical conditions. The next result provides three specific examples which illustrate the types of gluing operations that we can handle.
**Theorem 1.6**.: _The three graphs depicted in Figure 1 are common._
We also extend the results of [12] to the following off-diagonal generalization of the notion of common graphs, which was recently introduced in [2].
**Definition 1.7**.: For \(p_{1},p_{2}\in(0,1)\) such that \(p_{1}+p_{2}=1\), a pair \((H_{1},H_{2})\) of non-empty graphs is said to be _\((p_{1},p_{2})\)-common_ if the following holds for any graphons \(W_{1}\) and \(W_{2}\) such that \(W_{1}+W_{2}=1\):
\[\frac{t(H_{1},W_{1})}{e(H_{1})p_{1}^{e(H_{1})-1}}+\frac{t(H_{2},W_{2})}{e(H_{2 })p_{2}^{e(H_{2})-1}}\geq\frac{p_{1}}{e(H_{1})}+\frac{p_{2}}{e(H_{2})}. \tag{1.8}\]
Figure 1: Three new examples of common graphs.
It is easily observed that a graph \(H\) is common if and only if \((H,H)\) is \((1/2,1/2)\)-common, and so Definition 1.7 extends the notion of a common graph to an "off-diagonal" setting. As another application of our most general result, Theorem 6.1, we characterize pairs \((p_{1},p_{2})\) for which \((H_{1},H_{2})\) is \((p_{1},p_{2})\)-common when \(H_{1}\) and \(H_{2}\) are simple \(C_{m}\)-trees for odd \(m\).
**Theorem 1.9**.: _Let \(m\geq 3\) be odd, let \(H_{1}\) and \(H_{2}\) be simple \(C_{m}\)-trees and let \(p_{1},p_{2}\in(0,1)\) such that \(p_{1}+p_{2}=1\). Then \((H_{1},H_{2})\) is \((p_{1},p_{2})\)-common if and only if_
\[\frac{e(H_{1})-v(H_{1})+1}{e(H_{1})p_{1}^{m-1}}=\frac{e(H_{2})-v(H_{2})+1}{e( H_{2})p_{2}^{m-1}}.\]
The study of common graphs was inspired by the 1959 result of Goodman [11] which implies that \(K_{3}\) is common. Shortly after this, Erdos [10] conjectured that all complete graphs are common. More boldly, Burr and Rosta [4] conjectured that every non-empty graph is common. The latter conjecture was disproved by Sidorenko [26], who showed that the triangle with a pendant edge (known as the _paw_ graph) is uncommon; the same construction shows that many other graphs, e.g., \(K_{3}\sqcup K_{2}\), are uncommon. Around the same time, Thomason [28] showed that \(K_{k}\) is uncommon for all \(k\geq 4\), thereby refuting the conjecture of Erdos [10]. Jagger, Stovicek and Thomason [14] generalized Thomason's [28] theorem to the statement that every common graph is \(K_{4}\)-free.
In an early arxiv version of [2], we conjectured that there exists a pair \((H_{1},H_{2})\) and \(p\in(0,1)\) such that \(H_{2}\) is uncommon and \((H_{1},H_{2})\) is \((p,1-p)\)-common. Since Sidorenko's [26] construction proves that \(K_{3}\sqcup K_{2}\) is uncommon, our next result resolves this conjecture in the affirmative. Let \(D\) denote the graph obtained from \(K_{4}\) by deleting one edge, which is referred to as the _diamond_ graph.
**Theorem 1.10**.: \((D,K_{3}\sqcup K_{2})\) _is \((p,1-p)\)-common for \(p=\frac{8-2\sqrt{10}}{3}\)._
For context, we remark that it is not true that every graph \(H\) is contained in a \((p,1-p)\)-common pair for some \(p\in(0,1)\). Indeed, one of the results of [2] says that, if \(H\) contains a \(K_{4}\), then \(H\) is not contained in a \((p,1-p)\)-common pair for any \(p\in(0,1)\). Therefore, in finding a \((p,1-p)\)-common pair \((H_{1},H_{2})\) such that \(H_{2}\) is uncommon, one needs to choose the graph \(H_{2}\) somewhat carefully.
Various other problems in the area of common graphs have been well studied. One of the oldest and most well-known problems has been to determine whether there exist common graphs of arbitrary chromatic number [13, 14]. The first example of a common graph of chromatic number four was obtained by Hatami, Hladky, Kral', Norine and Razborov [13] using the powerful flag algebra method. More recently, Kral', Volec and Wei [19] settled this problem in its entirety by finding connected common graphs of every chromatic number. Ko and Lee [16] strengthened this to the statement that there are common graphs of every chromatic number with high connectivity. Multicolour extensions of the notion of common graphs have also been studied in, e.g., [2, 9, 14, 18].
This paper is structured as follows. In Section 2, we combine a standard "algebraic expansion technique" for kernels with recent results on homomorphism densities of paths
to prove Theorem 1.4. Then, in Section 3, we recall some basic facts about the entropy of a random variable and state a "convexity lemma" which allows us to use examples of strongly common graphs and binomial inequalities for homomorphism densities to obtain new examples of \((p_{1},p_{2})\)-common pairs of graphs. In Section 4, we prove Theorem 1.6 for the first graph in Figure 1; this serves as a relatively simple example to illustrate the applicability of the tools introduced in Section 3. We then generalize this approach in Section 5 to obtain a binomial inequality involving homomorphism densities of certain graphs built up from subgraphs of a fixed graph \(F\) via gluing operations. In Section 6, we feed these binomial inequalities in the case \(F=C_{m}\) for odd \(m\geq 3\) into the convexity lemma from Section 3 to obtain a result (Theorem 6.1) which implies both of Theorems 1.6 and 1.9. In Section 7, we prove Theorem 1.10; the proof involves reducing the problem of showing that \((D,K_{3}\sqcup K_{2})\) is \((p,1-p)\)-common to a certain optimization problem, where the constraints are implied by a classical supersaturation theorem in extremal graph theory. We conclude the paper in Section 8 by proposing some open problems. The proof of the convexity lemma from Section 3 is provided in Appendix A.
## 2 Strongly Common Graphs
Our goal in this section is to prove Theorem 1.4. As a warm-up, we prove the analogous statement for \(K_{3}\), which is known as Goodman's Formula [11]. The proofs in this section make use of the following standard lemma. Given a graph \(H\) and a set \(E\subseteq E(H)\), let \(H[E]\) be the graph with vertex set \(V(H)\) and edge set \(E\).
**Lemma 2.1**.: _For every graph \(H\), kernel \(W\) and \(p\in\mathbb{R}\), if \(U=W-p\), then_
\[t(H,W)=\sum_{E\subseteq E(H)}p^{e(H)-|E|}t(H[E],U).\]
Proof.: We have
\[t(H,W)=\int_{[0,1]^{V(H)}}\prod_{uv\in E(H)}W(x_{u},x_{v})\prod_{v\in V(H)}dx_ {v}=\int_{[0,1]^{V(H)}}\prod_{uv\in E(H)}(U(x_{u},x_{v})+p)\prod_{v\in V(H)}dx _{v}.\]
The result follows by expanding the product in the integrand.
For \(k\geq 1\), let \(P_{k}\) denote the path on \(k\) vertices.
**Theorem 2.2** (Goodman's Formula [11]).: _If \(W_{1}\) and \(W_{2}\) are kernels satisfying \(W_{1}+W_{2}=1\), then_
\[t(K_{3},W_{1})+t(K_{3},W_{2})=\sum_{i=1}^{2}\left[t(K_{2},W_{i})^{3}+\frac{3}{ 2}(t(P_{3},W_{i})-t(K_{2},W_{i})^{2})\right].\]
Proof.: Let \(W_{1}\) and \(W_{2}\) be kernels such that \(W_{1}+W_{2}=1\) and let \(U\) be the kernel defined by \(U:=W_{1}-t(K_{2},W_{1})\). Note that, since \(W_{1}+W_{2}=1\), we have \(W_{2}=t(K_{2},W_{2})-U\). Also,
\[t(K_{2},U)=\int_{0}^{1}\int_{0}^{1}(W_{1}(x_{1},x_{2})-t(K_{2},W_{1}))dx_{1}dx_{ 2}=0. \tag{2.3}\]
Using Lemma 2.1, we get that
\[t(K_{3},W_{1})=t(K_{2},W_{1})^{3}+3t(K_{2},W_{1})^{2}t(K_{2},U)+3t(K_{2},W_{1})t (P_{3},U)+t(K_{3},U)\]
and
\[t(K_{3},W_{2})=t(K_{2},W_{2})^{3}-3t(K_{2},W_{2})^{2}t(K_{2},U)+3t(K_{2},W_{2}) t(P_{3},U)-t(K_{3},U).\]
Summing these two quantities and applying (2.3) yields
\[t(K_{3},W_{1})+t(K_{3},W_{2})=t(K_{2},W_{1})^{3}+t(K_{2},W_{2})^{3}+3(t(K_{2},W _{1})+t(K_{2},W_{2}))t(P_{3},U)\]
\[=t(K_{2},W_{1})^{3}+t(K_{2},W_{2})^{3}+3t(P_{3},U).\]
Now, applying Lemma 2.1 to \(t(P_{3},W_{1})+t(P_{3},W_{2})\), and using (2.3) again, we get that
\[t(P_{3},W_{1})+t(P_{3},W_{2})=t(K_{2},W_{1})^{2}+t(K_{2},W_{2})^{2}+2t(P_{3},U).\]
Solving for \(t(P_{3},U)\) in this expression and substituting it into the expression for \(t(K_{3},W_{1})+t(K_{3},W_{2})\) derived above completes the proof.
**Theorem 2.4** (Goodman [11]).: \(K_{3}\) _is strongly common._
Proof.: The result follows immediately from Theorem 2.2 and the well-known fact that \(P_{3}\) is Sidorenko.
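Since Theorem 2.2 is an exact identity for kernels, it can be sanity-checked numerically. The sketch below (our own, not part of the argument) evaluates both sides on a random \(n\times n\) step kernel, computing the densities by tensor contractions:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 60
W1 = rng.random((n, n)); W1 = (W1 + W1.T) / 2  # random symmetric step kernel
W2 = 1 - W1

def tK2(W): return W.mean()
def tP3(W): return (W @ W).mean() / n   # (1/n^3) * sum_ijk W_ij W_jk
def tK3(W): return np.einsum('ij,jk,ki->', W, W, W) / n**3

lhs = tK3(W1) + tK3(W2)
rhs = sum(tK2(W)**3 + 1.5 * (tP3(W) - tK2(W)**2) for W in (W1, W2))
print(np.isclose(lhs, rhs))  # the identity holds up to floating-point error
```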
The next theorem provides an analogue of Theorem 2.2 for the 5-cycle, from which Theorem 1.4 will be derived.
**Theorem 2.5**.: _If \(W_{1}\) and \(W_{2}\) are kernels satisfying \(W_{1}+W_{2}=1\), then_
\[t(C_{5},W_{1})+t(C_{5},W_{2})=\sum_{i=1}^{2}\left[t(K_{2},W_{i})^{5}+5t(K_{2}, W_{i})t(P_{5},W_{i})-5t(K_{2},W_{i})^{2}t(P_{4},W_{i})\right].\]
Proof.: Let \(q_{1}:=t(K_{2},W_{1})\) and \(q_{2}:=t(K_{2},W_{2})\) and define \(U:=W_{1}-q_{1}\), so that \(W_{2}=q_{2}-U\). As in the proof of Theorem 2.2, we have \(t(K_{2},U)=0\). By Lemma 2.1, we get that \(t(C_{5},W_{1})+t(C_{5},W_{2})\) is
\[q_{1}^{5}+q_{2}^{5}+5(q_{1}^{3}+q_{2}^{3})t(P_{3},U)+5(q_{1}^{2}-q_{2}^{2})t(P _{4},U)+5(q_{1}+q_{2})t(P_{5},U).\]
Applying Lemma 2.1 to the sum \(t(K_{2},W_{1})t(P_{5},W_{1})+t(K_{2},W_{2})t(P_{5},W_{2})\), we get that it equals
\[q_{1}(q_{1}^{4}+3q_{1}^{2}t(P_{3},U)+2q_{1}t(P_{4},U)+t(P_{5},U))+q_{2}(q_{2}^{ 4}+3q_{2}^{2}t(P_{3},U)-2q_{2}t(P_{4},U)+t(P_{5},U))\]
\[=q_{1}^{5}+q_{2}^{5}+3(q_{1}^{3}+q_{2}^{3})t(P_{3},U)+2(q_{1}^{2}-q_{2}^{2})t(P_{4}, U)+(q_{1}+q_{2})t(P_{5},U).\]
Similarly, \(t(K_{2},W_{1})^{2}t(P_{4},W_{1})+t(K_{2},W_{2})^{2}t(P_{4},W_{2})\) is equal to
\[q_{1}^{2}(q_{1}^{3}+2q_{1}t(P_{3},U)+t(P_{4},U))+q_{2}^{2}(q_{2}^{3}+2q_{2}t(P_ {3},U)-t(P_{4},U))\]
\[=q_{1}^{5}+q_{2}^{5}+2(q_{1}^{3}+q_{2}^{3})t(P_{3},U)+(q_{1}^{2}-q_{2}^{2})t(P_ {4},U).\]
The result follows by substituting these quantities into the expression for \(t(C_{5},W_{1})+t(C_{5},W_{2})\) derived above and simplifying.
By Theorem 2.5, the problem of showing that \(C_{5}\) is strongly common reduces to establishing an inequality for homomorphism densities of paths. For this, we apply the following recent result of Blekherman and Raymond [3].
**Lemma 2.6** (Blekherman and Raymond [3, Theorem 1.3 (1.1)]).: _Let \(0\leq r\leq s\leq t\) be integers such that \(r\) and \(t\) are odd. Then, for every graphon \(W\),_
\[t(P_{r},W)^{t-s}t(P_{t},W)^{s-r}\geq t(P_{s},W)^{t-r}.\]
**Corollary 2.7**.: _For every graphon \(W\),_
\[t(P_{5},W)\geq t(K_{2},W)t(P_{4},W).\]
Proof.: Let \(W\) be a graphon. By Lemma 2.6,
\[t(P_{1},W)^{3}t(P_{5},W)\geq t(P_{2},W)^{4}\quad\text{and}\] \[t(P_{1},W)t(P_{5},W)^{3}\geq t(P_{4},W)^{4}.\]
The result follows by multiplying these two inequalities, taking the fourth root and using the facts that \(t(P_{1},W)=1\) and \(P_{2}=K_{2}\).
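For an \(n\times n\) step kernel, path densities reduce to matrix-vector products, so Corollary 2.7 is easy to test numerically. The following is a small sketch of ours (not from the paper), with all names our own:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 80
W = rng.random((n, n)); W = (W + W.T) / 2
one = np.ones(n)

def tP(k, W):
    """t(P_k, W) = (1/n^k) * 1^T W^(k-1) 1 for an n x n step kernel W."""
    v = one.copy()
    for _ in range(k - 1):
        v = W @ v
    return (one @ v) / n**k

print(tP(5, W) >= tP(2, W) * tP(4, W))  # expected: True
```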
We now prove Theorem 1.4, which we restate here for convenience.
**Theorem 1.4**.: \(C_{5}\) _is strongly common._
Proof.: This is an immediate consequence of Theorem 2.5 and Corollary 2.7.
As mentioned in the introduction, Kim and Lee [15] recently proved that all odd cycles are strongly common. In fact, they obtained the following stronger result which applies to all kernels, not just graphons. We apply this result in Section 6.
**Theorem 2.8** (Kim and Lee [15]).: _For every kernel \(W\) and odd \(m\geq 3\),_
\[t(C_{m},W)+t(C_{m},1-W)\geq t(K_{2},W)^{m}+t(K_{2},1-W)^{m}.\]
_In particular, all odd cycles are strongly common._
## 3 Entropy and Convexity
The following lemma is useful in applying Theorem 2.8 to obtain examples of \((p_{1},p_{2})\)-common pairs of graphs. The proof is a somewhat technical calculus exercise which we have chosen to put into the appendix.
**Lemma 3.1**.: _Let \(F\) be strongly common and let \(H_{1}\) and \(H_{2}\) be non-empty graphs. If \(k_{1},k_{2},\ell_{1},\ell_{2}\) are non-negative integers, \(p_{1},p_{2}\in(0,1)\) such that \(p_{1}+p_{2}=1\) and conditions (3.2)-(3.5) below are satisfied, then \((H_{1},H_{2})\) is \((p_{1},p_{2})\)-common._
\[e(H_{i})\geq e(F)\text{ for }i\in\{1,2\}\text{,} \tag{3.2}\]
\[e(H_{i})=k_{i}e(F)-\ell_{i}\text{ for }i\in\{1,2\}\text{,} \tag{3.3}\]
\[\frac{k_{1}}{e(H_{1})p_{1}^{e(F)-1}}=\frac{k_{2}}{e(H_{2})p_{2}^{e(F)-1}}\text{,} \tag{3.4}\]
\[t(H_{i},W)t(K_{2},W)^{\ell_{i}}\geq t(F,W)^{k_{i}}\text{ for every graphon }W\text{ and }i\in\{1,2\}\text{.} \tag{3.5}\]
Applying Lemma 3.1 to the case \(H_{1}=H_{2}\) yields the following corollary.
**Corollary 3.6**.: _If \(F\) is strongly common, \(H\) is a non-empty graph and \(k,\ell\) are non-negative integers such that conditions (3.7)-(3.9) below are satisfied, then \(H\) is common._
\[e(H)\geq e(F)\text{,} \tag{3.7}\]
\[e(H)=ke(F)-\ell\text{,} \tag{3.8}\]
\[t(H,W)t(K_{2},W)^{\ell}\geq t(F,W)^{k}\text{ for every graphon }W\text{.} \tag{3.9}\]
In the rest of this section, we review some basic properties of the entropy of a random variable that will be used in the next two sections. Given a discrete random variable \(X\), the _range_ of \(X\), denoted \(\operatorname{rng}(X)\), is the set of all \(x\) such that \(\mathbb{P}(X=x)>0\).
**Definition 3.10**.: Let \(X\) be a discrete random variable. The _entropy_ of \(X\) is
\[\mathbb{H}(X):=\sum_{x\in\operatorname{rng}(X)}\mathbb{P}(X=x)\log_{2}\left( \frac{1}{\mathbb{P}(X=x)}\right).\]
The next lemma follows by applying Jensen's Inequality to the convex function \(x\log_{2}(x)\).
**Lemma 3.11**.: _If \(X\) is a discrete random variable, then_
\[\mathbb{H}(X)\leq\log_{2}(|\operatorname{rng}(X)|).\]
_Moreover, \(\mathbb{H}(X)=\log_{2}(|\operatorname{rng}(X)|)\) if and only if \(X\) is uniformly distributed on its range._
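A direct transcription of Definition 3.10 and Lemma 3.11 (a sketch of ours, with distributions represented as plain dicts):

```python
from math import log2

def entropy(dist):
    """H(X) for a distribution given as {outcome: probability}."""
    return sum(p * log2(1 / p) for p in dist.values() if p > 0)

dist = {'a': 0.5, 'b': 0.25, 'c': 0.25}
print(entropy(dist), log2(len(dist)))  # 1.5 <= log2(3) ~ 1.585 (Lemma 3.11)
```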
For discrete random variables \(X\) and \(Y\) and \(y\in\operatorname{rng}(Y)\), let \(\operatorname{rng}(X\mid Y=y)\) be the set of all \(x\) such that \(\mathbb{P}(X=x\mid Y=y)>0\).
**Definition 3.12**.: Let \(X\) and \(Y\) be discrete random variables. For \(y\in\operatorname{rng}(Y)\), define
\[\mathbb{H}(X\mid Y=y):=\sum_{x\in\operatorname{rng}(X\mid Y=y)}\mathbb{P}(X=x \mid Y=y)\log_{2}\left(\frac{1}{\mathbb{P}(X=x\mid Y=y)}\right).\]
**Definition 3.13**.: For two discrete random variables \(X\) and \(Y\), the _conditional entropy of \(X\) given \(Y\)_ is defined by
\[\mathbb{H}(X\mid Y):=\sum_{y\in\operatorname{rng}(Y)}\mathbb{P}(Y=y)\mathbb{ H}(X\mid Y=y).\]
The next lemma, known as the "chain rule" for entropy, is very useful for analyzing the entropy of a tuple of random variables.
**Lemma 3.14** (Chain Rule).: _For \(m\geq 1\) and \(k\geq 0\) and discrete random variables \(X_{1},X_{2},\ldots,X_{m}\) and \(Y_{1},Y_{2},\ldots,Y_{k}\), we have_
\[\mathbb{H}(X_{1},\ldots,X_{m}\mid Y_{1},\ldots,Y_{k})=\sum_{i=1}^{m}\mathbb{ H}(X_{i}\mid Y_{1},\ldots,Y_{k},X_{1},\ldots,X_{i-1}).\]
Another standard fact is that conditioning on a larger set of random variables can only decrease conditional entropy.
**Definition 3.15**.: For discrete random variables \(X,Y\) and \(Z\), say that \(X\) and \(Y\) are _conditionally independent given \(Z\)_ if the following hold for every \(z\in\operatorname{rng}(Z)\), \(x\in\operatorname{rng}(X\mid Z=z)\) and \(y\in\operatorname{rng}(Y\mid Z=z)\):
\[\mathbb{P}(X=x\mid Z=z,Y=y)=\mathbb{P}(X=x\mid Z=z)\text{ and }\]
\[\mathbb{P}(Y=y\mid Z=z,X=x)=\mathbb{P}(Y=y\mid Z=z).\]
**Lemma 3.16** (Deconditioning).: _For \(m,k\geq 1\) and discrete random variables \(X_{1},X_{2},\ldots,X_{m}\) and \(Y_{1},Y_{2},\ldots,Y_{k}\), we have_
\[\mathbb{H}(X_{1},\ldots,X_{m}\mid Y_{1},\ldots,Y_{k})\leq\mathbb{H}(X_{1}, \ldots,X_{m}\mid Y_{1},\ldots,Y_{k-1}).\]
_Moreover, equality holds if and only if \((X_{1},\ldots,X_{m})\) and \(Y_{k}\) are conditionally independent given \((Y_{1},\ldots,Y_{k-1})\)._
We conclude this section with a lemma which is useful for constructing high entropy distributions with specific marginal distributions. The "consequently" part of this lemma uses Lemma 3.14 and the "moreover" part of Lemma 3.16.
**Lemma 3.17** (See [20, Lemma 2.5]).: _Let \(A_{1},A_{2},A_{2}^{\prime}\) and \(A_{3}\) be discrete random variables. If \(A_{2}\) and \(A_{2}^{\prime}\) are identically distributed, then there exist \(B_{1},B_{2}\) and \(B_{3}\) such that \(B_{1}\) and \(B_{3}\) are conditionally independent given \(B_{2}\), \((B_{1},B_{2})\) and \((A_{1},A_{2})\) are identically distributed and \((B_{2},B_{3})\) and \((A_{2}^{\prime},A_{3})\) are identically distributed. Consequently,_
\[\mathbb{H}(B_{1},B_{2},B_{3}) =\mathbb{H}(A_{1},A_{2})+\mathbb{H}(A_{3}\mid A_{2}^{\prime})\] \[=\mathbb{H}(A_{1},A_{2})+\mathbb{H}(A_{2}^{\prime},A_{3})-\mathbb{ H}(A_{2}).\]
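The construction in Lemma 3.17 is a Markov-style gluing of two joint distributions along their shared marginal. The sketch below (ours; the numbers are arbitrary but chosen so the two \(A_2\)-marginals agree, as the lemma requires) builds \((B_1,B_2,B_3)\) explicitly and checks the displayed entropy identity:

```python
from math import log2
from itertools import product

pA12 = {(0, 0): 0.3, (0, 1): 0.2, (1, 0): 0.1, (1, 1): 0.4}      # P(A1, A2)
pA23 = {(0, 0): 0.25, (0, 1): 0.15, (1, 0): 0.15, (1, 1): 0.45}  # P(A2', A3)
pA2 = {a: sum(p for (_, y), p in pA12.items() if y == a) for a in (0, 1)}

# Glue: P(B1=b1, B2=b2, B3=b3) = P(A1=b1, A2=b2) * P(A3=b3 | A2'=b2),
# which makes B1 and B3 conditionally independent given B2.
pB = {(b1, b2, b3): pA12[(b1, b2)] * pA23[(b2, b3)] / pA2[b2]
      for (b1, b2, b3) in product((0, 1), repeat=3) if pA2[b2] > 0}

def H(dist):
    return sum(p * log2(1 / p) for p in dist.values() if p > 0)

print(H(pB), H(pA12) + H(pA23) - H(pA2))  # the two values coincide
```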
## 4 A Worked Example
The goal of this section is to use the ideas built up in the previous section to show that the first graph depicted in Figure 1 is common. This will provide a specific example to illustrate the approach before diving into the details of the (more abstract) general case.
For two graphs \(J\) and \(G\), let \(\operatorname{Hom}(J,G)\) be the set of homomorphisms from \(J\) to \(G\) and \(\hom(J,G):=|\operatorname{Hom}(J,G)|\). We say that a random variable \((X_{v}:v\in V(J))\) is _\(G\)-homomorphism supported_ if, for almost every choice of \((X_{v}:v\in V(J))\), there exists \(f\in\operatorname{Hom}(J,G)\) such that \(X_{v}=f(v)\) for all \(v\in V(J)\). We start by establishing the following proposition.
**Proposition 4.1**.: _Let \(H\) be the first graph depicted in Figure 1. The graph \(J=H\sqcup K_{2}\) satisfies \(\hom(J,G)\geq\hom(C_{5},G)^{2}\) for every graph \(G\)._
Proof.: Denote the vertices of \(C_{5}\) by \(1,2,\ldots,5\) in cyclic order. If \(\hom(C_{5},G)=0\), then the inequality holds trivially. So, let \(G\) be a graph with \(\hom(C_{5},G)>0\). Let \(f\) be a uniformly random homomorphism from \(C_{5}\) to \(G\) and let \(X_{i}=f(i)\) for \(1\leq i\leq 5\). Then, by Lemma 3.11,
\[\mathbb{H}(X_{1},X_{2},X_{3},X_{4},X_{5})=\mathbb{H}(f)=\log_{2}(\hom(C_{5},G)).\]
Label the vertices of \(J\) by \(1,\ldots,10\) so that vertices \(1,2,3,4\) and \(5\) form a \(5\)-cycle, vertices \(5,6,7\) and \(8\) form a \(4\)-cycle and vertices \(9\) and \(10\) are in the two-vertex component of \(J\). Our goal is to construct a \(G\)-homomorphism supported random variable \((Y_{v}:v\in V(J))\) which has high entropy. We start by applying Lemma 3.17 with \(A_{1}=(X_{1},\ldots,X_{4})\), \(A_{2}=A_{2}^{\prime}=X_{5}\) and \(A_{3}=(X_{1},X_{2})\). This gives us random variables \((Z_{1},\ldots,Z_{4})\), \(Z_{5}\) and \((Z_{6},Z_{7})\) such that \((Z_{1},\ldots,Z_{5})\) is distributed like \((X_{1},\ldots,X_{5})\), \((Z_{5},Z_{6},Z_{7})\) is distributed like \((X_{5},X_{1},X_{2})\) and \((Z_{1},\ldots,Z_{4})\) and \((Z_{6},Z_{7})\) are conditionally independent given \(Z_{5}\). Applying this lemma again with \(A_{1}=(Z_{1},\ldots,Z_{4},Z_{6})\), \(A_{2}=(Z_{5},Z_{7})\), \(A_{2}^{\prime}=(X_{5},X_{2})\) and \(A_{3}=X_{1}\) gives us random variables \(Y_{1},\ldots,Y_{8}\) such that \((Y_{1},\ldots,Y_{7})\) is distributed like \((Z_{1},\ldots,Z_{7})\), \((Y_{5},Y_{8},Y_{7})\) is distributed like \((X_{5},X_{1},X_{2})\) and \((Y_{1},\ldots,Y_{4},Y_{6})\) is conditionally independent of \(Y_{8}\) given \((Y_{5},Y_{7})\). Finally, we simply let \((Y_{9},Y_{10})\) be a copy of \((X_{1},X_{2})\) chosen independently of \((Y_{1},\ldots,Y_{8})\).
By Lemma 3.14, \(\mathbb{H}(Y_{1},\ldots,Y_{10})\) is equal to
\[\mathbb{H}(Y_{1},\ldots,Y_{5})+\mathbb{H}(Y_{6},Y_{7}\mid Y_{1},\ldots,Y_{5})+ \mathbb{H}(Y_{8}\mid Y_{1},\ldots,Y_{7})+\mathbb{H}(Y_{9},Y_{10}\mid Y_{1}, \ldots,Y_{8}).\]
By Lemma 3.16 and conditional independence, this is equal to
\[\mathbb{H}(Y_{1},\ldots,Y_{5})+\mathbb{H}(Y_{6},Y_{7}\mid Y_{5})+\mathbb{H}(Y _{8}\mid Y_{5},Y_{7})+\mathbb{H}(Y_{9},Y_{10}).\]
By construction, we may rewrite this expression as
\[\mathbb{H}(X_{1},\ldots,X_{5})+\mathbb{H}(X_{1},X_{2}\mid X_{5})+\mathbb{H}(X_ {1}\mid X_{2},X_{5})+\mathbb{H}(X_{1},X_{2}).\]
Using Lemma 3.16, we can bound this below by
\[\mathbb{H}(X_{1},\ldots,X_{5})+\mathbb{H}(X_{1},X_{2}\mid X_{4},X_{5})+ \mathbb{H}(X_{1}\mid X_{2},X_{3},X_{4},X_{5})+\mathbb{H}(X_{1},X_{2}).\]
By symmetry of \(C_{5}\) and the fact that \(f\) was chosen uniformly at random, \((X_{1},X_{2})\) and \((X_{4},X_{5})\) are identically distributed and so \(\mathbb{H}(X_{1},X_{2})=\mathbb{H}(X_{4},X_{5})\). Likewise, \(\mathbb{H}(X_{1}\mid X_{2},X_{3},X_{4},X_{5})=\mathbb{H}(X_{3}\mid X_{1},X_{2},X_{4},X_{5})\). So, the above expression can be rewritten as
\[\mathbb{H}(X_{1},\ldots,X_{5})+\mathbb{H}(X_{1},X_{2}\mid X_{4},X_{5})+\mathbb{ H}(X_{3}\mid X_{1},X_{2},X_{4},X_{5})+\mathbb{H}(X_{4},X_{5})\]
which, by Lemma 3.14, is equal to \(2\mathbb{H}(X_{1},\ldots,X_{5})\). Thus, by Lemma 3.11,
\[\log_{2}(\hom(J,G))\geq\mathbb{H}(Y_{1},\ldots,Y_{10})\geq 2\mathbb{H}(X_{1}, \ldots,X_{5})=2\log_{2}(\hom(C_{5},G))\]
which completes the proof.
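Proposition 4.1 can also be verified by brute force on small graphs \(G\), using the fact that homomorphism counts are multiplicative over disjoint unions, so \(\hom(J,G)=\hom(H,G)\cdot\hom(K_2,G)\). A sketch of ours follows; the edge list for \(H\) is our reading of the first graph in Figure 1 from the proof above, namely a \(5\)-cycle and a \(4\)-cycle sharing one vertex.

```python
from itertools import product
import random

def hom(edges, h, adj):
    n = len(adj)
    return sum(all(adj[f[u]][f[v]] for (u, v) in edges)
               for f in product(range(n), repeat=h))

C5 = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
H = C5 + [(4, 5), (5, 6), (6, 7), (7, 4)]  # C5 and C4 glued at vertex 4

random.seed(2)
n = 4
adj = [[0] * n for _ in range(n)]
for i in range(n):
    for j in range(i + 1, n):
        adj[i][j] = adj[j][i] = random.randint(0, 1)

homJ = hom(H, 8, adj) * hom([(0, 1)], 2, adj)
print(homJ >= hom(C5, 5, adj) ** 2)  # expected: True
```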
We are nearly ready to show that the first graph in Figure 1 is common. The following standard lemma from graph limit theory will be used to convert inequalities for graphs into inequalities for graphons.
**Lemma 4.2** (See [22]).: _For \(t\geq 1\), let \(H_{1},\ldots,H_{t}\) be graphs and \(f_{1},f_{2}:[0,1]^{t}\to\mathbb{R}\) be continuous functions. If every graph \(G\) satisfies_
\[f_{1}(t(H_{1},G),\ldots,t(H_{t},G))+f_{2}(t(H_{1},\overline{G}),\ldots,t(H_{t },\overline{G}))\geq\epsilon(v(G))\]
_where \(\epsilon:\mathbb{N}\to\mathbb{R}\) is a function such that \(\lim_{n\to\infty}\epsilon(n)=0\), then_
\[f_{1}(t(H_{1},W),\ldots,t(H_{t},W))+f_{2}(t(H_{1},1-W),\ldots,t(H_{t},1-W))\geq 0\]
_for every graphon \(W\)._
Proof Sketch.: Given a graphon \(W\), a \(W\)_-random graph_ of order \(n\) is the graph \(G_{n,W}\) with vertex set \(\{1,\ldots,n\}\) obtained by sampling \(n\) uniformly random points \(x_{1},\ldots,x_{n}\) of \([0,1]\) independently and then, for \(1\leq i\neq j\leq n\), adding an edge from \(i\) to \(j\) with probability \(W(x_{i},x_{j})\). By standard concentration inequalities, with probability \(1\), we have \(\lim_{n\to\infty}t(H,G_{n,W})=t(H,W)\) and \(\lim_{n\to\infty}t(H,\overline{G}_{n,W})=t(H,1-W)\) for every graph \(H\). The result follows by continuity of \(f_{1}\) and \(f_{2}\).
**Proposition 4.3**.: _The first graph in Figure 1 is common._
Proof.: Denote the first graph in Figure 1 by \(H\) and let \(J:=H\sqcup K_{2}\). Our goal is to apply Corollary 3.6 with \(F=C_{5}\), \(k=2\) and \(\ell=1\). By Theorem 1.4, \(C_{5}\) is strongly common. Note that (3.7) and (3.8) hold trivially. To verify (3.9), we use Proposition 4.1 to get
\[t(K_{2},G)t(H,G)=t(J,G)=\frac{\hom(J,G)}{n^{v(J)}}\geq\frac{\hom(C_{5},G)^{2}}{n^{10}}=t(C_{5},G)^{2}\]
for every graph \(G\), where \(n:=v(G)\). By Lemma 4.2, this implies that \(t(K_{2},W)t(H,W)\geq t(C_{5},W)^{2}\) for every graphon \(W\). Thus, (3.9) holds and so \(H\) is common by Corollary 3.6.
## 5 Gluing Templates and Generalized Trees
We open this section by describing a class of graphs obtained by gluing together subgraphs of a particular graph \(F\) in a tree-like manner. For a set \(X\), let \(2^{X}\) denote the collection of all subsets of \(X\).
**Definition 5.1**.: Let \(F\) be a graph, \(T\) be a tree and let \(\psi:V(T)\cup E(T)\to 2^{V(F)}\). We say that \((T,\psi)\) is an \(F\)_-gluing template_ if \(\psi(st)\subseteq\psi(s)\cap\psi(t)\) for every edge \(st\) of \(T\).
Next, we explain the way in which an \(F\)-gluing template gives rise to a graph. Given a graph \(F\) and a subset \(S\subseteq V(F)\), let \(F[S]\) be the subgraph of \(F\) induced by \(S\).
**Definition 5.2**.: Let \(F\) be a graph and \((T,\psi)\) be an \(F\)-gluing template. The _generalized \(F\)-tree_ corresponding to \((T,\psi)\) is the graph \(J(T,\psi)\) constructed in the following manner. Start by taking a copy \(F_{s}\) of \(F[\psi(s)]\) for each \(s\in V(T)\). Then, for each \(st\in E(T)\) and each \(v\in\psi(st)\), we identify the vertex of \(F_{s}\) corresponding to \(v\) with the vertex of \(F_{t}\) corresponding to \(v\).
Figure 2 gives an example of a generalized \(C_{5}\)-tree. In this example, \(J(T,\psi)\) is the first graph depicted in Figure 1, which was shown to be common in Section 4.
**Remark 5.3**.: Note that, for every \(F\)-gluing template \((T,\psi)\), the graph \(J=J(T,\psi)\) satisfies
\[v(J)=\sum_{t\in V(T)}|\psi(t)|-\sum_{st\in E(T)}|\psi(st)|.\]
We pause for a few basic examples.
**Example 5.4**.: For every graph \(F\) and tree \(T\), if \(\psi:V(T)\cup E(T)\to 2^{V(F)}\) maps each vertex of \(T\) to \(V(F)\) and each edge to \(\emptyset\), then \(J(T,\psi)\) is nothing more than the disjoint union of \(v(T)\) copies of \(F\).
Figure 2: An example of a generalized \(C_{5}\)-tree
**Example 5.5**.: If \((T,\psi)\) is a \(C_{m}\)-gluing template such that \(\psi(s)=V(C_{m})\) for every \(s\in V(T)\) and \(\psi(st)\) is either a singleton or an edge of \(C_{m}\) for all \(st\in E(T)\), then \(J(T,\psi)\) is a simple \(C_{m}\)-tree, as defined in the introduction.
The goal in the rest of this section is to describe a sufficient condition on an \(F\)-gluing template \((T,\psi)\) for the graph \(J=J(T,\psi)\) to satisfy
\[t(J,G)\geq t(F,G)^{e(J)/e(F)}\]
for every graph \(G\). As in the proof of Proposition 4.1, our approach is to construct a distribution on the set \(\operatorname{Hom}(J,G)\) of homomorphisms from \(J\) to \(G\) with entropy at least \(e(J)/e(F)\) times the entropy of a uniformly random homomorphism from \(F\) to \(G\). The inequality will then follow from an application of Lemma 3.11. This approach seems to have first appeared in the work of Kopparty and Rossman [17] and has been used in several papers related to Sidorenko and common graphs [1, 3, 7, 8, 12, 20, 27]. The next lemma describes a way in which one can use a distribution on homomorphisms from \(F\) to \(G\) to get a distribution on homomorphisms from \(J(T,\psi)\) to \(G\) of high entropy.
**Lemma 5.6**.: _Let \(F\) and \(G\) be graphs, let \((T,\psi)\) be an \(F\)-gluing template and let \(J=J(T,\psi)\). For every \(G\)-homomorphism supported random variable \((X_{v}:v\in V(F))\) there exists \((Y_{v}:v\in V(J))\) which is \(G\)-homomorphism supported such that_
\[\mathbb{H}(Y_{v}:v\in V(J))=\sum_{s\in V(T)}\mathbb{H}(X_{v}:v\in\psi(s))- \sum_{st\in E(T)}\mathbb{H}(X_{v}:v\in\psi(st)). \tag{5.7}\]
Proof.: For each \(s\in V(T)\), let \(V_{s}\) be the vertices of the copy \(F_{s}\) of \(F[\psi(s)]\) added during the construction of \(J(T,\psi)\). Let \(\gamma_{s}:V_{s}\to\psi(s)\) be the bijection which maps each vertex of \(V_{s}\) to the corresponding vertex of \(\psi(s)\). We prove, by induction on \(v(T)\), that there exists a \(G\)-homomorphism supported random variable \((Y_{v}:v\in V(J))\) satisfying (5.7) such that, additionally, for each \(s\in V(T)\), we have that \((Y_{v}:v\in V_{s})\) has the same distribution as \((X_{\gamma_{s}(v)}:v\in V_{s})\). In the case that \(v(T)=1\), simply set \((Y_{v}:v\in V_{s})=(X_{\gamma_{s}(v)}:v\in V_{s})\), where \(s\) is the unique vertex of \(T\).
Now, suppose that \(v(T)\geq 2\), let \(\ell\) be a leaf of \(T\) and let \(w\) be the unique neighbour of \(\ell\). Let \(T^{\prime}=T\setminus\{\ell\}\) and let \(\psi^{\prime}\) be the restriction of \(\psi\) to \(V(T^{\prime})\cup E(T^{\prime})\). Let \(J^{\prime}=J(T^{\prime},\psi^{\prime})\). By induction, we can construct a \(G\)-homomorphism supported \((Y_{v}:v\in V(J^{\prime}))\) of entropy
\[\sum_{s\in V(T^{\prime})}\mathbb{H}(X_{v}:v\in\psi^{\prime}(s))-\sum_{st\in E (T^{\prime})}\mathbb{H}(X_{v}:v\in\psi^{\prime}(st))\]
such that \((Y_{v}:v\in V_{s})\) has the same distribution as \((X_{\gamma_{s}(v)}:v\in V_{s})\) for all \(s\in V(T^{\prime})\). To construct the desired random variable \((Y_{v}:v\in V(J))\), we apply Lemma 3.17. Let \(A_{1}=(Y_{v}:v\in V(J^{\prime})\setminus(V_{w}\cap V_{\ell}))\), let \(A_{2}=(Y_{v}:v\in V_{w}\cap V_{\ell})\), let \(A_{2}^{\prime}=(X_{\gamma_{w}(v)}:v\in V_{w}\cap V_{\ell})\) and let \(A_{3}=(X_{\gamma_{\ell}(v)}:v\in V_{\ell}\setminus V_{w})\). Note that \(A_{2}\) and \(A_{2}^{\prime}\) are identically distributed by the inductive hypothesis. The existence of the desired distribution now follows from Lemma 3.17.
We would like to describe conditions under which the expression in (5.7) can be bounded below by \((e(J)/e(F))\mathbb{H}(X_{v}:v\in V(F))\). As in the proof of Proposition 4.1, this will involve "adding conditioning" and doing some cancellation. The key is to use the following consequence of Lemmas 3.14 and 3.16.
**Lemma 5.8**.: _Let \(R_{1},R_{2}\) and \(R_{3}\) be finite disjoint sets. For every discrete random variable \((X_{v}:v\in R_{1}\cup R_{2}\cup R_{3})\),_
\[\mathbb{H}(X_{v}:v\in R_{1}\cup R_{2}\cup R_{3})-\mathbb{H}(X_{v}:v\in R_{2} \cup R_{3})-\mathbb{H}(X_{v}:v\in R_{1}\cup R_{2})+\mathbb{H}(X_{v}:v\in R_{2})\]
_is non-positive._
Proof.: By Lemma 3.16,
\[\mathbb{H}(X_{v}:v\in R_{1}\mid X_{v}:v\in R_{2}\cup R_{3})\leq\mathbb{H}(X_{v }:v\in R_{1}\mid X_{v}:v\in R_{2}).\]
By Lemma 3.14,
\[\mathbb{H}(X_{v}:v\in R_{1}\mid X_{v}:v\in R_{2})=\mathbb{H}(X_{v}:v\in R_{1} \cup R_{2})-\mathbb{H}(X_{v}:v\in R_{2})\]
and
\[\mathbb{H}(X_{v}:v\in R_{1}\mid X_{v}:v\in R_{2}\cup R_{3})=\mathbb{H}(X_{v}:v \in R_{1}\cup R_{2}\cup R_{3})-\mathbb{H}(X_{v}:v\in R_{2}\cup R_{3}).\]
Substituting these equalities into the inequality above completes the proof.
**Remark 5.9**.: Note that, since \(\mathbb{H}(X_{v}:v\in\emptyset)=0\), applying Lemma 5.8 in the case \(R_{2}=\emptyset\) yields
\[\mathbb{H}(X_{v}:v\in R_{1}\cup R_{3})-\mathbb{H}(X_{v}:v\in R_{3})-\mathbb{H }(X_{v}:v\in R_{1})\leq 0\]
for any two disjoint sets \(R_{1}\) and \(R_{3}\).
Next, we observe that, if \(f\) is a uniformly random homomorphism from a graph \(F\) to a graph \(G\), then, for any two subsets \(S_{1}\) and \(S_{2}\) of \(V(F)\) which are "symmetric" to one another, the entropy of \((f(v):v\in S_{1})\) is the same as the entropy of \((f(v):v\in S_{2})\); one may recall that this idea was used in the proof of Proposition 4.1. To formalize this, recall that an _automorphism_ of \(F\) is an isomorphism from \(F\) to itself. Let \(\operatorname{Aut}(F)\) denote the set of all automorphisms of \(F\). Given a graph \(F\), let \(\sim_{F}\) be the equivalence relation on \(2^{V(F)}\) such that \(S_{1}\sim_{F}S_{2}\) if there exists \(\varphi\in\operatorname{Aut}(F)\) such that \(\varphi(S_{1})=S_{2}\). Let \(\mathcal{C}_{F}\) be a subset of \(2^{V(F)}\) containing one element from each equivalence class of \(\sim_{F}\).
Consider the vector space \(\mathbb{R}^{\mathcal{C}_{F}\setminus\{\emptyset\}}\) spanned by vectors \(\vec{e}_{S}\) for \(S\in\mathcal{C}_{F}\setminus\{\emptyset\}\). Let \(\vec{e}_{\emptyset}\) be the zero vector and, for \(S^{\prime}\subseteq V(F)\), let \(\vec{e}_{S^{\prime}}=\vec{e}_{S}\) where \(S\in\mathcal{C}_{F}\) and \(S^{\prime}\sim_{F}S\). Given an \(F\)-gluing template \((T,\psi)\), define
\[\vec{z}_{T,\psi}:=\sum_{s\in V(T)}\vec{e}_{\psi(s)}-\sum_{st\in E(T)}\vec{e}_ {\psi(st)}.\]
For any pairwise disjoint subsets \(R_{1},R_{2}\) and \(R_{3}\) of \(V(F)\), define
\[\vec{x}_{R_{1},R_{2},R_{3}}:=\vec{e}_{R_{1}\cup R_{2}\cup R_{3}}-\vec{e}_{R_{2} \cup R_{3}}-\vec{e}_{R_{1}\cup R_{2}}+\vec{e}_{R_{2}}.\]
In particular, if \(R_{2}=\emptyset\), then
\[\vec{x}_{R_{1},\emptyset,R_{3}}:=\vec{e}_{R_{1}\cup R_{3}}-\vec{e}_{R_{3}}- \vec{e}_{R_{1}}.\]
One should observe that the coefficients in the expression for \(\vec{z}_{T,\psi}\) mirror those on the right side of (5.7) while the coefficients in the expression for \(\vec{x}_{R_{1},R_{2},R_{3}}\) mirror those in the expression in Lemma 5.8. We can now reduce the problem of using Lemma 5.8 to bound (5.7) below by \((e(J)/e(F))\mathbb{H}(X_{v}:v\in V(F))\) to solving a specific linear program.
**Definition 5.10**.: Let \(F\) be a graph, \((T,\psi)\) be an \(F\)-gluing template and \(J=J(T,\psi)\). We say that \((T,\psi)\) is _good_ if \(e(J)>0\) and the vector
\[(e(J)/e(F))\vec{e}_{V(F)}-\vec{z}_{T,\psi}\]
is in the convex cone generated by the vectors \(\vec{x}_{R_{1},R_{2},R_{3}}\) over all pairwise disjoint \(R_{1},R_{2},R_{3}\).
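Membership in this convex cone is a linear feasibility problem, so goodness of a given template can be checked mechanically. Below is a hedged sketch of ours: `in_cone` and its interface are our own, and assembling the matrix `X` for a concrete \(F\) (enumerating the pairwise disjoint triples \(R_1,R_2,R_3\) up to \(\operatorname{Aut}(F)\)) is omitted.

```python
import numpy as np
from scipy.optimize import linprog

def in_cone(X, b, tol=1e-9):
    """Is b expressible as X @ c with c >= 0?  Columns of X are the vectors
    x_{R1,R2,R3}; b = (e(J)/e(F)) * e_{V(F)} - z_{T,psi}."""
    res = linprog(c=np.zeros(X.shape[1]), A_eq=X, b_eq=b,
                  bounds=[(0, None)] * X.shape[1], method='highs')
    return res.status == 0 and np.allclose(X @ res.x, b, atol=tol)
```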
Next, we relate the number of edges and vertices of \(J(T,\psi)\) to that of \(F\) when \((T,\psi)\) is a good \(F\)-gluing template.
**Lemma 5.11**.: _If \((T,\psi)\) is a good \(F\)-gluing template and \(J=J(T,\psi)\), then_
\[e(J)/v(J)=e(F)/v(F).\]
Proof.: Given an \(F\)-gluing template \((T,\psi)\), define
\[\vec{\zeta}_{T,\psi}:=\sum_{s\in V(T)}\lvert\psi(s)\rvert\cdot\vec{e}_{\psi(s) }-\sum_{st\in E(T)}\lvert\psi(st)\rvert\cdot\vec{e}_{\psi(st)}.\]
For pairwise disjoint subsets \(R_{1},R_{2}\) and \(R_{3}\) of \(V(F)\), define
\[\vec{\xi}_{R_{1},R_{2},R_{3}}:=\lvert R_{1}\cup R_{2}\cup R_{3}\rvert\cdot \vec{e}_{R_{1}\cup R_{2}\cup R_{3}}-\lvert R_{2}\cup R_{3}\rvert\cdot\vec{e} _{R_{2}\cup R_{3}}-\lvert R_{1}\cup R_{2}\rvert\cdot\vec{e}_{R_{1}\cup R_{2}}+ \lvert R_{2}\rvert\cdot\vec{e}_{R_{2}}.\]
Note that, for every set \(S\subseteq V(F)\), the coefficient of \(\vec{e}_{S}\) in \(\vec{\zeta}_{T,\psi}\) is equal to \(\lvert S\rvert\) times the coefficient of \(\vec{e}_{S}\) in \(\vec{z}_{T,\psi}\). Likewise, the coefficient of \(\vec{e}_{S}\) in \(\vec{\xi}_{R_{1},R_{2},R_{3}}\) is \(\lvert S\rvert\) times the coefficient of \(\vec{e}_{S}\) in \(\vec{x}_{R_{1},R_{2},R_{3}}\).
Now, since \((T,\psi)\) is good, we can write
\[\vec{z}_{T,\psi}+\sum_{R_{1},R_{2},R_{3}}c_{R_{1},R_{2},R_{3}}\cdot\vec{x}_{R_ {1},R_{2},R_{3}}=(e(J)/e(F))\cdot\vec{e}_{V(F)} \tag{5.12}\]
for non-negative constants \(c_{R_{1},R_{2},R_{3}}\). Now, consider the expression
\[\vec{\zeta}_{T,\psi}+\sum_{R_{1},R_{2},R_{3}}c_{R_{1},R_{2},R_{3}}\cdot\vec{ \xi}_{R_{1},R_{2},R_{3}}.\]
As discussed in the previous paragraph, for each \(S\subseteq V(F)\), the coefficient \(\vec{e}_{S}\) in this expression is equal to \(|S|\) times the coefficient of \(\vec{e}_{S}\) in the expression on the left side of (5.12). Therefore, by (5.12),
\[\vec{\zeta}_{T,\psi}+\sum_{R_{1},R_{2},R_{3}}c_{R_{1},R_{2},R_{3}}\cdot\vec{ \xi}_{R_{1},R_{2},R_{3}}=(e(J)/e(F))v(F)\cdot\vec{e}_{V(F)}. \tag{5.13}\]
Let \(\vec{j}\) denote the all-ones vector in \(\mathbb{R}^{\mathcal{C}_{F}\setminus\{\emptyset\}}\) and \(\langle\cdot,\cdot\rangle\) denote the standard inner product. By Remark 5.3,
\[v(J)=\left\langle\vec{\zeta}_{T,\psi},\vec{j}\right\rangle\]
which, by (5.13), is equal to
\[\left\langle(e(J)/e(F))v(F)\cdot\vec{e}_{V(F)}-\sum_{R_{1},R_{2},R_{3}}c_{R_{ 1},R_{2},R_{3}}\cdot\vec{\xi}_{R_{1},R_{2},R_{3}},\vec{j}\right\rangle\]
\[=(e(J)/e(F))v(F)-\sum_{R_{1},R_{2},R_{3}}c_{R_{1},R_{2},R_{3}}\cdot\left(|R_{ 1}\cup R_{2}\cup R_{3}|-|R_{2}\cup R_{3}|-|R_{1}\cup R_{2}|+|R_{2}|\right).\]
This final expression is simply equal to \((e(J)/e(F))v(F)\) since the summation is over pairwise disjoint sets \(R_{1},R_{2},R_{3}\). Thus, \(v(J)=(e(J)/e(F))v(F)\) and the proof is complete.
The key lemma of this section is as follows.
**Lemma 5.14**.: _If \((T,\psi)\) is a good \(F\)-gluing template and \(J=J(T,\psi)\), then_
\[t(J,G)\geq t(F,G)^{e(J)/e(F)}\]
_for every graph \(G\)._
Proof.: If there are no homomorphisms from \(F\) to \(G\), then \(t(F,G)=0\) and the inequality holds trivially. Otherwise, let \(f\) be a uniformly random homomorphism from \(F\) to \(G\) and, for each \(v\in V(F)\), let \(X_{v}:=f(v)\). Note that, by the uniformity of \(f\), we have that \(\mathbb{H}(X_{v}:v\in S)=\mathbb{H}(X_{v}:v\in S^{\prime})\) whenever \(S\sim_{F}S^{\prime}\); this will be applied many times in the rest of the proof. Let \((Y_{v}:v\in V(J))\) be a \(G\)-homomorphism supported random variable constructed from \((X_{v}:v\in V(F))\) as in Lemma 5.6. By (5.7),
\[\mathbb{H}(Y_{v}:v\in V(J))=\sum_{s\in V(T)}\mathbb{H}(X_{v}:v\in\psi(s))- \sum_{st\in E(T)}\mathbb{H}(X_{v}:v\in\psi(st)). \tag{5.15}\]
Since \((T,\psi)\) is good, we can choose \(c_{R_{1},R_{2},R_{3}}\geq 0\) for each triple \(R_{1},R_{2},R_{3}\) of pairwise disjoint subsets of \(V(F)\) so that
\[(e(J)/e(F))\vec{e}_{V(F)}=\sum_{s\in V(T)}\vec{e}_{\psi(s)}-\sum_{st\in E(T)} \vec{e}_{\psi(st)}+\sum_{R_{1},R_{2},R_{3}}c_{R_{1},R_{2},R_{3}}\cdot\vec{x}_ {R_{1},R_{2},R_{3}}. \tag{5.16}\]
By Lemma 5.8 and the fact that \(c_{R_{1},R_{2},R_{3}}\geq 0\) for all \(R_{1},R_{2},R_{3}\), the linear combination consisting of the sum over all \(R_{1},R_{2},R_{3}\) of \(c_{R_{1},R_{2},R_{3}}\) times
\[\mathbb{H}(X_{v}:v\in R_{1}\cup R_{2}\cup R_{3})-\mathbb{H}(X_{v}:v\in R_{2} \cup R_{3})-\mathbb{H}(X_{v}:v\in R_{1}\cup R_{2})+\mathbb{H}(X_{v}:v\in R_{2})\]
is non-positive. By (5.16), adding this linear combination to the right side of (5.15) yields \((e(J)/e(F))\mathbb{H}(X_{v}:v\in V(F))\). So,
\[\mathbb{H}(Y_{v}:v\in V(J))\geq(e(J)/e(F))\mathbb{H}(X_{v}:v\in V(F)).\]
Combining this with two applications of Lemma 3.11 and the fact that \(f\) is uniform, we get
\[\log_{2}(\hom(J,G))\geq\mathbb{H}(Y_{v}:v\in V(J))\]
\[\geq(e(J)/e(F))\mathbb{H}(X_{v}:v\in V(F))=(e(J)/e(F))\log_{2}(\hom(F,G)).\]
Therefore,
\[t(J,G)=\frac{\hom(J,G)}{v(G)^{v(J)}}\geq\frac{\hom(F,G)^{e(J)/e(F)}}{v(G)^{v( J)}}=\left(\frac{\hom(F,G)}{v(G)^{v(J)(e(F)/e(J))}}\right)^{e(J)/e(F)}.\]
By Lemma 5.11, we have \(v(J)(e(F)/e(J))=v(F)\) and so the right side is precisely equal to \(t(F,G)^{e(J)/e(F)}\). This completes the proof.
**Corollary 5.17**.: _Let \(F\) be a graph and \(\ell\geq 0\). If \((T,\psi)\) is a good \(F\)-gluing template and \(H\) is obtained from \(J(T,\psi)\) by deleting \(\ell\) two-vertex components and an arbitrary number of one-vertex components, then_
\[t(H,G)t(K_{2},G)^{\ell}\geq t(F,G)^{(e(H)+\ell)/e(F)}\]
_for every graph \(G\)._
Proof.: This follows from Lemma 5.14 and the facts that \(e(J)=e(H)+\ell\) and \(t(H,G)t(K_{2},G)^{\ell}=t(J,G)\) for every graph \(G\).
## 6 Generalized Trees of Odd Cycles
In this section, we apply the results built up so far to show that certain pairs of generalized \(C_{m}\)-trees are \((p_{1},p_{2})\)-common for some \(p_{1},p_{2}\in(0,1)\) such that \(p_{1}+p_{2}=1\). The following is our most general result of this type, from which Theorems 1.6 and 1.9 will be derived.
**Theorem 6.1**.: _Let \(m\geq 3\) be odd and, for \(i\in\{1,2\}\), let \((T_{i},\psi_{i})\) be a good \(C_{m}\)-gluing template, let \(J_{i}=J(T_{i},\psi_{i})\) and let \(H_{i}\) be obtained from \(J_{i}\) by deleting \(\ell_{i}\geq 0\) two-vertex components and an arbitrary number of one-vertex components. If \(e(H_{i})\geq m\) for \(i\in\{1,2\}\) and \(p_{1},p_{2}\in(0,1)\) such that \(p_{1}+p_{2}=1\) and_
\[\frac{e(H_{1})+\ell_{1}}{e(H_{1})p_{1}^{m-1}}=\frac{e(H_{2})+\ell_{2}}{e(H_{2 })p_{2}^{m-1}},\]
_then \((H_{1},H_{2})\) is \((p_{1},p_{2})\)-common._
Proof.: For \(i\in\{1,2\}\), define
\[k_{i}:=\frac{e(H_{i})+\ell_{i}}{m}=\frac{e(J_{i})}{m}.\]
We verify the hypotheses of Lemma 3.1 with \(F=C_{m}\). The fact that \(C_{m}\) is strongly common follows from Theorem 2.8. The conditions (3.2) and (3.4) hold by the hypotheses of the theorem and (3.3) holds by definition of \(k_{1}\) and \(k_{2}\). By Corollary 5.17, (3.5) holds as well. Thus, the result follows by Lemma 3.1.
Our goal in the rest of this section is to derive Theorems 1.6 and 1.9 from Theorem 6.1. We restate these results for convenience.
**Theorem 1.6**.: _The three graphs depicted in Figure 1 are common._
Proof.: The first graph in Figure 1 was shown to be common in Proposition 4.3. So, we focus on the second and third graphs, which we denote by \(H_{2}\) and \(H_{3}\), respectively. We describe two good \(C_{5}\)-gluing templates \((T_{2},\psi_{2})\) and \((T_{3},\psi_{3})\) such that, for \(i\in\{2,3\}\), the graph obtained from \(J(T_{i},\psi_{i})\) by deleting all components on at most two vertices is precisely \(H_{i}\). By Theorem 6.1, this is sufficient for showing that \(H_{i}\) is common for \(i\in\{2,3\}\).
First, let \(T_{2}=P_{6}\) where the vertices of \(P_{6}\) are labelled \(1,2,\ldots,6\) in the order that they come on the path. Let
\[\psi_{2}(1)=\psi_{2}(2)=\psi_{2}(4)=\{1,2,3,4,5\},\quad\psi_{2}(3)=\{1,2,3\}, \quad\psi_{2}(5)=\{1,2\},\quad\psi_{2}(6)=\{1\}.\]
Also, let
\[\psi_{2}(12)=\{2,3,4,5\},\quad\psi_{2}(23)=\{1\},\quad\psi_{2}(34)=\{3\},\quad \psi_{2}(45)=\psi_{2}(56)=\emptyset.\]
We have that \(J(T_{2},\psi_{2})=H_{2}\sqcup K_{2}\sqcup K_{1}\). To see that \((T_{2},\psi_{2})\) is good, note that
\[\vec{z}_{(T_{2},\psi_{2})}=3\vec{e}_{\{1,2,3,4,5\}}+\vec{e}_{\{1,2,3\}}+\vec{e }_{\{1,2\}}+\vec{e}_{\{1\}}-\vec{e}_{\{1,2,3,4\}}-\vec{e}_{\{1\}}-\vec{e}_{\{ 3\}}\]
which, since \(\vec{e}_{\{1\}}=\vec{e}_{\{3\}}\), is equal to
\[3\vec{e}_{\{1,2,3,4,5\}}+\vec{e}_{\{1,2,3\}}+\vec{e}_{\{1,2\}}-\vec{e}_{\{1,2,3,4\}}-\vec{e}_{\{1\}}.\]
Starting with \(\vec{z}_{(T_{2},\psi_{2})}\) and adding the vectors
\[\vec{x}_{\{1\},\{2,3\},\{4\}}=\vec{e}_{\{1,2,3,4\}}-\vec{e}_{\{2,3,4\}}-\vec{e }_{\{1,2,3\}}+\vec{e}_{\{2,3\}}=\vec{e}_{\{1,2,3,4\}}-2\vec{e}_{\{1,2,3\}}+ \vec{e}_{\{1,2\}}\]
and
\[\vec{x}_{\{1\},\{2\},\{3\}}=\vec{e}_{\{1,2,3\}}-\vec{e}_{\{2,3\}}-\vec{e}_{\{1,2\}}+\vec{e}_{\{2\}}=\vec{e}_{\{1,2,3\}}-2\vec{e}_{\{1,2\}}+\vec{e}_{\{1\}}\]
yields \(3\vec{e}_{\{1,2,3,4,5\}}\). Therefore, \((T_{2},\psi_{2})\) is good. By Theorem 6.1, we get that \(H_{2}\) is common.
Next, let \(T_{3}=P_{5}\) where the vertices are labelled \(1,\ldots,5\) in the order that they come on the path. Let
\[\psi_{3}(1)=\psi_{3}(2)=\psi_{3}(3)=\{1,2,3,4,5\},\quad\psi_{3}(4)=\psi_{3}(5)= \{1,2\}.\]
Also, let
\[\psi_{3}(12)=\{5\},\quad\psi_{3}(23)=\{2,3,4\},\quad\psi_{3}(34)=\psi_{3}(45)=\emptyset.\]
We have that \(J(T_{3},\psi_{3})=H_{3}\sqcup K_{2}\sqcup K_{2}\). To see that \((T_{3},\psi_{3})\) is good, note that
\[\vec{z}_{(T_{3},\psi_{3})}=3\vec{e}_{\{1,2,3,4,5\}}+2\vec{e}_{\{1,2\}}-\vec{e}_ {\{5\}}-\vec{e}_{\{2,3,4\}}.\]
Starting with \(\vec{z}_{(T_{3},\psi_{3})}\) and adding the vector
\[\vec{x}_{\{1\},\{2\},\{3\}}=\vec{e}_{\{1,2,3\}}-\vec{e}_{\{2,3\}}-\vec{e}_{\{1,2\}}+\vec{e}_{\{2\}}=\vec{e}_{\{2,3,4\}}-2\vec{e}_{\{1,2\}}+\vec{e}_{\{5\}}\]
yields \(3\vec{e}_{\{1,2,3,4,5\}}\). Therefore, \((T_{3},\psi_{3})\) is good and so \(H_{3}\) is common by Theorem 6.1.
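The bookkeeping with \(\vec{e}_S\)-vectors above can also be done mechanically by reducing each subset of \(V(C_5)\) to a canonical representative under the dihedral automorphism group. A sketch of ours re-checking the computation for \((T_3,\psi_3)\), with vertices indexed \(0,\ldots,4\) rather than \(1,\ldots,5\) and all function names our own:

```python
def canon(S):
    """Canonical representative of S under the maps v -> ±v + r (mod 5) of C5."""
    return min(tuple(sorted((s * v + r) % 5 for v in S))
               for r in range(5) for s in (1, -1))

def e(*S):
    return {canon(S): 1} if S else {}

def add(*terms):  # terms are (coefficient, vector) pairs
    out = {}
    for c, vec in terms:
        for k, w in vec.items():
            out[k] = out.get(k, 0) + c * w
    return {k: w for k, w in out.items() if w != 0}

z = add((3, e(0, 1, 2, 3, 4)), (2, e(0, 1)), (-1, e(4)), (-1, e(1, 2, 3)))
x = add((1, e(0, 1, 2)), (-1, e(1, 2)), (-1, e(0, 1)), (1, e(1)))
print(add((1, z), (1, x)) == add((3, e(0, 1, 2, 3, 4))))  # expected: True
```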
Next, we prove Theorem 1.9. The "only if" part of the theorem uses the following result of [2]. Given a graph \(H\) and an integer \(m\geq 3\), let \(c_{m}(H)\) be the number of unlabelled cycles of length \(m\) in \(H\). The _girth_ of \(H\) is \(g(H):=\min\{m:c_{m}(H)>0\}\).
**Theorem 6.2** (Behague, Morrison and Noel [2]).: _Let \(m\geq 3\) be odd and suppose that \(g(H_{1})=g(H_{2})=m\). If \((H_{1},H_{2})\) is \((p_{1},p_{2})\)-common, then_
\[\frac{c_{m}(H_{1})}{e(H_{1})p_{1}^{m-1}}=\frac{c_{m}(H_{2})}{e(H_{2})p_{2}^{m- 1}}.\]
**Theorem 1.9**.: _Let \(m\geq 3\) be odd, let \(H_{1}\) and \(H_{2}\) be simple \(C_{m}\)-trees and let \(p_{1},p_{2}\in(0,1)\) such that \(p_{1}+p_{2}=1\). Then \((H_{1},H_{2})\) is \((p_{1},p_{2})\)-common if and only if_
\[\frac{e(H_{1})-v(H_{1})+1}{e(H_{1})p_{1}^{m-1}}=\frac{e(H_{2})-v(H_{2})+1}{e( H_{2})p_{2}^{m-1}}.\]
Proof.: Let \(H_{1}\) and \(H_{2}\) be simple \(C_{m}\)-trees. We label the vertices of \(C_{m}\) by \(1,\ldots,m\) in the order that they appear on the cycle. By definition, a graph \(H\) is a simple \(C_{m}\)-tree if and only if \(H=J(T^{\prime},\psi^{\prime})\) for some \(C_{m}\)-gluing template \((T^{\prime},\psi^{\prime})\) such that \(\psi^{\prime}(s)=V(C_{m})\) for all \(s\in V(T^{\prime})\) and \(\psi^{\prime}(st)\) is either a singleton or a set consisting of a pair of adjacent vertices of \(C_{m}\) for all \(st\in E(T^{\prime})\). For \(i\in\{1,2\}\), let \((T^{\prime}_{i},\psi^{\prime}_{i})\) be such a \(C_{m}\)-gluing template for \(H_{i}\) and let \(\ell_{i}\) be the number of \(st\in E(T^{\prime}_{i})\) such that \(|\psi^{\prime}_{i}(st)|=2\). Then, for each \(i\in\{1,2\}\),
\[g(H_{i})=m,\]
\[v(H_{i})=m\cdot v(T^{\prime}_{i})-e(T^{\prime}_{i})-\ell_{i},\]
\[e(H_{i})=m\cdot v(T^{\prime}_{i})-\ell_{i}\]
and
\[c_{m}(H_{i})=v(T^{\prime}_{i}).\]
In particular,
\[e(H_{i})-v(H_{i})+1=e(T^{\prime}_{i})+1=v(T^{\prime}_{i})=c_{m}(H_{i})=\frac{e( H_{i})+\ell_{i}}{m}. \tag{6.3}\]
If \((H_{1},H_{2})\) is \((p_{1},p_{2})\)-common, then, by Theorem 6.2 and (6.3),
\[\frac{e(H_{1})-v(H_{1})+1}{e(H_{1})p_{1}^{m-1}}=\frac{c_{m}(H_{1})}{e(H_{1})p_{1 }^{m-1}}=\frac{c_{m}(H_{2})}{e(H_{2})p_{2}^{m-1}}=\frac{e(H_{2})-v(H_{2})+1}{e(H _{2})p_{2}^{m-1}}.\]
This proves the "only if" direction.
To prove the "if" direction, let \((T_{i},\psi_{i})\) be a \(C_{m}\)-gluing template obtained from \((T^{\prime}_{i},\psi^{\prime}_{i})\) as follows. Let \(T_{i}\) be a tree constructed from \(T^{\prime}_{i}\) by adding \(e(T^{\prime}_{i})\) vertices in an arbitrary manner. Let \(\psi_{i}\) agree with \(\psi^{\prime}_{i}\) on all vertices and edges of \(T^{\prime}_{i}\) and let it map \(\ell_{i}\) of the new vertices to \(\{1,2\}\), map the rest of the new vertices to \(\{1\}\) and map every edge incident to at least one of the new vertices to \(\emptyset\). It is easy to see that \(J_{i}:=J(T_{i},\psi_{i})\) is the disjoint union of \(H_{i}\) with \(\ell_{i}\) copies of \(K_{2}\) and \(e(T^{\prime}_{i})-\ell_{i}\) copies of \(K_{1}\). As \(\vec{z}_{T_{i},\psi_{i}}=(e(J_{i})/m)\vec{e}_{V(C_{m})}\), we see that \((T_{i},\psi_{i})\) is good. By Theorem 6.1, we have that \((H_{1},H_{2})\) is \((p_{1},p_{2})\)-common provided that
\[\frac{e(H_{1})+\ell_{1}}{me(H_{1})p_{1}^{m-1}}=\frac{e(H_{2})+\ell_{2}}{me(H_ {2})p_{2}^{m-1}}.\]
By (6.3), the above equality is equivalent to
\[\frac{e(H_{1})-v(H_{1})+1}{e(H_{1})p_{1}^{m-1}}=\frac{e(H_{2})-v(H_{2})+1}{e(H _{2})p_{2}^{m-1}}\]
and the proof is complete.
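Given two concrete simple \(C_m\)-trees, the unique \(p_1\) admitted by Theorem 1.9 solves a strictly monotone equation in \(p_1\) and can be found by bisection. A small sketch of ours (the example graphs are our own choices):

```python
def p1_for(m, e1, v1, e2, v2, iters=200):
    """Solve (e1-v1+1)/(e1*p^(m-1)) = (e2-v2+1)/(e2*(1-p)^(m-1)) for p in (0,1)."""
    c1, c2 = (e1 - v1 + 1) / e1, (e2 - v2 + 1) / e2
    f = lambda p: c1 * (1 - p) ** (m - 1) - c2 * p ** (m - 1)  # decreasing in p
    lo, hi = 0.0, 1.0
    for _ in range(iters):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
    return (lo + hi) / 2

# H1 = C5 (e=5, v=5); H2 = two 5-cycles sharing an edge (e=9, v=8):
print(p1_for(5, 5, 5, 9, 8))
```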
## 7 An Uncommon Graph in a Common Pair
Next, we present the proof of Theorem 1.10, restated below for convenience. The proof is inspired by an approach from a recent paper on disconnected common graphs [21]. We will apply the following classical "supersaturation version" of Mantel's Theorem on the edge density of triangle-free graphs.
**Theorem 7.1** (Goodman's Supersaturation Bound [11]).: _For every graphon \(W\),_
\[t(K_{3},W)\geq t(K_{2},W)(2t(K_{2},W)-1).\]
Recall that \(D\) denotes the graph obtained from \(K_{4}\) by deleting one edge.
**Theorem 1.10**.: \((D,K_{3}\sqcup K_{2})\) _is \((p,1-p)\)-common for \(p=\frac{8-2\sqrt{10}}{3}\)._
Proof.: Let \(W\) be a graphon. Our goal is to show that
\[\frac{t(D,W)}{5p^{4}}+\frac{t(K_{3}\sqcup K_{2},1-W)}{4(1-p)^{3}}\geq\frac{p}{ 5}+\frac{1-p}{4}. \tag{7.2}\]
If \(t(K_{2},W)=0\), then the left side of (7.2) is \(\frac{1}{4(1-p)^{3}}\), which is greater than \(\frac{p}{5}+\frac{1-p}{4}\). Likewise, if \(t(K_{2},W)=1\), then the left side of (7.2) is \(\frac{1}{5p^{4}}\), which is also greater than \(\frac{p}{5}+\frac{1-p}{4}\) for the given value of \(p\). So, we can assume that
\[0<t(K_{2},W)=1-t(K_{2},1-W)<1. \tag{7.3}\]
Define
\[y:=t(K_{3},W)-t(K_{2},W)^{3}.\]
Then, by definition of \(y\),
\[t(K_{3},W)=t(K_{2},W)^{3}+y. \tag{7.4}\]
By Theorem 2.4, we have
\[t(K_{3},1-W)\geq t(K_{2},W)^{3}+t(K_{2},1-W)^{3}-t(K_{3},W)=t(K_{2},1-W)^{3}-y. \tag{7.5}\]
Also, by Theorem 7.1 and the definition of \(y\),
\[y\geq t(K_{2},W)(2t(K_{2},W)-1)-t(K_{2},W)^{3}. \tag{7.6}\]
Note that \(D\) is a simple \(K_{3}\)-tree. So, by Lemma 5.14 (as used in the proof of Theorem 1.9) and (7.4),
\[t(K_{2},W)t(D,W)\geq t(K_{3},W)^{2}=(t(K_{2},W)^{3}+y)^{2}\]
which implies that
\[t(D,W)\geq\frac{(t(K_{2},W)^{3}+y)^{2}}{t(K_{2},W)}.\]
Note that, by (7.3), the above inequality does not involve a division by zero. By (7.5),
\[t(K_{3}\sqcup K_{2},1-W)=t(K_{3},1-W)t(K_{2},1-W)\geq(t(K_{2},1-W)^{3}-y)t(K_{ 2},1-W).\]
Now, define \(x:=t(K_{2},W)-p\) so that \(t(K_{2},W)=p+x\) and \(t(K_{2},1-W)=1-p-x\). By (7.3), we have \(-p<x<1-p\). By the lower bounds on \(t(D,W)\) and \(t(K_{3}\sqcup K_{2},1-W)\) derived above, we can bound the left side of (7.2) below by the minimum of
\[f(x,y):=\frac{((p+x)^{3}+y)^{2}}{5p^{4}(p+x)}+\frac{(1-p-x)((1-p-x)^{3}-y)}{4( 1-p)^{3}}\]
over all real numbers \(x\) and \(y\) such that \(-p<x<1-p\) and \(y\geq y_{0}(x)\) where \(y_{0}(x):=(p+x)(2(p+x)-1)-(p+x)^{3}\); the constraint \(y\geq y_{0}(x)\) comes from (7.6).
First, we analyze the partial derivative of \(f\) with respect to \(y\). We have
\[\frac{\partial f}{\partial y}=\frac{2(p+x)^{2}}{5p^{4}}+\frac{2y}{5p^{4}(p+x )}-\frac{1-p-x}{4(1-p)^{3}}.\]
For each fixed \(x\) in the range \(-p<x<1-p\), define
\[y_{1}(x):=\frac{5(1-p-x)(p+x)p^{4}}{8(1-p)^{3}}-(p+x)^{3}\]
and note that \(\frac{\partial f}{\partial y}(x,y_{1}(x))=0\) for all \(x\). Also, we have
\[\frac{\partial^{2}f}{\partial y^{2}}=\frac{2}{5p^{4}(p+x)}\]
which is positive. Thus, for each fixed \(x\) in the range \(-p<x<1-p\), the minimum of \(f(x,y)\) over \(y\geq y_{0}(x)\) is attained at \(y=y_{1}(x)\) if \(y_{1}(x)\geq y_{0}(x)\) or at \(y=y_{0}(x)\) otherwise.
Our next goal is to bound the function \(g_{0}(x):=f(x,y_{0}(x))\) from below for all \(x\) such that \(-p<x<1-p\). We have
\[g_{0}(x)=\frac{((p+x)^{3}+y_{0}(x))^{2}}{5p^{4}(p+x)}+\frac{(1-p-x)((1-p-x)^{3} -y_{0}(x))}{4(1-p)^{3}}\]
\[=\frac{(p+x)^{2}(2(p+x)-1)^{2}}{5p^{4}(p+x)}+\frac{(1-p-x)((1-p-x)^{3}+(p+x)^{3} -(p+x)(2(p+x)-1))}{4(1-p)^{3}}\]
\[=\left(\frac{1065+336\sqrt{10}}{400}\right)x^{3}+\left(\frac{379+118\sqrt{10} }{80}\right)x^{2}-\left(\frac{135+72\sqrt{10}}{320}\right)x+\frac{52-3\sqrt{1 0}}{160}.\]
Thus, the minimum of \(g_{0}(x)\) over all \(-p<x<1-p\) is approximately \(0.23263\), attained at \(x\approx 0.057472\). So, \(g_{0}(x)>\frac{p}{5}+\frac{1-p}{4}\) for all relevant choices of \(x\).
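The claimed minimum of \(g_0\) can be cross-checked numerically; a quick sketch of ours, evaluating the cubic above on a fine grid:

```python
import numpy as np

s10 = np.sqrt(10)
p = (8 - 2 * s10) / 3

def g0(x):
    return ((1065 + 336 * s10) / 400 * x**3 + (379 + 118 * s10) / 80 * x**2
            - (135 + 72 * s10) / 320 * x + (52 - 3 * s10) / 160)

xs = np.linspace(-p + 1e-9, 1 - p - 1e-9, 1_000_001)
vals = g0(xs)
i = vals.argmin()
print(xs[i], vals[i], p / 5 + (1 - p) / 4)  # ~0.0575, ~0.23263 > ~0.22210
```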
The final step is to bound \(g_{1}(x):=f(x,y_{1}(x))\) for all \(x\) satisfying \(-p<x<1-p\) and \(y_{1}(x)\geq y_{0}(x)\). First,
\[y_{1}(x)-y_{0}(x)=\frac{5(1-p-x)(p+x)p^{4}}{8(1-p)^{3}}-(p+x)(2(p+x)-1)\]
\[=\frac{p+x}{8(1-p)^{3}}\left(5(1-p-x)p^{4}-8(1-p)^{3}(2(p+x)-1)\right).\]
The above expression is non-negative if and only if
\[x\leq\frac{-5p^{5}+21p^{4}-56p^{3}+72p^{2}-40p+8}{5p^{4}-16p^{3}+48p^{2}-48p+ 16}=\frac{-425+140\sqrt{10}}{246}<0.08.\]
Thus, it suffices to show that \(g_{1}(x)\geq\frac{p}{5}+\frac{1-p}{4}\) for all \(-0.6\leq x\leq 0.08\), where the lower bound on \(x\) comes from the fact that \(p<0.6\). We have
\[g_{1}(x)=\frac{((p+x)^{3}+y_{1}(x))^{2}}{5p^{4}(p+x)}+\frac{(1-p-x)((1-p-x)^{3} -y_{1}(x))}{4(1-p)^{3}}\]
\[=\frac{5(1-p-x)^{2}(p+x)p^{4}}{64(1-p)^{6}}\]
\[+\frac{(1-p-x)(8(1-p)^{3}(1-p-x)^{3}+8(1-p)^{3}(p+x)^{3}-5(1-p-x)(p+x)p^{4})}{ 32(1-p)^{6}}\]
\[=-\left(\frac{487+154\sqrt{10}}{100}\right)x^{3}+\left(\frac{79+25\sqrt{10}}{ 50}\right)x^{2}+\frac{7+2\sqrt{10}}{60}.\]
Thus, \(\frac{dg_{1}}{dx}(0)=0\). Also,
\[\frac{d^{2}g_{1}}{dx^{2}}(x)=\frac{79+25\sqrt{10}}{25}-\left(\frac{1461+462 \sqrt{10}}{50}\right)x\]
which is positive for all \(x\leq\frac{-6+2\sqrt{10}}{3}\approx 0.108\). Therefore, the minimum of \(g_{1}(x)\) over all \(x\) in the range \(-0.6\leq x\leq 0.08\) is attained at \(x=0\). We are now done by observing that \(g_{1}(0)=\frac{7+2\sqrt{10}}{60}=\frac{p}{5}+\frac{1-p}{4}\).
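As a final sanity check (ours, not part of the proof), the inequality (7.2) can be tested on random step graphons, with \(t(D,W)\) computed by a five-index contraction:

```python
import numpy as np

rng = np.random.default_rng(3)
p = (8 - 2 * np.sqrt(10)) / 3
n = 40

def tK2(W): return W.mean()
def tK3(W): return np.einsum('ij,jk,ki->', W, W, W) / n**3
def tD(W):  # D = K4 minus the edge ij: edges ik, il, jk, jl, kl
    return np.einsum('ik,il,jk,jl,kl->', W, W, W, W, W) / n**4

for _ in range(20):
    W = rng.random((n, n)); W = (W + W.T) / 2
    lhs = tD(W) / (5 * p**4) + tK3(1 - W) * tK2(1 - W) / (4 * (1 - p)**3)
    assert lhs >= p / 5 + (1 - p) / 4 - 1e-9
print("ok")
```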
## 8 Conclusion
As we have seen in this paper, strongly common graphs are very useful for generating examples of \((p_{1},p_{2})\)-common pairs of graphs. A natural problem is to classify such graphs.
**Problem 8.1**.: Classify strongly common graphs.
Note that, by the results of [5, 15, 29], the strongly common graphs with odd girth are precisely the odd cycles. It would be interesting to know whether there exists a non-bipartite strongly common graph which is not an odd cycle and, more ambitiously, a strongly common graph of high chromatic number. Examples of the latter would strengthen the recent breakthrough result of Kral', Volec and Wei [19].
**Question 8.2**.: Does there exist a non-bipartite strongly common graph \(H\) such that \(H\) is not isomorphic to an odd cycle?
**Question 8.3**.: Do there exist strongly common graphs of arbitrary chromatic number?
In particular, it already seems difficult to determine whether a \(4\)-chromatic strongly common graph can exist. Given that Chen and Ma [5] proved that strongly common graphs other than \(K_{3}\) must be triangle-free, perhaps a natural family of graphs to consider are those which arise from the Mycielski construction. The question of whether such graphs are common was asked by Conlon, Fox and Sudakov [6, Section 2.6]; it would also be interesting to know if they are strongly common.
The recent results of [5, 29] suggest that the class of strongly common graphs is rather small, which limits the applicability of results like Lemma 3.1. It would therefore be interesting to extend the approach in this paper to graphs which satisfy a weaker hypothesis than being strongly common.
Theorem 1.10 demonstrates that, if \((H_{1},H_{2})\) is \((p,1-p)\)-common for some \(p\in(0,1)\), then \(H_{1}\) and \(H_{2}\) need not both be common. However, we conjecture that at least one of them must be common.
**Conjecture 8.4**.: _Let \(H_{1}\) and \(H_{2}\) be graphs. If there exists \(p\in(0,1)\) such that \((H_{1},H_{2})\) is \((p,1-p)\)-common, then at least one of \(H_{1}\) or \(H_{2}\) is common._
|
2310.00896 | Organized Event Participant Prediction Enhanced by Social Media
Retweeting Data | Nowadays, many platforms on the Web offer organized events, allowing users to
be organizers or participants. For such platforms, it is beneficial to predict
potential event participants. Existing work on this problem tends to borrow
recommendation techniques. However, compared to e-commerce items and purchases,
events and participation are usually of a much smaller frequency, and the data
may be insufficient to learn an accurate model. In this paper, we propose to
utilize social media retweeting activity data to enhance the learning of event
participant prediction models. We create a joint knowledge graph to bridge the
social media and the target domain, assuming that event descriptions and tweets
are written in the same language. Furthermore, we propose a learning model that
utilizes retweeting information for the target domain prediction more
effectively. We conduct comprehensive experiments in two scenarios with
real-world data. In each scenario, we set up training data of different sizes,
as well as warm and cold test cases. The evaluation results show that our
approach consistently outperforms several baseline models, especially with the
warm test cases, and when target domain data is limited. | Yihong Zhang, Takahiro Hara | 2023-10-02T04:26:07Z | http://arxiv.org/abs/2310.00896v1 | # Organized Event Participant Prediction Enhanced by Social Media Retweeting Data
###### Abstract
Nowadays, many platforms on the Web offer organized events, allowing users to be organizers or participants. For such platforms, it is beneficial to predict potential event participants. Existing work on this problem tends to borrow recommendation techniques. However, compared to e-commerce items and purchases, events and participation are usually of a much smaller frequency, and the data may be insufficient to learn an accurate model. In this paper, we propose to utilize social media retweeting activity data to enhance the learning of event participant prediction models. We create a joint knowledge graph to bridge the social media and the target domain, assuming that event descriptions and tweets are written in the same language. Furthermore, we propose a learning model that utilizes retweeting information for the target domain prediction more effectively. We conduct comprehensive experiments in two scenarios with real-world data. In each scenario, we set up training data of different sizes, as well as warm and cold test cases. The evaluation results show that our approach consistently outperforms several baseline models, especially with the warm test cases, and when target domain data is limited.
event-based system, social media, cross-domain system, graph embedding, neural recommendation
## I Introduction
Many digital platforms now offer organized events through the Internet, where users can be organizers or participants. For example, the platform Meetup1 allows people to organize offline gatherings through online registration. There are also flash sales platforms such as Gilt2 that offer limited-time product discounts. Moreover, retweeting viral messages of the moment on social media platforms such as Twitter3 can also be considered a type of event. Effectively predicting event participants can provide many benefits to event organizers and participants. For example, organizers can send out invitations more effectively [1], while potential participants can receive better recommendations [2]. Some previous research has found that the problem of event participant prediction can be solved with recommendation techniques, such as matrix factorization [3]. Indeed, if one considers events as items and participants as users, then recommending events to users can be performed similarly to recommending products to users with an e-commerce recommender system [4]. Unlike a product-based e-commerce platform, though, which has thousands of items, each purchased by thousands of users, events are organized and participated in with much lower frequency. Therefore, one problem with many event-based platforms is that they have not collected enough data to effectively learn a model of user preferences.
Footnote 1: [https://www.meetup.com/](https://www.meetup.com/)
Footnote 2: [https://www.gilt.com/](https://www.gilt.com/)
Footnote 3: [https://www.twitter.com](https://www.twitter.com)
On the other hand, social media platforms such as Twitter nowadays generate huge amounts of data that are publicly accessible [5]. A particular activity, _retweeting_, in which social media users repeat a popular tweet, can be seen as a type of event participation [6]. We argue that event-based platforms can use data on such activity to support their own prediction models, even though some restrictions apply. For example, due to privacy concerns, it is assumed that users in the target domain will not offer their social media account information. This condition invalidates many cross-domain recommendation solutions that rely on linked accounts [7, 8, 9]. Nevertheless, even if the users are not linked to social media accounts, we can still obtain some useful information from social media: the interaction data consisting of user retweeting records of past tweets, and the tweet texts, which are written in the same natural language. Retweeting data are useful for event participant prediction because the act of retweeting generally reveals a user's preference towards what is described in the tweet text [10, 11].
In this paper, we propose a method to utilize social media retweeting data when learning an event participant prediction model for a target domain that has limited training data. As mentioned, we do not assume there are linkable users across social media and the target domain. Instead, we only assume that the event descriptions in the target domain are written in the same language as the social media tweets. This becomes our basis for linking the two domains. We generate a joint graph using data from the two domains and learn cross-domain user embeddings in the same embedding space. In this way, we can increase the training data by adding social media retweeting data and train more accurate models. To the best of our knowledge, this is the first work to use social media retweeting to enhance event participant prediction.
## II Related Work
We follow the recent research trend of event participant prediction, which has been identified as an important problem in event-based social networks (EBSNs). Previously, Liu et al. studied the participant prediction problem in the context of EBSNs [12]. Their technique relied on the topological structure of the EBSN and early-responding users. Similarly, Zhang et al.
[13] proposed to engineer some user features and then apply machine learning methods such as logistic regression, decision trees, and support vector machines. Additionally, Du et al. considered the event descriptions, which were overlooked in previous works [14]. As matrix factorization became a standard method in recommendation systems [15, 16], later works also attempted to use it for participant prediction. For example, Jiang and Li proposed to solve the problem by engineering user features and applying feature-based matrix factorization [3]. In this paper, we propose a prediction framework built on top of a deep neural network model of matrix factorization [17]. In contrast to existing works, our framework is designed to use social media retweeting data to enhance the recommendation performance in the target domain.
Our inspiration comes from various works that use a support domain to help solve computational problems in a target domain. In particular, social media has been used in various works as the support domain. For example, Wei et al. found that Twitter volume spikes could be used to predict stock options pricing [18]. Asur and Huberman studied whether social media chatter can be used to predict movie sales [19]. Pai and Liu proposed to use tweets and stock market values to predict vehicle sales [20]. Broniatowski et al. made an attempt to track influenza with tweets [21], combining Google Flu Trends with tweets to track municipal-level influenza. These works, however, only used high-level features of social media, such as message counts or aggregated sentiment scores. In this work, we consider a more general setting that uses retweeting as a supporting source to help participation prediction in the target domain, and users and events are transformed into embeddings for wider applicability.
## III Problem Formulation
We formulate the problem of event participant prediction leveraging social media retweeting data as follows. In the target domain, we have a set of event data \(E^{T}\), and for each event \(e\in E^{T}\), there is a number of participants \(p(e)=\{u_{1}^{T},...,u_{n}^{T}\}\), with \(u_{i}\in U^{T}\). In the social media retweeting data, we have a set of tweets \(E^{S}\); for \(e\in E^{S}\), we have retweeters \(p(e)=\{u_{1}^{S},...,u_{m}^{S}\}\), with \(u_{i}\in U^{S}\). Normally we have fewer event data in the target domain than in the retweeting data, so \(|E^{S}|>|E^{T}|\). We assume no identifiable common users across the two domains, so \(U^{T}\cap U^{S}=\emptyset\). An event in the target domain is described using the same language as the tweets. Let \(d(e)=\{w_{1},...,w_{l}\}\) be the words in the description of event \(e\). If \(V^{S}\) and \(V^{T}\) are the description vocabularies in the tweets and the target domain, then \(V^{S}\cap V^{T}\neq\emptyset\).
We can represent event descriptions and users as vector-form embeddings. Since the event descriptions in the target domain and the tweet texts are written in the same language, their embeddings can be obtained from the same embedding space. We denote by \(r(e)\) the function that obtains the embedding of event \(e\), for both target domain events and tweets. In the target domain, we also have base user embeddings \(l^{B}(u)\), derived from the information users provide on the platform.
## IV Entity-connected Graph for Learning Joint User Embedding
There exist a number of established techniques that learn embeddings from graphs [22]. Our method is to learn a joint embedding function for both target domain and social media users by deploying such techniques, after creating a graph that connects them. Based on the participation data, we create three kinds of relations in the graph, namely, the participation relation, the co-occurrence relation, and the same-word relation.
The participation relation comes from the interaction data, and is set between users and words of the event. Suppose user \(u\) participates in event \(e\). Then we create \(rel(u,w)=participation\), for each word \(w\) in \(d(e)\).
The co-occurrence relation comes from the occurrence of words in the event description. We use _mutual information_[23] to represent the co-occurrence behavior. Specifically, we have \(mi(w_{1},w_{2})=\log\frac{N(w_{1},w_{2})\,|E|}{N(w_{1})\,N(w_{2})}\), where \(N(w_{1},w_{2})\) is the frequency of co-occurrence of words \(w_{1}\) and \(w_{2}\), \(|E|\) is the total number of events, and \(N(w)\) is the frequency of occurrence of a single word \(w\). We use a threshold \(\phi\) to determine the co-occurrence relation, such that if \(mi(w_{1},w_{2})>\phi\), we create \(rel(w_{1},w_{2})=co\_occurrence\).
Two kinds of relations mentioned above are created within a single domain. We now connect the graph of two domains using the same-word relation. We create \(rel(w^{T},w^{S})=same\_word\), if a word in the target domain and a word in the retweeting data are the same word. In this way, two separate graphs for two domains are connected through entities in the event descriptions. Once we have the joint graph, we can use established graph embedding learning techniques to learn user embeddings. In our experiment, we use TransE [22] as the embedding learning technique.
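To make the construction concrete, the following Python sketch assembles the three kinds of triples from raw event and tweet records. The input schema (`users`, `words`, `domain`) and the function name are illustrative assumptions rather than the paper's actual data format, and the resulting triples would be fed to an off-the-shelf TransE implementation to learn the joint embeddings.

```python
import math
from collections import Counter
from itertools import combinations

def build_joint_triples(events, phi=2.0):
    """Assemble (head, relation, tail) triples for the joint graph.

    `events` mixes target-domain events and tweets as dicts of the form
    {"users": [...], "words": [...], "domain": "T" or "S"}; this schema is
    illustrative, not the paper's actual data format.
    """
    triples, vocab = [], {"T": set(), "S": set()}
    word_count, pair_count = Counter(), Counter()
    n_events = len(events)
    for ev in events:
        d, words = ev["domain"], set(ev["words"])
        vocab[d].update(words)
        # participation relation between each participant/retweeter and each description word
        for u in ev["users"]:
            for w in words:
                triples.append((f"{d}:{u}", "participation", f"{d}:{w}"))
        word_count.update((d, w) for w in words)
        pair_count.update((d, p) for p in combinations(sorted(words), 2))
    # co-occurrence relation, kept only when mutual information exceeds phi
    for (d, (w1, w2)), n12 in pair_count.items():
        mi = math.log(n12 * n_events / (word_count[(d, w1)] * word_count[(d, w2)]))
        if mi > phi:
            triples.append((f"{d}:{w1}", "co_occurrence", f"{d}:{w2}"))
    # same-word relation bridging the two domains through shared vocabulary
    for w in vocab["T"] & vocab["S"]:
        triples.append((f"T:{w}", "same_word", f"S:{w}"))
    return triples  # feed to a TransE implementation to learn joint embeddings
```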
## V Event Participant Prediction Leveraging Joint User Embeddings
We have shown how to obtain joint user embeddings for the two event domains. Now we need a method to use them for the problem we aim to solve, that is, event participant prediction. In this section, we first discuss how event participant prediction can be solved in a single domain. Then we present our framework that leverages joint user embeddings to solve the problem.
### _Single Domain Prediction_
We find that event participant prediction can be solved by recommendation techniques. Similar to the user and item interaction in a recommendation problem, event participation can also be treated as the interaction between users and events. After considering several options, we choose the state-of-the-art cold-start recommendation model proposed by Wang et al. [24]. It is a generalization of a neural matrix factorization (NeuMF) model [17], which originally used one-hot representations for users and items.
We aim to use the model to learn the following function:
\[\hat{y}_{ue}=f(l(u),r(e)) \tag{1}\]
where \(l(u)\) and \(r(e)\) are the learned embeddings for user \(u\) and event \(e\). NeuMF ensembles two recommendation models, called generalized matrix factorization (GMF) and multi-layer perceptron (MLP). Specifically, it makes prediction:
\[f(l(u),r(e))=\sigma\left[GMF(l(u),r(e))\bigoplus MLP(l(u),r(e))\right] \tag{2}\]
where \(\sigma\) is a linear mapping function, and \(\bigoplus\) is a concatenation operation.
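A minimal Keras sketch of this prediction head is given below, assuming pre-computed user and event embeddings as inputs; the layer sizes are illustrative, and the named concatenation layer anticipates its reuse by the BGF unit described later.

```python
from tensorflow.keras import layers, Model

def build_neumf(user_dim=200, event_dim=300, latent=64, mlp_units=(256, 128, 64)):
    """NeuMF head over pre-computed embeddings (Eq. 2); dimensions are illustrative."""
    u = layers.Input(shape=(user_dim,), name="user_embedding")
    e = layers.Input(shape=(event_dim,), name="event_embedding")
    # GMF branch: project both inputs into a shared latent space, multiply element-wise
    gmf = layers.Multiply()([layers.Dense(latent)(u), layers.Dense(latent)(e)])
    # MLP branch: concatenate the inputs and pass them through fully connected layers
    h = layers.Concatenate()([u, e])
    for units in mlp_units:
        h = layers.Dense(units, activation="relu")(h)
    # sigma in Eq. (2): a mapping over the concatenation of the two branches
    concat = layers.Concatenate(name="neumf_concat")([gmf, h])  # reused later by BGF
    y_hat = layers.Dense(1, activation="sigmoid", name="y_hat")(concat)
    return Model(inputs=[u, e], outputs=y_hat)

model = build_neumf()
model.compile(optimizer="adam", loss="binary_crossentropy")  # the log loss of Eq. (3)
```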
Since the dataset usually contains only observed interactions, i.e., user purchase records of items, it is necessary to generate some negative samples when training the model, for example, by randomly choosing some user-item (user-event) pairs that have no interaction. The loss function for participant prediction is defined as follows:
\[\mathcal{L}_{PartP}=\sum_{(u,e)\in\mathcal{Y}\cup\mathcal{Y}^{-}}y_{ue}\log \hat{y}_{ue}+(1-y_{ue})\log(1-\hat{y}_{ue}), \tag{3}\]
where \(y_{ue}=1\) if user \(u\) participated in event \(e\), and 0 otherwise. \(\mathcal{Y}\) denotes observed interactions and \(\mathcal{Y}^{-}\) denotes negative samples.
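The negative set \(\mathcal{Y}^{-}\) can be drawn as in the following sketch; the 4-to-1 negative-to-positive ratio follows the experimental setup described later, and the record schema is illustrative. The loss itself is then the standard binary cross-entropy of Equation (3).

```python
import random

def make_training_pairs(events, all_users, neg_per_pos=4, seed=0):
    """Build labelled (user, event, label) pairs with sampled negatives."""
    rng = random.Random(seed)
    pairs = []
    for ev in events:
        participants = set(ev["users"])
        for u in participants:
            pairs.append((u, ev["id"], 1))           # observed interaction, in Y
            for _ in range(neg_per_pos):             # sampled non-interactions, in Y^-
                v = rng.choice(all_users)
                while v in participants:
                    v = rng.choice(all_users)
                pairs.append((v, ev["id"], 0))
    return pairs
```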
### _Leveraging Joint User Embeddings_
We have acquired in the previous section joint user embeddings, \(l^{J}(u)\), from the entity-connected graph. Note that we can apply the same graph technique to learn embeddings in single domains as well, denoted as \(l^{S}(u)\) and \(l^{T}(u)\) respectively for the retweeting data and target domain. From problem formulation, we also have base user embedding for the target domain \(l^{B}(u)\). A problem is that the graph embeddings \(l^{J}(u)\) and \(l^{T}(u)\) are only available for a small number of target domain users, because they are learned from limited participation data. When we predict participants in future events, we need to consider the majority of users who have not participated in past events. These users have base embeddings \(l^{B}(u)\) but not graph embeddings \(l^{J}(u)\) and \(l^{T}(u)\).
We need to map base embedding \(l^{B}(u)\) to the embedding space of \(l^{J}(u)\) when making the prediction. As some previous works proposed, this can be done through linear latent space mapping [25]. Essentially it is to find a transfer matrix \(M\) so that \(M\times U_{i}^{s}\) approximates \(U_{i}^{t}\), and \(M\) can be found by solving the following optimization problem
\[\min_{M}\sum_{u_{i}\in\mathbf{U}}\mathcal{L}(M\times U_{i}^{s},U_{i}^{t})+ \Omega(M), \tag{4}\]
where \(\mathcal{L}(.,.)\) is the loss function and \(\Omega(M)\) is the regularization. After obtaining \(M\) from users who have both base embeddings and graph embeddings, we can map the base user embedding to graph user embedding \(l^{J^{\prime}}(u)=M\times l^{B}(u)\) for those users who have no graph embedding.
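With squared loss and L2 regularization, Equation (4) has a closed-form ridge-regression solution. The sketch below uses a row convention (G approximated by B M), equivalent to the column form written above.

```python
import numpy as np

def fit_transfer_matrix(B, G, lam=1e-2):
    """Closed-form ridge solution of Eq. (4) with squared loss and L2 regularization.

    B: (n, base_dim) base embeddings; G: (n, graph_dim) graph embeddings of the
    target-domain users that have both. Returns M such that G is approximated
    by B @ M (row convention; the paper writes the equivalent column form M x U).
    """
    d = B.shape[1]
    return np.linalg.solve(B.T @ B + lam * np.eye(d), B.T @ G)

# usage: map a base-only user into graph space, l_J'(u) = b @ M
# M = fit_transfer_matrix(B_known, G_known); g_hat = b @ M
```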
An alternative solution would be to use the base user embeddings as the input for training the model. This would then require us to map graph user embeddings to target domain base user embeddings. Unlike mapping base embeddings to graph embeddings, where some target domain users have both embeddings, we do not have social media users with base embeddings, so the mapping requires a different technique. We solve it by finding the most similar target domain users for a social media user and using the average of their base embeddings as the social media user's base embedding. More specifically, we pick the \(k\) target domain users most similar according to the graph embedding and take the average of their base embeddings:
\[l^{B^{\prime}}(u)=\frac{1}{k}\sum_{u_{i}\in U^{k}}l^{B}(u_{i}) \tag{5}\]
where \(U^{k}\) is the set of the \(k\) target domain users most similar to the social media user \(u\) according to their graph embeddings.
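As a sketch, using cosine similarity in the graph space (one plausible choice; the paper does not fix the similarity measure):

```python
import numpy as np

def base_embedding_for_social_user(g_social, G_target, B_target, k=5):
    """Eq. (5): average the base embeddings of the k target users closest in graph space."""
    # cosine similarity between the social user's graph embedding and all target users'
    sims = (G_target @ g_social) / (
        np.linalg.norm(G_target, axis=1) * np.linalg.norm(g_social) + 1e-9
    )
    top_k = np.argsort(sims)[-k:]
    return B_target[top_k].mean(axis=0)
```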
### _Base and Graph Fusion_
We have shown two ways to create joint training data: by mapping graph embeddings to base embeddings, and by mapping base embeddings to graph embeddings. Both embedding spaces have their advantages. The graph embeddings are derived from the interaction data and thus contain information useful for predicting participation. The base embeddings contain user context obtained from the target domain, which can supply extra information. While it is possible to use the two types of embeddings separately, we propose a fusion unit that leverages the advantages of both embedding spaces. We call the method base and graph embedding fusion (BGF).
After obtaining training data for the two types of embeddings, we train two prediction models separately for them using the NeuMF model. The input event embeddings \(r(e)\) are the same for both models. The input user embeddings are selected depending on whether the user has a graph embedding available or not. More specifically, for the graph embedding space, the input \(l(u)\) is set to \(l^{J}(u)\) if user \(u\) has a graph embedding; otherwise it is set to the mapped embedding \(l^{J^{\prime}}(u)\). We do the same for the base embedding space, selecting either \(l^{B}(u)\) or \(l^{B^{\prime}}(u)\) depending on availability. Then, instead of outputting predictions, we take the concatenation layers of the two NeuMF models, produced by the concatenation in Equation (2), and concatenate them together. The prediction is made on the output of this large concatenation layer.
Following a recent trend in deep learning research, we use an attention module [26] to further refine the output of the model. An attention module is generally effective when we need to choose the more important information from the inputs. Since running the two prediction models yields a large number of information units, the attention module is well suited here.
The idea of attention is to use a vector query to assign weights to a matrix so the more important factors can be emphasized. The query is compared with keys, a reference source, to produce a set of weights, which is then used to combine the candidate embeddings. For the current scenario, we use the concatenated output of NeuMF as the key and the event embedding as the query. The output of the attention module is a context vector \(c_{i}\) for event \(i\)
\[\mathbf{c}_{i}=\sum_{j}a_{ij}s_{j} \tag{6}\]
where \(a_{ij}\) are the attention weights, and \(s_{j}\) are the keys. We transform the concatenated output of NeuMF into a matrix with the same number of columns as the query dimension and use it as the key \(s_{j}\). The attention weights can be obtained using the \(general\) attention score [27].
We insert the attention module after the output of two prediction models and use the event embedding as the query to select the more important information. Empirically, we do find adding the attention module improves overall prediction accuracy.
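A NumPy sketch of this module with a fixed (untrained) score matrix illustrates Equation (6) and the \(general\) score:

```python
import numpy as np

def general_attention(query, keys, W):
    """Luong-style 'general' attention (Eq. 6): c = sum_j a_j * s_j.

    query: (d_q,) event embedding; keys: (n, d_k) rows of the reshaped NeuMF output;
    W: (d_q, d_k) score matrix, learned in practice but fixed in this sketch.
    """
    scores = keys @ (W.T @ query)          # general score: q^T W s_j for each key row
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()               # softmax over the keys
    return weights @ keys                  # context vector c
```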
We note that BGF can also be used within a single domain. We can construct the graph for a single domain without the bridging relations, i.e., keeping only the word co-occurrence and the user participation relations. Using the method described above, we can then generate two sets of embeddings, from the base embeddings and from the graph, to which the BGF unit can be applied. In the empirical study presented later, the single domain BGF is shown to achieve relatively high prediction accuracy.
### _Leveraging Cross-domain Learning_
We have integrated social media retweeting into the event participation data of a target domain using the method described above. Now we can simply combine the retweeting data with the event participant data, treating them as a single dataset. However, there are better ways to train the model across domains, as proposed by recent studies in transfer learning. Here we will introduce a transfer learning technique that can be used to further improve our method.
The technique is called knowledge distillation (KD) [28]. It has been shown that, when model learning is shifted from one task to another task, this technique can be used to distill knowledge learned in the previous task. The distilled knowledge becomes accessible through the KL-divergence, a measure of the difference between prediction results using the new model and the old model. Specifically, we set up a loss through KL-Divergence:
\[\mathcal{L}_{KD}=D_{KL}(\hat{Y}_{new}||\hat{Y}_{old}) \tag{7}\]
where \(\hat{Y}_{new}\) and \(\hat{Y}_{old}\) are predictions made with the model learned in the new domain and the old domain, respectively, and \(D_{KL}\) is the point-wise KL-Divergence.
We first train the model using the retweeting data and then shift to the target domain participation data. The single domain loss and the KD loss can be combined in cross-domain model learning as
\[\mathcal{L}_{CD}=\mathcal{L}_{PartP}+\mathcal{L}_{KD}. \tag{8}\]
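A sketch of this combined objective, assuming the Bernoulli outputs of the two NeuMF models:

```python
import tensorflow as tf

def cross_domain_loss(y_true, y_new, y_old, eps=1e-7):
    """Eq. (8): participation log loss (Eq. 3) plus point-wise KL distillation (Eq. 7).

    y_new: predictions of the model being trained on the target domain;
    y_old: predictions of the frozen model pre-trained on the retweeting data.
    """
    part = tf.reduce_mean(tf.keras.losses.binary_crossentropy(y_true, y_new))
    p = tf.clip_by_value(y_new, eps, 1.0 - eps)
    q = tf.clip_by_value(y_old, eps, 1.0 - eps)
    # point-wise KL divergence between the Bernoulli outputs of the two models
    kd = p * tf.math.log(p / q) + (1.0 - p) * tf.math.log((1.0 - p) / (1.0 - q))
    return part + tf.reduce_mean(kd)
```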
## VI Experimental Evaluation
To verify the effectiveness of our approach, we perform experiments with a public event dataset taken from the event platform Meetup. On Meetup, events are explicitly defined by organizers, and users register for participation. We use Twitter as the supporting social media source. In this section, we discuss the dataset preparation and the experiment setup before presenting the evaluation results.
### _Dataset Collection_
We use a publicly available dataset4. The dataset was collected for the purpose of analyzing information flow in event-based social networks [12]. On Meetup, users can participate in social events, which are only active for a limited period, or they can join groups, which have no time restriction. Events and users are also associated with tags, which carry descriptive English keywords. Popular event examples are language study gatherings, jogging and hiking sessions, and wine tasting workshops. The dataset contains relations between several thousand users, events, and groups. Our interest is mostly in the user-event relation.
Footnote 4: [https://ieee-dataport.org/documents/meetup-dataset](https://ieee-dataport.org/documents/meetup-dataset)
We prepare a corresponding Twitter retweet dataset. We monitor Twitter for tweets authored by users with the keywords "she/her" and "he/him" in their profile description, which results in more than two million tweets. While these tweets cover many topics, they are more or less gender-aware given the author profiles. We construct retweet clusters from these retweets and obtain several thousand retweet clusters, each retweeted at least ten times by users in the dataset.
Since our objective is to investigate the effect of adding retweets when the target domain has limited data, we generate datasets of different sizes. Specifically, we select three dataset sizes, containing 100, 200, and 500 events. To balance the retweets with the event data, we use the same number of tweets as events. The events are randomly selected, and the tweets are also randomly selected with the restriction that their texts share common words with the event descriptions. The numbers of events, users, participations, tweets, Twitter users, and retweets are shown in Table I.
We use pre-trained embeddings to represent event descriptions and tweets in the same language. Specifically, we use the Spacy5 package, which provides word embeddings trained on Web data and a pipeline to transform sentences into embeddings. For our approach, we also need to provide base user embeddings. In the Meetup dataset, users are associated with tags that carry text keywords. We again use Spacy to transform user tags into embeddings and use them as the base user embeddings.
Footnote 5: [https://spacy.io/](https://spacy.io/)
### _Experiment Setup_
We set up two test cases, based on whether or not the test data contain events in the training data. In the case where the test data contain events in the training data, which is called the _warm test_, we randomly pick one user from each event, adding it to the test data and removing it from the training data. In the case where the test data contain no event in the training data, which is called the _cold test_, we use all data shown in Table I as the training data and an additional 1,000 events as the test data.
We create the training dataset by random negative sampling. For every interaction entry \((u,e)\) in the training dataset, which is labeled as positive, we randomly pick four users who have not participated in the event and label the pairs as negative. Testing is done by event. For each event \(e\) in the test dataset, we label all users who participated in the event, \(U^{+}\), as positive. Then, for the purpose of consistent measurement, we pick \(n-|U^{+}|\) users, labeled as negative, so that the total number of candidates is \(n\), which is set to 100. For the warm test, \(|U^{+}|=1\), while for the cold test, \(|U^{+}|\) varies from event to event.
We predict the user preference score for all \(n\) users, rank them by the score, and measure the prediction accuracy based on the top \(k\) users in the rank. We measure \(Recall@10\) and \(Precision@5\).
We compare our method with three baselines in the existing literature, in addition to variations of our own approaches. The baselines include:
* base, which runs the recommendation model on target domain base embeddings.
* BGF, the base and graph fusion model we introduced. In this variation, it is used with only the target domain data.
* MIX, a variation of our approach without the knowledge distillation component. Instead, it mixes target domain participation data and the retweets as a single training dataset.
* BPRMF [29], a single domain matrix factorization-based recommendation model, known for its effectiveness in implicit recommendation.
* CKE [30], a knowledge graph-based recommendation model. It can be used for cross-domain prediction if the supporting domain is transformed into a knowledge graph.
* KGAT [31], a state-of-the-art knowledge graph-based recommendation model. It can be used for cross-domain prediction like CKE. However, it does not deal with cold items so we skip it for the cold test.
We implement our approach and all baselines in Python and TensorFlow. We set the latent factor embedding size to 200 where needed.
### _Evaluation Results and Discussions_
The experimental results are shown in Table II. Single-domain methods are indicated by (SD) and cross-domain methods are indicated by (CD). The best results in each test are highlighted in bold font.
First we look at the warm test. We can see that the proposed method has a clear advantage over other methods, achieving the best accuracy for both scenarios and for all training data sizes. In particular, it steadily outperforms the MIX method, validating the effectiveness of knowledge distillation. The second best cross-domain method is KGAT, especially for smaller training data sizes, but its performance deteriorates as the training data size increases. The best single-domain method, BGF, outperformed cross-domain methods like KGAT and CKE when the training data size is large, showing the strength of fusing graph embeddings and base embeddings together. The proposed method, which utilizes BGF and knowledge distillation, outperformed single-domain BGF by up to 66%.
Next we look at the cold test. Here the results are more mixed. When the training data size is smaller, the proposed method generally shows some advantage. For example, when the training data size is 100, it achieves 2.4% higher Recall@10 than BGF. When the training data size is 500, the base model achieves the best accuracy.
Comparing the warm and cold tests, we see that cross-domain methods have an advantage in the former but a disadvantage in the latter. The reason is that when we already have some participant data for an event, it is easier to use external knowledge to enhance the information. However, when there is no data for a new event, the useful information comes mostly from the target domain itself, and retweeting data can only add limited useful information to the model, if not noise, especially when the target domain has sufficient training data.
## VII Conclusion
In this paper, we propose to use social media retweeting data as a general enhancement for event participant prediction in a target domain. Our proposed solution involves a cross-domain knowledge graph, which assumes that event descriptions are written in the same language as social media tweets. We also present a learning method that utilizes joint user embeddings from the knowledge graph and makes use of knowledge distillation. We test the method with real-world event participation data, comparing it with several baselines, and show that our proposed method has a clear advantage in terms of prediction accuracy, especially for the warm tests, where some participants of events are already known. For the cold test, we reach mixed results, with our method superior only for some training data sizes.
## Acknowledgement
This research is partially supported by JST CREST Grant Number JPMJCR21F2. |
2305.15793 | Feature space reduction method for ultrahigh-dimensional, multiclass
data: Random forest-based multiround screening (RFMS) | In recent years, numerous screening methods have been published for
ultrahigh-dimensional data that contain hundreds of thousands of features;
however, most of these methods cannot handle data with thousands of classes.
Prediction models built to authenticate users based on multichannel biometric
data result in this type of problem. In this study, we present a novel method
known as random forest-based multiround screening (RFMS) that can be
effectively applied under such circumstances. The proposed algorithm divides
the feature space into small subsets and executes a series of partial model
builds. These partial models are used to implement tournament-based sorting and
the selection of features based on their importance. To benchmark RFMS, a
synthetic biometric feature space generator known as BiometricBlender is
employed. Based on the results, the RFMS is on par with industry-standard
feature screening methods while simultaneously possessing many advantages over
these methods. | Gergely Hanczár, Marcell Stippinger, Dávid Hanák, Marcell T. Kurbucz, Olivér M. Törteli, Ágnes Chripkó, Zoltán Somogyvári | 2023-05-25T07:16:26Z | http://arxiv.org/abs/2305.15793v1 | Feature space reduction method for ultrahigh-dimensional, multiclass data: Random forest-based multiround screening (RFMS)
###### Abstract
In recent years, numerous screening methods have been published for ultrahigh-dimensional data that contain hundreds of thousands of features; however, most of these methods cannot handle data with thousands of classes. Prediction models built to authenticate users based on multichannel biometric data result in this type of problem. In this study, we present a novel method known as _random forest-based multiround screening (RFMS)_ that can be effectively applied under such circumstances. The proposed algorithm divides the feature space into small subsets and executes a series of partial model builds. These partial models are used to implement tournament-based sorting and the selection of features based on their importance. To benchmark RFMS, a synthetic biometric feature space generator known as _BiometricBlender_ is employed. Based on the results, the RFMS is on par with industry-standard feature screening methods while simultaneously possessing many advantages over these methods.
## Introduction
The understanding of human motor coordination and the building of prediction models to meet various business needs have become widely studied topics in fields such as neurology and cybersecurity. With the help of adequate sensors, gestures, walking, handwriting, eye movement, or any other human motor activity can be transformed into multidimensional time series. However, from a general perspective, any fixed set of features is either uncharacteristic of these time series, or it is too large for resource efficient classification. Thus, instead of computing an _a priori_ defined, conveniently small set of features, a promising alternative strategy is to create an ultrahigh-dimensional dataset that consists of hundreds of thousands of features and to search for the most informative minimal subset [1]. In this process, as well as in many other machine learning applications, the evaluation of feature importance and the elimination of irrelevant or redundant predictors has become one of the crucial elements in improving the performance of algorithms [2]. This elimination can increase the accuracy of the learning process and reduce the resource needs of model building. The statistical challenges of high dimensionality have been thoroughly reviewed in [3, 4, 5].
Traditional variable selection methods do not usually work well in ultrahigh-dimensional data analysis because they aim to specifically select the optimal set of active predictors [6, 7, 8, 9]. It has also been reported that traditional dimensionality reduction methods such as principal component analysis (PCA) do not yield satisfactory results for high dimensional data (for example, see [10, 11, 12]). In contrast to these methods, feature screening uses rough but fast techniques to select a larger set that contains most or all of the active predictors [13, 14, 15]. Although several screening methods have been published for ultrahigh dimensional data in recent years (e.g., [16, 17, 18, 19, 20, 21]), only a few of them can be used in cases when the response variable contains numerous classes. In particular, in the domains of neuroscience and biometric authentication, datasets with these properties are often encountered.
To reduce various ultrahigh dimensional feature spaces in binary classification problems, Fan and Li (2008) [22] proposed a _sure independence screening (SIS)_ method in the context of linear regression models. According to Fan and Fan (2008) [23], all features that effectively characterize both classes can be extracted by using two-sample \(t\)-test statistics, thus resulting in _features annealed independence rules (FAIR)_. For similar binary classification problems, Mai and Zou (2013) [13] used a Kolmogorov filter (KF) method, which was extended to handle a multiclass response in Mai and Zou (2013) [14]. The KF method was also applied to the ultrahigh dimensional binary classification problem with a dependent variable in Lai et al. (2017) [24]. For
solving similar tasks, Roy et al. (2022)[21] proposed a model-free feature screening method based on energy distances (see [25, 26]).
While most existing feature screening approaches are unsuitable for examining higher-order interactive structures and nonlinear structures, random forest (RF)[27] can overcome such difficulties[28]. To provide a robust screening solution for ultrahigh-dimensional, multiclass data, we propose the _random forest-based multiround screening (RFMS)_ method. The Julia package that implements RFMS is publicly available on GitHub[29]. The RFMS improves the accuracy and scalability of both traditional selection methods and existing RF-based screening by organizing the screening process into rounds. As an advantage, the input is processed in larger chunks, and we can iteratively distill a well-predicting subset of features.
The paper is organized as follows. The Data and methodology section introduces the dataset that was used for benchmarking and the proposed feature screening method. The Results and discussion section presents the performance of the novel screening method and compares it with other reduction algorithms. Finally, the Conclusions and future work section provides our conclusions and suggests future research directions.
## Data and methodology
### Synthetic dataset
To compare the performance of the proposed RFMS with a wide range of feature screening methods, an ultrahigh dimensional, multiclass feature space--with ground truth and some additional side information on the usefulness of the features--was employed. This feature space imitates the key properties of the private signature dataset of Cursor Insight (the winner of the ICDAR competition on signature verification and writer identification in 2015 [30]). Moreover, it was compiled by using the _BiometricBlender_ data generator [31]. The _BiometricBlender_ Python package provides an alternative to real biometric datasets, which are typically not freely accessible and cannot be published for industrial reasons. It is publicly available on GitHub [32].
The following parameters were set during the data generation process (a sketch of the corresponding generator invocation follows the list):
* n-classes = 100;
* n-samples-per-class = 64;
* n-true-features = 100;
* n-fake-features = 300;
* min-usefulness = 0.5;
* max-usefulness = 1;
* location-sharing-extent = 50;
* location-ordering-extent = 20;
* n-features-out = 10 000;
* blending-mode = 'logarithmic';
* min-count = 4;
* max-count = 8;
* random-state = 137.
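A possible invocation is sketched below; the flag values mirror the list above, but the module entry point is an assumption, so the BiometricBlender repository should be consulted for the actual interface.

```python
import subprocess

# Hypothetical command line; the flags mirror the parameter list above, but the
# exact entry point of the BiometricBlender package may differ -- check its README.
subprocess.run([
    "python", "-m", "biometric_blender",
    "--n-classes", "100",
    "--n-samples-per-class", "64",
    "--n-true-features", "100",
    "--n-fake-features", "300",
    "--min-usefulness", "0.5",
    "--max-usefulness", "1",
    "--location-sharing-extent", "50",
    "--location-ordering-extent", "20",
    "--n-features-out", "10000",
    "--blending-mode", "logarithmic",
    "--min-count", "4",
    "--max-count", "8",
    "--random-state", "137",
], check=True)
```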
The resulting dataset contains a categorical target variable with 100 unique classes and 10 000 intercorrelated features. Note that none of the features in itself contains enough information to accurately classify the data over the target variable. However, an appropriate combination can provide sufficient information to achieve classification with high accuracy. Due to the high dimensionality of the dataset, the identification of such a combination is a nontrivial task (regardless of the classification algorithm). The screening algorithm introduced in this paper provides a reliable, robust, and resource-efficient means to achieve that goal.
### Random forest-based multiround screening
Before we describe the steps of the proposed screening algorithm, several notations must be introduced. Let \(y\in\{1,2,\ldots,k\}\) be a categorical target variable that contains \(k\) different classes (\(k\in\mathbb{N}^{+}\), \(k\geq 2\)), and let \(\textbf{x}=\langle x_{1},x_{2},\ldots,x_{n}\rangle\) be the tuple of input features (\(n\in\mathbb{N}^{+}\)). (Note that the method may straightforwardly be applied to continuous target variables as well.) Moreover, let \(\alpha,\beta\in\mathbb{N}^{+}\) be predefined parameters such that \(1\leq\beta\leq\alpha\leq n\), where \(\alpha\) denotes the size of the subsets that the feature space will be divided into, and \(\beta\) denotes the number of features that will be selected by the algorithm. For optimal values of \(\alpha\) and \(\beta\), see the Supplementary information parameters step-size and reduced-size, respectively.
**Preparation.** First, the input features of **x** are arranged in random order. Formally, let \(\pi\) be a random permutation of \(\{1,2,\ldots,n\}\), then
\[\textbf{x}_{\pi}=\left\langle x_{\pi(1)},x_{\pi(2)},\ldots,x_{\pi(n)}\right\rangle\]
denotes the randomly ordered tuple of input features. \(\mathbf{x}_{\pi}\) is then divided into \(m=\left\lceil n/\alpha\right\rceil\) subsets as follows:
\[\mathbf{x}_{\pi}^{1} =\left\langle x_{\pi(1)},x_{\pi(2)},\ldots,x_{\pi(\alpha)}\right\rangle,\] \[\mathbf{x}_{\pi}^{2} =\left\langle x_{\pi(\alpha+1)},x_{\pi(\alpha+2)},\ldots,x_{\pi(2 \alpha)}\right\rangle,\] \[\vdots\] \[\mathbf{x}_{\pi}^{j} =\left\langle x_{\pi((j-1)\alpha+1)},x_{\pi((j-1)\alpha+2)}, \ldots,x_{\pi(j\alpha)}\right\rangle\quad(1\leq j<m),\] \[\vdots\] \[\mathbf{x}_{\pi}^{m} =\left\langle x_{\pi((m-1)\alpha+1)},x_{\pi((m-1)\alpha+2)}, \ldots,x_{\pi(n)}\right\rangle.\]
**Iteration.** In this step, we iterate over the abovementioned subsets by selecting the \(\beta\) most important features from a subset, adding them to the next subset, and repeating this process until the \(\beta\) most important features are selected from the last subset. Formally, for \(1\leq i\leq m\), let
\[\bar{\mathbf{x}}_{\pi}^{i}=\mathbf{x}_{\pi}^{i}\frown\mathbf{z}^{i-1}=\left\langle\bar{x}_{1}^{i},\bar{x}_{2}^{i},\ldots,\bar{x}_{t}^{i}\right\rangle\]
(i.e., the concatenation of the two tuples), where \(t=|\mathbf{x}_{\pi}^{i}|+\beta\leq\alpha+\beta\), \(\mathbf{z}^{0}=\left\langle\right\rangle\) is an empty tuple, and \(\mathbf{z}^{i}\)\((1\leq i<m)\) will be defined below. (Note that \(\bar{\mathbf{x}}_{\pi}^{1}=\mathbf{x}_{\pi}^{1}\).) The relative feature importance of \(\bar{\mathbf{x}}_{\pi}^{i}\) on \(y\) is identified by using random forest classification. The importance of a feature is determined by the total number of times it appears in the classification forest (often termed the _selection frequency_).
The most important \(\beta\) features of \(\bar{\mathbf{x}}_{\pi}^{i}\) are stored in:
\[\mathbf{z}^{i}=\left\langle\bar{x}_{G_{i}(1)}^{i},\bar{x}_{G_{i}(2)}^{i}, \ldots,\bar{x}_{G_{i}(\beta)}^{i}\right\rangle,\]
where \(G_{i}:\{1,2,\ldots,\beta\}\rightarrow\{1,2,\ldots,t\}\) is an injective function that sorts the features in \(\mathbf{z}^{i}\) in descending order of their importance.
**Result.** The \(\beta\) features considered most important by the RFMS are found in the following:
\[\mathbf{z}=\mathbf{z}^{m}=\left\langle\bar{x}_{G_{m}(1)}^{m},\bar{x}_{G_{m}(2 )}^{m},\ldots,\bar{x}_{G_{m}(\beta)}^{m}\right\rangle.\]
The aforementioned steps of the calculation are illustrated in Figure 1.
Figure 1: Steps of the RFMS.
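The reference implementation is in Julia [29]; the following Python sketch mirrors the iteration with scikit-learn, using impurity-based importances in place of the selection frequency described above. The default values of \(\alpha\) and \(\beta\) here are illustrative.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def rfms(X, y, alpha=500, beta=100, random_state=137):
    """Multiround screening: return indices of the beta features kept at the end.

    Note: the Julia reference ranks features by selection frequency in the forest;
    sklearn's impurity-based importances are used here as a stand-in.
    """
    rng = np.random.default_rng(random_state)
    order = rng.permutation(X.shape[1])          # random permutation pi
    kept = np.array([], dtype=int)               # z^{i-1}, empty at the start
    for start in range(0, len(order), alpha):    # iterate over the m subsets
        candidates = np.concatenate([order[start:start + alpha], kept])
        forest = RandomForestClassifier(n_estimators=200, random_state=random_state)
        forest.fit(X[:, candidates], y)
        top = np.argsort(forest.feature_importances_)[::-1][:beta]
        kept = candidates[top]                   # z^i: the beta most important so far
    return kept
```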
## Results and discussion
To compare the performance of the RFMS with off-the-shelf screening methods, we completed the following measurements:
1. We measured the maximum accuracy of three basic classifiers--\(k\)-nearest neighbors (\(k\)NN) [33, 34], support vector classifier (SVC) [35], and random forest (RF) [27]--on the full feature set by using \(n\)-fold cross-validation. The optimal parameters of the classifiers were identified via a grid search.
2. We performed screening by using four different methods (including our method), thus resulting in the requested number of screened features (from 10 to 500) per method. The tested screening methods included principal component analysis (PCA) [36, 37], factor analysis (FA) [38, 39], \(k\)-best [40], and RFMS.
3. We measured the maximum accuracy of the three classifiers on each of the screened feature sets by using \(n\)-fold cross-validation.
4. For every step above, we also measured the CPU usage.
Note that methods based on neural networks are legally restricted to prevent the restoration of original signatures. Therefore, we did not utilize these methods as a basis for comparison. The highest classification accuracies for each combination, along with their screening and fitting times, are summarized in Table 1. The optimized hyperparameters that were used during the application of the RFMS method can be found in the Supplementary information.
Based on the results, the RFMS and FA methods outperformed both PCA and \(k\)-best screening in accuracy. The highest accuracy was achieved by using the RFMS-SVC and FA-RF pairs (61.4%); however, the latter combination required considerably lower screening time. Notably, depending on the persistence of the features (see, e.g., [41]), the screening was performed relatively infrequently in comparison with the fitting procedure, in which the combination comprising RFMS proved to be relatively fast. Furthermore, in exchange for a slower screening procedure, RFMS offers several advantages over the FA method. These advantages are detailed below.
**Potential cost reduction in feature computation.** To use FA on an incoming sample, its full feature set must be computed before the transformation can be applied. The trained model only works on the transformed feature set. In contrast, the output of RFMS is a transformation-free subset of the original feature set. This facilitates the interpretation of the resulting features; in addition, once RFMS has finished, and we have the set of optimal features, only these features need to be computed on any further incoming samples. This could be a significant factor in saving on cost and time in a production system.
**Suitability for several classifiers.** Although the combination of FA and RF resulted in a high accuracy and low screening time, the accuracy of the same FA output with SVC and \(k\)NN classifiers produced significantly weaker results (accuracy of 42% and 10%, respectively). However, for the RFMS output, SVC performed slightly better than RF (just as well as the FA-RF combination), and even the accuracy of the \(k\)NN classifier at 38.1% was much closer to the top performers.
Table 1: Classification results on the 6 400\(\times\)10 000 dataset for three basic classifiers and various reduction algorithms. _(a)_ Only the best accuracy among all of the parameters is reported. _(b)_ Screening times are the CPU times of the feature screening step and correspond to the best accuracy shown above. _(c)_ Fitting times are defined as the CPU times after the reduction step and correspond to the best accuracy shown above.
**Robustness.** If we look beyond the highest accuracy of each combination and observe how the accuracy changes as the hyperparameters are adjusted, we can conclude that FA is quite sensitive. If we reduce the number of screened features (components) from 500 to 250, the highest achievable accuracy drops to 33.1%. A further reduction to 125 results in an accuracy of only 25%. A similar performance drop is observable if we begin to increase the number of features from 500. However, with RFMS, a reduction in the number of screened features from 500 to 200 only slightly reduces the best accuracy to 60.8%, and with a further reduction to 100, the accuracy is still 55.4%. We observed this behavior with high probability when the degrees of freedom of the data were well defined, but the FA was requested to produce fewer features.
Figure 2 summarizes both trends on a single plot, thus demonstrating how the highest achievable accuracy converges to its global optimum as the number of screened features increases. Note that the deviation from the plotted accuracy values with the randomization of the selection and measurement process is negligible.

Figure 2: Convergence of highest accuracy as a function of the number of screened features (components). Accuracy is measured on a scale of 0-1. RFMS converges to the optimum much more quickly than FA.
In addition, by adjusting the RFMS hyperparameters, the screening time can be significantly reduced without compromising the classification accuracy. For example, with the right combination, the screening time can be decreased to 2 143 s (merely 1/5th of the highest value in Table 1), while the achievable accuracy is still 60%. The fastest run in our test occurred for 1 738 s (15% of the longest screening time), and even that output could achieve a 57.3% accuracy (93.4% of the overall highest accuracy).
**Performance on proprietary datasets.** We have extensively used RFMS on our own proprietary biometric feature sets; although we cannot publicly share these datasets, we can share our experiences. We found that the FA-RF pair typically performs worse than the RFMS-RF combination on real feature sets. In one particular case, we trained both screening methods on a dataset of 10 000 classes, 81 000 samples, and 18 700 input features and targeted 200 output features. We subsequently measured the performance of the screened features by using a disjunct dataset of 44 classes and 58 000 samples (as well as the same number of features). The best classification accuracy that we could obtain on an FA-transformed feature set was approximately 82%, while the RFMS-filtered output could reach classification rates as high as 93%, albeit with a significantly longer screening time (both values were measured with 5-fold cross-validation). However, given the sensitive and proprietary nature of the dataset, we cannot provide hard evidence for this claim.
## Conclusions and future work
Research on feature screening has grown rapidly in recent years; however, screening ultralarge, multiclass data is still in its infancy. To narrow this gap in the research, we presented a novel method known as random forest-based multiround screening (RFMS) that can be effectively applied in such circumstances. Due to the fact that ultrahigh-dimensional, multiclass data are typically encountered in biometrics, the RFMS was benchmarked on a synthetic feature space that imitates the key properties of a real (private) signature dataset. Based on the results, the RFMS is on par with industry-standard feature screening methods, and it also possesses many advantages over these methods due to its flexibility and robustness, as well as its transformation-free operation. The Julia package that implements RFMS is publicly available on GitHub[29].
The difference in maximum accuracy that was achieved on real and synthetic data suggests that the synthetic data generator used for the tests does not yet reproduce all of the properties of real data that challenge feature screeners; this is especially true for factor analysis. Therefore, it would be important to explore the properties of real data that cause this difference and to further develop _BiometricBlender_ in this direction, which could subsequently enable more realistic tests.
To further develop the RFMS method, the following future works are suggested:
1. Filter highly correlated variables in every iteration just before classification, as this could improve the importance of the features that are proposed by the method.
2. Identify the means of automatically determining the number of important features to be retained per cycle, thus allowing for all of the important features to be kept and most unnecessary features to be dropped. This could improve both accuracy and computation time.
3. Reduce screening time by using more parallel computations (random forest building already utilizes multiple threads when available).
4. Replace random forest and the importance metrics with other less common (but potentially better performing) alternatives.
5. Hyperparameter optimization is typically not viable with brute force due to lengthy computation times. Handy visualization tools could provide useful hints for manual boosting.
6. Consider various types of elimination tournaments, such as going through the input several times or using alternative scoring like the Elo or Glicko[42] systems. This may further improve accuracy when some information is nontrivially distributed in multiple "entangled" features.
## Supplementary information
RFMS was based on a Julia package that is publicly available on GitHub[29]. Its hyperparameters have been optimized via a grid search to identify a combination that produces the highest classification accuracy, as well as to observe the effect of changing the hyperparameters on the outcome.
In all cases, a fixed random seed of 20 230 125 was used to make the process deterministic. A fixed value of 0.7 was set for the partial-sampling parameter. Finally, 100 random features were added to the mix before screening as _canaries_. If any of these features had appeared in the final set of screened features on the output, we could have been confident that any less important features were simply noise. However, none of our total 3 969 measurements (539 screening configurations combined with 9 different classifiers, minus the contradicting combinations) stumbled upon a random feature among the screened ones; therefore, we were confident that the screening process identified truly relevant and meaningful features.
Table S1 summarizes the best four hyperparameter combinations, one for each of the three tested classifiers, plus one that produced the smallest screening time.
## Data availability
The applied feature space was compiled by using the _BiometricBlender_ data generator[31], which is publicly available on GitHub[32].
|
2303.09034 | Unsupervised Facial Expression Representation Learning with Contrastive
Local Warping | This paper investigates unsupervised representation learning for facial
expression analysis. We think Unsupervised Facial Expression Representation
(UFER) deserves exploration and has the potential to address some key
challenges in facial expression analysis, such as scaling, annotation bias, the
discrepancy between discrete labels and continuous emotions, and model
pre-training. Such motivated, we propose a UFER method with contrastive local
warping (ContraWarping), which leverages the insight that the emotional
expression is robust to current global transformation (affine transformation,
color jitter, etc.) but can be easily changed by random local warping.
Therefore, given a facial image, ContraWarping employs some global
transformations and local warping to generate its positive and negative samples
and sets up a novel contrastive learning framework. Our in-depth investigation
shows that: 1) the positive pairs from global transformations may be exploited
with general self-supervised learning (e.g., BYOL) and already bring some
informative features, and 2) the negative pairs from local warping explicitly
introduce expression-related variation and further bring substantial
improvement. Based on ContraWarping, we demonstrate the benefit of UFER under
two facial expression analysis scenarios: facial expression recognition and
image retrieval. For example, directly using ContraWarping features for linear
probing achieves 79.14% accuracy on RAF-DB, significantly reducing the gap
towards the full-supervised counterpart (88.92% / 84.81% with/without
pre-training). | Fanglei Xue, Yifan Sun, Yi Yang | 2023-03-16T02:09:47Z | http://arxiv.org/abs/2303.09034v1 | # Unsupervised Facial Expression Representation Learning
###### Abstract
This paper investigates unsupervised representation learning for facial expression analysis. We think Unsupervised Facial Expression Representation (UFER) has the potential to benefit facial expression analysis with regard to some critical problems, e.g. scaling, annotation bias, the gap between discrete annotations and continuous emotion expressions, and model pre-training. Such motivated, we propose a UFER method with contrastive local warping (ContraWarping), which leverages the insight that the emotional expression is robust to current global transformations (affine transformation, color jitter, etc.) but can be easily changed by random local warping. Therefore, given a facial image, ContraWarping employs some global transformations and local warping to generate its positive and negative samples and sets up a novel contrastive learning framework. Our in-depth investigation shows that: 1) the positive pairs from global transformations may be exploited with general self-supervised learning (e.g. BYOL) and already bring some informative features, and 2) the negative pairs from local warping explicitly introduce expression-related variation and further bring substantial improvement. Based on ContraWarping, we demonstrate the benefit of UFER under two facial expression analysis scenarios: facial expression recognition and image retrieval. For example, directly using ContraWarping features for linear probing achieves 79.95% accuracy on RAF-DB, significantly reducing the gap towards the full-supervised counterpart (89.18% / 84.81% with/without pre-training).
## 1 Introduction
Facial expression is one of the most natural ways for humans to express their emotions by moving their facial muscles [11]. Facial Expression Analysis (FEA) aims at automatically analyzing the emotion from facial images and has wide applications in various domains, such as driver fatigue monitoring, virtual reality, human-computer interaction systems, _etc._ In the last decades, FEA has made great progress benefiting from deep learning methods [3, 43, 37, 36, 20, 39]. However, almost all of these methods rely on supervised learning, which requires large-scale and high-quality labeled datasets. Such datasets are scarce and expensive to obtain for FEA, which limits the performance of deep learning methods that can benefit from scaling up the training data. In addition, different datasets may have considerable annotation bias, leading to supervision conflict for joint training [43]. Therefore, in this paper, we are interested in unsupervised representation learning for facial expression analysis.
Figure 1: The motivation of our ContraWarping. We propose a random local warping to mimic facial muscle movements. 1st row: some movements may change the original expression (neutral) to some other predefined categories (sad, angry, and pitiable). 2nd row: some movements result in unreal expressions, but they do change the expression (_e.g._ action units) to some extent as well. Since random local warping does not rely on human annotation, we utilize it for unsupervised facial expression representation learning to learn expression-related features.

Besides the advantage of strong scaling capability, we think Unsupervised Facial Expression Representation (UFER) has the potential to benefit automatic facial expression analysis in more aspects, such as the gap between discrete annotations and continuous emotion expressions, and model pre-training. We explain these two aspects below:
\(\bullet\) The gap between discrete annotations and continuous emotion expressions. One of the most popular FEA tasks is facial expression recognition (FER). It typically categorizes facial emotion into several (e.g. 7) classes. Such discrete categorization is not consistent with the continuous variation of facial expressions. In some realistic facial expression analysis tasks (e.g. expression retrieval and photo album summarization), a continuous feature space is superior to a discrete one [34]. UFER does not need human annotations and naturally bridges the gap between discrete annotations and continuous emotion expressions.
\(\bullet\) Model pre-training is critical for facial expression analysis. For example, most facial expression recognition methods based on deep learning use pre-trained weights on MS1M [14] or ImageNet [12] for model initialization. Without these pre-trained weights, the performance drops significantly. However, pre-training on ImageNet classification or face recognition deviates far from the objective of facial expression analysis. For example, in order to identify the same person under different scenes, an MS1M pre-trained model should focus on identity-related features and suppress the expression-related features. In contrast, we believe using UFER for model pre-training is likely to achieve a better effect.
Such motivated, we propose a UFER method (ContraWarping) with contrastive local warping, inspired by recent generic self-supervised learning (SSL) methods based on contrastive learning (e.g. BYOL [13], SimSiam [7]). Generally, these contrastive SSL methods generate positive pairs from different views of the same image, and negative pairs from different images (some SSL methods do not have negative pairs). Using contrastive learning, they train a deep feature space where the positive samples are close to each other and the negative samples are far away.
Based on these general contrastive SSL methods, the key insight of our ContraWarping is: the emotion expression is robust to currently widely-used data augmentations (marked as global transformations) like affine transformation, color jitter, etc., but can be easily changed by random local warping. Therefore, given a facial image, ContraWarping employs some global transformations and local warping to generate its positive and negative samples, respectively. Based on these triplet samples, we set up a novel contrastive learning framework. Specifically, given a face image, we randomly select a region and generate local warping by moving the content in a random direction with a random distance. We find that such random local warping 1) can sometimes simulate realistic facial muscle movements and roughly change the facial expression to another realistic one (the first row in Fig. 1), and 2) sometimes changes the facial expression to some unreal (and ridiculous) expression (the second row in Fig. 1). In either case, the locally-warped face is likely to have a different expression and thus becomes a negative sample for the original image. Given these negative samples, ContraWarping pushes them far away from the original image while pulling the positive samples close.
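A minimal sketch of such a warping operator is given below; the Gaussian-falloff displacement and the parameter ranges are illustrative assumptions, not the paper's exact implementation.

```python
import cv2
import numpy as np

def random_local_warp(img, radius_range=(15, 40), max_shift=12, rng=None):
    """Shift one random circular region in a random direction (a sketch of the idea,
    not the paper's exact warping operator)."""
    if rng is None:
        rng = np.random.default_rng()
    h, w = img.shape[:2]
    cx, cy = rng.integers(0, w), rng.integers(0, h)        # random region centre
    r = float(rng.integers(*radius_range))                 # region radius
    dx, dy = rng.uniform(-max_shift, max_shift, size=2)    # random direction and distance
    xs, ys = np.meshgrid(np.arange(w, dtype=np.float32),
                         np.arange(h, dtype=np.float32))
    # Gaussian falloff keeps the warp local and its boundary smooth
    g = np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2.0 * (r / 2.0) ** 2))
    map_x = (xs - dx * g).astype(np.float32)               # inverse displacement for remap
    map_y = (ys - dy * g).astype(np.float32)
    warped = cv2.remap(img, map_x, map_y, interpolation=cv2.INTER_LINEAR,
                       borderMode=cv2.BORDER_REFLECT)
    return warped, (dx * g, dy * g)                        # field reused for landmarks
```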
To further enhance ContraWarping, we incorporate a facial landmark detection sub-task. We use an existing landmark detection method to extract pseudo-landmarks for the given face and perform the same warping operation to generate landmarks for the warped face. Since our goal is not to predict precise landmarks but to help the model find the moving muscles, no refined human-labelled landmarks are needed here. Thus, our proposed framework can extract expression-related features in the pre-training stage without any annotations.
Based on ContraWarping, we conduct in-depth investigations on UFER and reveal that: 1) the positive pairs from global transformations can be exploited with general self-supervised learning (_e.g._, BYOL) and already bring some informative features; and 2) the negative pairs from local warping explicitly introduce expression-related variation and bring further substantial improvement. Experiments on facial expression recognition and retrieval tasks validate the effectiveness of ContraWarping.
To summarize, our contributions are as follows:
1. We propose a novel framework, ContraWarping, for unsupervised facial expression representation learning. It leverages random local warping to simulate facial muscle movements and generate informative negative pairs for contrastive learning.
2. We introduce a facial landmark detection sub-task based on pseudo labels to help the model identify the expression-changing muscles, significantly improving the k-NN performance.
3. Based on ContraWarping, we comprehensively investigate UFER against the supervised counterpart and reveal its strong potential for facial expression analysis.
Codes and pre-trained weights will be public at [https://github.com/youqingxiaozhua/ContraWarping](https://github.com/youqingxiaozhua/ContraWarping).
## 2 Related Works
### Facial Expression Recognition
FER methods extract informative, expression-related features from facial images and then adopt a classifier (_e.g._, SVM [9]) to classify the image into expression categories. In the past decades, many hand-crafted filters
were proposed to extract texture-based features, such as LBP [30], Gabor [25], HOG [10], and SIFT [27]. These methods can handle in-the-lab databases, but fail to extend to in-the-wild scenes due to various poses and occlusions.
Recently, benefiting from large-scale in-the-wild databases [22, 26, 1], learning-based methods have made great progress for FER. Deep convolutional neural networks (CNNs) have a strong ability to automatically extract discriminative features from images with supervision from ground-truth labels. Li _et al_. [22] proposed DLP-CNN to further enhance the discriminative power of deep features with a locality preserving loss. Cai _et al_. [3] also proposed an island loss to reduce the intra-class variation and enlarge the inter-class differences. Ruan _et al_. [28] proposed the FDRL method to first decompose facial features into action-aware latent features and then reconstruct the expression-specific features.
Those methods extract holistic features from the whole face, while other methods try to find expression-related facial areas to enhance recognition. Zhong _et al_. [47] proposed a two-stage multi-task sparse learning framework to find common patches shared by all expressions and specific patches that discriminate a certain expression. For the first time, they showed that only a few facial muscles (areas) are discriminative for FER. Happy _et al_. [16] proposed a method to extract salient patches containing discriminative features with the help of facial landmarks. FER in the wild needs to handle unconstrained conditions like partial occlusion and various poses. Li _et al_. [23] proposed a gate-based method named ACNN, which utilizes the attention mechanism to compute an adaptive weight for every facial region. With the help of the proposed gate unit, ACNN can shift the attention from occluded patches to other unoccluded ones. RAN [37] utilizes self-attention and relation-attention modules to extract a compact face representation from several face regions. Most recently, TransFER [39] was proposed to utilize Transformer [33] and multiple attention maps to learn relation-aware local features.
Collecting annotations for large-scale FER datasets is challenging due to high ambiguity and subjectivity. Therefore, another line of FER research is learning with uncertain or noisy labels. Zeng _et al_. [43] first proposed the IPA2LT framework to learn from inconsistently labelled FER datasets. After that, many methods [36, 31, 45, 46] were proposed to decrease uncertainty. Wu _et al_. [38] studied a new problem, learning with open-world noisy data, and proposed a graph-based method to solve it.
Unlike these methods, we aim to learn general expression-aware features without any clean or noisy labels. We randomly generate various facial images with simulated muscle movements and push the model to focus on these regions to extract expression-related features.
### Learning from Unlabeled Data
To reduce the dependency on high-cost annotated datasets, a large number of methods have been proposed to learn from unlabeled data. Among them, self-supervised learning methods have achieved great success in the last decade. He _et al_. [17] proposed the MoCo framework with a momentum encoder, which first outperformed supervised pre-training in some downstream tasks, indicating the great potential of contrastive learning. SimCLR [5] and MoCo V2 [6] further simplify this framework by removing the memory bank and adding a projection head after the rep
Figure 2: Overview of our ContraWarping framework. The model takes an input image and applies global transformations to produce \(x_{1}\) and \(x_{2}\), which should be similar in feature space. Then, it applies random local warping to \(x_{1}\) to generate \(x_{3}\), which simulates muscle movements. The model learns to push \(x_{3}\) away from \(x_{1}\) and \(x_{2}\) in feature space, thus capturing expression-related features without human annotation. Additionally, a landmark detection sub-task is introduced to help the model identify the warped areas.
resentations. Without negative samples or a momentum encoder, BYOL [13] and SimSiam [7] make the framework even simpler and cleaner.
Some methods use similar ideas to extract general facial representations from unlabeled data. SSSPL [32] adopts three auxiliary tasks (patch rotation, segmentation and classification) to learn the spatial-semantic relationship. He [19] utilized a 3D reconstruction task as a self-supervised bypass to enhance face recognition. TCAE [24] and FaceCycle [44] utilize multiple encoders and decoders to disentangle and reconstruct pose, expression, or identity features to learn from unlabeled data. TCAE also requires video samples to provide variations in expression and pose [24]. As for the FER task, very limited related research focuses on this topic. CRS-CONT [21] adapts the self-supervised learning framework to FER. However, it still needs coarse-grained labels to generate expression-specific positive and negative sample pairs. Differently, we propose a random warping strategy to simulate the emotion expression process - muscle movements - which can easily generate various expression-specific negative samples without extra encoders or decoders. With our proposed ContraWarping, many existing self-supervised methods can be utilized to extract expression features without any labelled data.
## 3 Method
### Overview
As has been discussed before, contrastive learning methods could learn without labels by producing highly similar representations for different views of the same image. Specifically, as illustrated in Fig. 2, the input image is augmented with random global transformations to generate two different views \(x_{1}\) and \(x_{2}\). Following BYOL's example, \(x_{1}\) and \(x_{2}\) are passed through the backbone and projector to generate the projected features \(z_{1}\) and \(z_{2}\). An additional predictor is further utilized to generate \(p_{1}\) to prevent collapse. Since \(x_{1}\) and \(x_{2}\) are from the same image and global transformations do not affect the muscle movements, \(p_{1}\) and \(z_{2}\) should be very similar to each other.
To learn expression-related features in the pre-training stage, we propose random warping, an unsupervised way to simulate facial muscle movements. We use it to warp \(x_{1}\) into \(x_{3}\), making \(x_{3}\) have a different expression but the same identity and pose as \(x_{1}\). Similar to \(z_{2}\), \(z_{3}\) is extracted and projected from \(x_{3}\) with the same backbone and projector. To learn expression-related features, we require \(z_{3}\) and \(p_{1}\) to have a low similarity, since they correspond to different expressions. We also find that an additional landmark detection task helps the model focus on the warped (expression-changed) areas. With these two expression-related pretext tasks, our proposed ContraWarping empowers current self-supervised learning methods to extract expression-related features that benefit downstream FER tasks.
### Face Warping
To bring expression information into the pre-training phase, we adopt a simple face-warping method [15] to simulate how facial muscles move when expressing emotions. A facial muscle movement can be seen as the muscle pulling a small area of the face around it over a short distance, and we simulate this process with the face-warping method. First, we define the warping starting point \(\vec{c}\) and ending point \(\vec{m}\). To simplify, we assume the warping only takes effect in a circular area with centre \(\vec{c}\) and radius \(r_{max}\). All pixels in the circle move in the same direction as \(\vec{c}\) to \(\vec{m}\), but pixels around \(\vec{c}\) move farther and pixels near the circumference move less, making the warping result smooth. For any point \(\vec{x}\) in the circle, its content is moved from a source point (denoted as \(\vec{u}\)). According to [15], the source point coordinate vector \(\vec{u}\) can be calculated by:
\[\vec{u}=\vec{x}-\left(\frac{r_{max}^{2}-\left|\vec{x}-\vec{c}\right|^{2}}{r_ {max}^{2}-\left|\vec{x}-\vec{c}\right|^{2}+\left|\vec{m}-\vec{c}\right|^{2}} \right)^{2}(\vec{m}-\vec{c}) \tag{1}\]
With Eq. (1), we can move one "muscle" efficiently. However, in most cases, emotions are expressed by multiple muscles. To simulate complex expressions, we repeat the above local warping \(n\) times with random starting points, radii and moving distances. This allows us to generate various expressions of the same person and background (as shown in Fig. 1) without any supervision information.
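For concreteness, the warp of Eq. (1) admits a short NumPy implementation via inverse mapping; the sketch below is our illustration (the nearest-neighbour sampling and the parameter ranges, loosely following Sec. 4.3, are assumptions rather than the authors' code).

```python
import numpy as np

def local_warp(img, c, m, r_max):
    """One local warp following Eq. (1): pixels inside the circle (c, r_max)
    are pulled along the direction c -> m; the target pixel at x takes its
    value from the source point u (nearest-neighbour sampling)."""
    h, w = img.shape[:2]
    c, m = np.asarray(c, float), np.asarray(m, float)
    ys, xs = np.mgrid[0:h, 0:w]
    pts = np.stack([xs, ys], axis=-1).astype(float)   # (h, w, 2), (x, y) order
    d2 = np.sum((pts - c) ** 2, axis=-1)              # |x - c|^2
    mask = d2 < r_max ** 2
    ratio = np.zeros_like(d2)
    num = r_max ** 2 - d2[mask]
    ratio[mask] = (num / (num + np.sum((m - c) ** 2))) ** 2
    src = pts - ratio[..., None] * (m - c)            # u = x - ratio * (m - c)
    sx = np.clip(np.rint(src[..., 0]), 0, w - 1).astype(int)
    sy = np.clip(np.rint(src[..., 1]), 0, h - 1).astype(int)
    out = img.copy()
    out[mask] = img[sy[mask], sx[mask]]
    return out

def random_warps(img, n=2, rng=np.random.default_rng(0)):
    """Repeat the warp n times with random centres, directions and radii."""
    for _ in range(n):
        c = rng.uniform(50, 150, size=2)              # random starting point
        m = c + rng.uniform(-15, 15, size=2)          # random short move
        img = local_warp(img, c, m, r_max=rng.uniform(50, 80))
    return img
```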
During training, we generate warped facial images on-the-fly from training samples. We show some examples of warped images from our data loader in Fig. 4. As shown, \(x_{3}\) is warped from \(x_{1}\); the warped areas are marked with red arrows for easy identification. Some warped samples are obvious, for example, the slightly opened mouth in the
Figure 3: Illustration of the warping process. As point \(\vec{c}\) moves to a new position \(\vec{m}\), every pixel in the circle should move in the same direction. For example, point \(\vec{u}\) moves to \(\vec{x}\). That is, the pixel value of the warped face at position \(\vec{x}\) can be calculated by finding the before-warping position \(\vec{u}\).
fourth row. Most warping areas are subtle, such as the raised or lowered eyebrow in the first and third rows and the upward mouth in the second and last rows. We assume \(x_{1}\) and \(x_{3}\) have a low similarity. Therefore, obvious warping could help the model converge faster, while subtle warping encourages the model to detect fine-grained muscle movements, which benefit downstream FER tasks.
On the other hand, although our method does not require \(x_{3}\) to change to another emotion, some random warping operations already achieve this. For example, the upward mouth in the second row turns the original _happy_ face into _contempt_, and the drooping eyebrow in the third row changes a _sad_ face into a slightly _angry_ one. These examples show that our warping method can really modify muscle movements and simulate various expressions. We hope this will inspire more interesting methods for FER in the future.
### Landmark Detection
To encourage the model to better focus on the warped part of the face, we add a simple landmark detection head with several deconvolutional layers [41] to the framework, as shown in Fig. 2. The landmark detector takes feature maps from \(x_{1}\) and \(x_{3}\) as input and predicts the corresponding landmark points, respectively.
Specifically, the landmark detection model is pre-trained on the 300-W [29] dataset and is used to directly predict pseudo landmarks for the MS1M dataset. The model is lightweight and uses HRNetV2-W18 [35] as its backbone. It is worth noting that our goal is not to predict accurate landmarks but to help the model find the "moving muscles" areas as a pretext task. With the predicted pseudo landmarks, we perform the same warping as for the image \(x_{1}\) to generate the pseudo landmarks of \(x_{3}\). The MSE loss is adopted as the criterion. To better push the model to focus on the variable parts, we set the loss weight of unchanged landmarks to 0.1 and that of changed ones to 1.
### Joint Loss Function
In our framework, the backbone is jointly trained with the contrastive loss and the landmark detection loss. For contrastive loss, we adopt the symmetrical cosine similarity following SimSiam [7] and BYOL [13]:
\[sim(i,j)=\frac{1}{2}\left(\frac{p_{i}}{\|p_{i}\|}\cdot\frac{z_{j}}{\|z_{j}\|} +\frac{p_{j}}{\|p_{j}\|}\cdot\frac{z_{i}}{\|z_{i}\|}\right) \tag{2}\]
Since \(x_{1}\) and \(x_{2}\) are two different views of the same image, they should have a very high similarity:
\[L_{cont12}=-sim(1,2) \tag{3}\]
As for \(x_{3}\), it is warped from \(x_{1}\). We hope it is dissimilar from \(x_{1}\) in the expression space. However, since only a small region of the face is changed by warping, we do not want to make them too dissimilar, to prevent confusing the model training. Therefore, we set a target similarity (denoted as \(s_{t}\)) as a hyper-parameter:
\[L_{cont13}=max\left(sim(1,3),\;s_{t}\right) \tag{4}\]
For landmark detection, the weighted MSE loss is applied to both \(x_{1}\) and \(x_{3}\):
\[L_{landmark}=\frac{1}{n}\sum_{i=1}^{n}MSE(w_{i}\cdot pred_{i},\;w_{i}\cdot pseudo _{i}) \tag{5}\]
where \(n\) is the number of landmark points, \(w_{i}\) is the weight for the corresponding point, \(pred_{i}\) and \(pseudo_{i}\) are predicted and the pseudo landmarks, respectively. Then, the joint loss can be formulated as:
\[L=L_{cont12}+L_{cont13}+\lambda(L_{landmark1}+L_{landmark3}) \tag{6}\]
where \(\lambda\) is a hyper-parameter to balance the losses.
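For reference, Eqs. (2)-(6) can be transcribed into a few lines of PyTorch; the following sketch is our reading of the losses (the stop-gradient on \(z\), standard in SimSiam/BYOL, and all tensor layouts are assumptions, not the authors' implementation).

```python
import torch
import torch.nn.functional as F

def sim(p_i, z_i, p_j, z_j):
    """Symmetric cosine similarity of Eq. (2); gradients are stopped on the
    projections z, as is common in SimSiam/BYOL (an assumption here)."""
    return 0.5 * (F.cosine_similarity(p_i, z_j.detach(), dim=-1)
                  + F.cosine_similarity(p_j, z_i.detach(), dim=-1)).mean()

def joint_loss(p, z, lm_pred, lm_pseudo, lm_w, s_t=0.6, lam=1.0):
    """p, z: dicts of predictions/projections for branches 1, 2, 3;
    lm_*: predicted/pseudo landmarks and per-point weights for x1 and x3."""
    l_cont12 = -sim(p[1], z[1], p[2], z[2])              # Eq. (3): pull x1, x2 close
    l_cont13 = torch.clamp(sim(p[1], z[1], p[3], z[3]),  # Eq. (4): push x3 away,
                           min=s_t)                      # bounded below by s_t
    # Eq. (5): weighted MSE; warped landmarks get weight 1, unchanged ones 0.1.
    l_land = sum(F.mse_loss(lm_w[k] * lm_pred[k], lm_w[k] * lm_pseudo[k])
                 for k in (1, 3))
    return l_cont12 + l_cont13 + lam * l_land            # Eq. (6)
```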
Figure 4: Visualization of the original training images and the three input branches \(x_{1}\), \(x_{2}\) and \(x_{3}\). \(x_{3}\) is randomly warped from \(x_{1}\), and the warped area is marked with a red arrow. (Best viewed zoomed in.)
## 4 Experiments
### Settings
**Evaluation tasks.** We evaluate ContraWarping on two facial expression analysis tasks: facial expression recognition and facial expression retrieval. We refer to them as "recognition" and "retrieval" for brevity. The recognition task is the most popular FEA task and requires predicting discrete categories, while the retrieval task compares expressions in a continuous feature space to find the closest one.
On the recognition task, we follow the standard protocols in SSL and use linear evaluation and k-NN evaluation to measure the quality of extracted features. Specifically, we train one fully-connected layer (or perform k-NN classification) based on features from the **frozen** backbone.
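A minimal version of the k-NN protocol over frozen features, using scikit-learn and assuming cosine similarity via L2-normalisation, could look as follows.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def knn_eval(train_feats, train_labels, test_feats, test_labels, k=10):
    """k-NN top-1 accuracy on features from the frozen backbone."""
    # L2-normalise so that Euclidean nearest neighbours match cosine similarity.
    train_feats = train_feats / np.linalg.norm(train_feats, axis=1, keepdims=True)
    test_feats = test_feats / np.linalg.norm(test_feats, axis=1, keepdims=True)
    clf = KNeighborsClassifier(n_neighbors=k).fit(train_feats, train_labels)
    return clf.score(test_feats, test_labels)
```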
On the retrieval task, we evaluate ContraWarping under both direct deployment and fine-tuning scenarios. In the first scenario, we directly use the unsupervised facial expression representation learned from ContraWarping to extract deep features. In the second scenario, the pre-trained models are fine-tuned on the retrieval training set.
### Datasets
**Dataset for training ContraWarping.** We train ContraWarping on **MS1M**[14], a large-scale face recognition dataset with about 3.8M facial images from popular celebrities. Most recognition methods [34, 36, 31, 39, 46] use this database for model pre-training. The difference between their pre-training and ours is that they use the supervised face identification task with face ID annotations, while ContraWarping is an SSL method.
**Dataset for facial expression recognition**. For the recognition task, we use two popular datasets, _i.e._, RAF-DB [22] and AffectNet [26]. RAF-DB [22] is a large-scale FER dataset with 30,000 facial images labelled into seven basic or compound expression categories. Every facial image in this dataset is manually labelled about 40 times to ensure reliability. AffectNet [26] is one of the most challenging FER datasets. It consists of about one million facial images collected by searching expression-related keywords on the Internet. Following [20], about 280,000 and 3,500 facial images labeled with the seven basic categories are adopted for training and testing.
**Dataset for facial expression retrieval**. For the retrieval task, we use FEC [34], a large-scale expression comparison dataset in which annotators specify the most similar image pair in each triplet. Since the dataset only releases image URLs and many URLs are broken, we remove the unavailable images, finally collecting about 358K and 28K triplet samples for training and testing, respectively. The triplet prediction accuracy based on extracted features is reported.
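For clarity, the triplet prediction accuracy can be computed as below; this sketch reflects our reading of the protocol, with the annotation encoded as the index of the odd-one-out image in each triplet.

```python
import numpy as np

def triplet_accuracy(emb_a, emb_b, emb_c, odd_one_out):
    """FEC-style triplet accuracy: the two faces that are NOT the annotated
    odd-one-out should be the closest pair in feature space.

    odd_one_out[i] in {0, 1, 2} says which of (a, b, c) differs in triplet i."""
    d_bc = np.linalg.norm(emb_b - emb_c, axis=1)   # pair excluding a
    d_ac = np.linalg.norm(emb_a - emb_c, axis=1)   # pair excluding b
    d_ab = np.linalg.norm(emb_a - emb_b, axis=1)   # pair excluding c
    predicted_odd = np.argmin(np.stack([d_bc, d_ac, d_ab]), axis=0)
    return np.mean(predicted_odd == odd_one_out)
```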
### Implementation Details
Unless otherwise specified, we utilize a ResNet-18 [18] as our backbone. For random warping, we randomly select the starting point \(\vec{c}\) from a uniform distribution \(U(50,150)\). Similarly, the moving step \(|\vec{m}-\vec{c}|^{2}\in U(100,200)\), the radius \(r_{max}\in U(50,80)\), and the strength from \(U(150,300)\). These parameters are for images of size 224 \(\times\) 224. The repeat time \(n\) is empirically set to 2. For landmark detection, 68 points are inferred from a pre-trained landmark detection model, and the heatmap is generated with a sigma of 1.5. Other settings for the pre-training and evaluation are described individually below:
**Pre-training.** Our proposed pretext tasks can be combined with various self-supervised learning (SSL) methods. We keep the same settings as the original SSL method by default, except that no random crop is applied to \(x_{1}\), so that landmark detection remains well-defined. We did not tune the learning rate or batch size. Specifically, for SimSiam, the SGD optimizer with a learning rate of 0.05 is adopted for a mini-batch of 256. The model is pre-trained for 50 epochs on 10% of MS1M for ablation studies and for 20 epochs on 100% of MS1M for comparison with the state of the art. For BYOL, a big batch size (4096) with the LARS [40] optimizer is adopted, and the learning rate is set to 4.8. Since vanilla BYOL needs to pre-train for a long time (up to 1000 epochs), we pre-trained it for 50 epochs on 100% of MS1M to compare with SOTA methods.
### Ablation Studies
**Effectiveness of proposed modules.** In our framework, random warping and landmark detection are two pretext tasks that extract expression-related features for contrastive pre-training methods. To investigate the effect of these two tasks, we perform an ablation study by pre-training on 10% of the MS1M images and evaluating on RAF-DB with three protocols. Since BYOL needs a very large batch size, which is demanding on hardware, we conduct these experiments with SimSiam, which works with a more resource-friendly batch size of 256.
| Method | Linear | 10-NN | 30-NN |
| --- | --- | --- | --- |
| SimSiam | 69.95 | 53.10 | 52.54 |
| SimSiam + RW\(^{*}\) | 71.71 | 53.98 | 53.16 |
| SimSiam + RW\(^{\dagger}\) | 72.00 | 54.53 | 54.89 |
| SimSiam + RW\(^{*}\) + LD | 73.66 | 55.87 | 56.71 |
| SimSiam + RW\(^{\dagger}\) + LD | **75.29** | **62.32** | **62.97** |

Table 1: Evaluation of our proposed pretext tasks with SimSiam on RAF-DB. The top-1 accuracy (%) is reported. **RW**: random warping, **LD**: landmark detection. RW\(^{*}\) denotes totally random warping and RW\(^{\dagger}\) denotes landmark-based random warping. See Sec. 4.4 for details.
The results are illustrated in Tab. 1. As we can see, vanilla SimSiam achieves a decent performance: 69.95% top-1 accuracy by only training a classifier with one FC layer. By applying random warping to the face image, the model can learn from synthetic pairs of different expressions and outperforms vanilla SimSiam by a significant margin. Specifically, the proposed random warping increases the linear evaluation performance from 69.95% to 72.00% and boosts the 10-NN and 30-NN performance to 54.53% and 54.89%, respectively. We also find that warping based on landmarks performs better than totally random warping, which will be further explored in the following section. With the help of landmark detection, the linear evaluation and 10-NN performance are further boosted to 75.29% and 62.32%. These experimental results demonstrate that our proposed random warping helps current contrastive frameworks learn expression-related features, and that the landmark detection task further helps the model focus on the moving areas. With these two pretext tasks, current SSL models can extract better representations for FER.
**Target similarity \(s_{t}\) between \(x_{1}\) and \(x_{3}\).** Our proposed ContraWarping utilizes random local warping to simulate facial muscle movements. The basic hypothesis is that the warped face (\(x_{3}\) in Fig. 2) has a different expression from the original face (\(x_{1}\) in Fig. 2). To distinguish different expressions, we push \(x_{1}\) and \(x_{3}\) to have a low similarity. However, except for the warped area, facial muscles in other regions share the same status as in the original \(x_{1}\). To prevent confusing the learning process, we set a target similarity \(s_{t}\) as a lower bound as described in Eq. (4). A lower \(s_{t}\) pushes the model to focus more on moving areas and ignore other regions, which harms its ability to pull \(x_{1}\) and \(x_{2}\) close. It is a trade-off between these two tasks.
Results with different \(s_{t}\) values are illustrated in Tab. 2. Surprisingly, we find that although we set \(s_{t}\) as a lower bound on \(sim(1,3)\), the observed \(sim(1,3)\) is still a little lower than \(s_{t}\). This may be because the penalty still applies when at least one sample in the mini-batch has a higher similarity, making the overall mean similarity lower. To better illustrate the learning result, we also report the mean \(sim(1,3)\) of the last mini-batch of the pre-training stage in Tab. 2. As we can see, the framework achieves the best performance (75.00% for linear evaluation) when \(s_{t}\) is set to 0.6 and the actual \(sim(1,3)\) is 0.45. When \(s_{t}\) is set to 0, the model tries to represent various permutations of muscle movements as orthogonal, which ignores the common areas and performs worst (68.58%) for linear evaluation and relatively poorly for k-NN classification. When \(s_{t}\) is set to a high value, _e.g._, 0.8, the model has less incentive to distinguish different muscle movements and cannot extract effective expression representations.
**Warping Position: Random vs. Landmark-based.**
As random warping is proposed to simulate facial muscle movements and change the expression, we hope it takes effect on physiological facial muscle areas. However, warping irrelevant areas, such as the forehead, cheek, or hair, may not affect the expression and may introduce false negative samples, which is harmful to the learning process. An intuitive remedy is to only perform warping around facial landmarks. Tab. 3 shows the performance comparison of totally random and landmark-based warping. The landmark-based method performs better under all three protocols. Moreover, landmark-based random warping is more helpful in boosting k-NN performance: increasing from 55.87% and 56.71% to 62.32% and 62.97% for k = 10 and 30, respectively, while bringing only a marginal improvement (0.29%) for linear evaluation.
Although the landmark-based method performs better, the totally random warping procedure performs comparably and is more flexible, as it applies to scenes where no landmarks are available. We hope both methods can inspire researchers to design more interesting works.
### Combining with various SSL methods.
Our random warping can be combined with various contrastive SSL methods. In principle, our proposed modules help existing SSL methods focus on the muscles that move during facial expressions, and better SSL methods can extract more robust features.
We conduct experiments with three SOTA SSL methods to investigate their compatibility with our proposed modules in Tab. 4. We also compare them with random initialization and supervised training on the MS1M dataset. As shown, SimSiam and BYOL outperform the supervised counterparts without any annotation. This is because the face recognition
| \(s_{t}\) | \(sim(1,3)\) | Linear | 10-NN | 30-NN |
| --- | --- | --- | --- | --- |
| 0 | -0.16 | 68.58 | 51.50 | 50.85 |
| 0.2 | 0.08 | 69.95 | 50.55 | 50.29 |
| 0.4 | 0.26 | 73.99 | 55.34 | 56.45 |
| 0.6 | 0.45 | **75.00** | **55.87** | **56.71** |
| 0.8 | 0.69 | 73.21 | 55.08 | 55.67 |

Table 2: Exploring the target similarity \(s_{t}\) between \(x_{1}\) and \(x_{3}\) on RAF-DB with SimSiam. For generality, the warping position is **randomly selected** in these experiments.
| Method | Linear | 10-NN | 30-NN |
| --- | --- | --- | --- |
| Random | 75.00 | 55.87 | 56.71 |
| Landmark-based | **75.29** | **62.32** | **62.97** |

Table 3: Comparison of random and landmark-based warping on RAF-DB with SimSiam.
pre-training restrains the model from learning expression features. By introducing expression-related tasks in the pre-training stage, we improve the performance of all three SSL methods. The improvement with MoCo V2 is less significant because it does not have an \(s_{t}\) hyper-parameter to balance the similarity between \(x_{1}\) and \(x_{3}\); we simply append the \(x_{3}\) features to the dictionary as fully negative examples. Still, we increase the linear evaluation accuracy of MoCo V2 by about 1%. Moreover, these results suggest that our method can benefit from stronger SSL methods to perform even better.
**Comparison with general facial representation learning methods.** Some methods [24, 4, 2] aim to extract universal facial representations with identity, pose, expression, and even landmark information, while our method focuses only on expression features. As shown in Tab. 5, our method outperforms these general methods by a big margin: under linear evaluation, it outperforms FaceCycle [4] by 8.13% with a Res-18 backbone and Flickr-Face [2] by more than 4% with Res-50, indicating a clear superiority over general methods.
**Comparison with state-of-the-art FER methods.** We also find that a simple baseline strategy, a plain Res-18 network with only random crop and flip as data augmentation, achieves performance comparable to state-of-the-art FER methods when combined with our ContraWarping pre-training. As illustrated in Tab. 6, the baseline performs similarly (89.18%) to RUL (88.98%) on RAF-DB and ranks second (64.94%) on AffectNet, demonstrating the great potential of our method to boost FER performance.
### Image Retrieval.
Image retrieval is another application that can benefit from continuous expression representations. For example, belly laughs and smiles may take place in different scenes, although they share the same basic category: happy. To investigate the effect of our proposed ContraWarping, we report the performance on the FEC database without and with fine-tuning. As shown in Tab. 7, our ContraWarping outperforms MS1M supervised weights significantly, from 34.78% to 39.78%. Even after fine-tuning, our method still performs better, reaching up to 78.31%. This indicates that our unsupervised approach can learn continuous expression features and benefit image retrieval.
## 5 Conclusion
In this paper, we propose a novel method (ContraWarping) for unsupervised facial expression representation (UFER) learning. The key point of ContraWarping is leveraging local warping to generate expressive variations of face images. We use contrastive learning and landmark detection as two pretext tasks to learn from the locally-warped images. In-depth investigations on expression recognition and retrieval tasks show that the ContraWarping representation has gained a considerable discriminative ability for facial expression analysis. Moreover, we demonstrate that ContraWarping can be used as an effective pre-training strategy that outperforms popular pre-training with the face identification pretext task.
| Method | Year | RAF-DB | AffectNet |
| --- | --- | --- | --- |
| IPA2LT [43] | 2018 | 86.77 | 57.31 |
| RAN [37] | 2020 | 86.90 | 59.50 |
| SCN [36] | 2020 | 87.03 | 60.23 |
| KTN [20] | 2021 | 88.07 | 63.97 |
| DMUE [31] | 2021 | 88.76 | 62.84 |
| RUL [45] | 2021 | 88.98 | 61.43 |
| Face2Exp [42] | 2022 | 88.54 | 64.23 |
| EAC [46] | 2022 | **89.99** | **65.32** |
| Baseline + Ours | 2023 | 89.18 | 64.94 |

Table 6: Comparison of our baseline strategy with current state-of-the-art FER methods.
| Method | Backbone | Linear | 10-NN | 30-NN |
| --- | --- | --- | --- | --- |
| TCAE [24] | 9-layer | 65.32 | 59.19 | 57.98 |
| FaceCycle [4] | 16-layer | 71.01 | 55.80 | 55.80 |
| Ours | Res-18 | **79.95** | **62.48** | **63.92** |
| Flickr-Face [2] | Res-50 | 80.70 | 60.01 | 60.36 |
| Ours | Res-50 | **84.64** | **66.72** | **68.06** |

Table 5: Comparison of our framework with other state-of-the-art general facial representation learning methods on RAF-DB.
Table 7: Comparison of different pre-trained weights on FEC.
2301.08553 | Optimality-preserving Reduction of Chemical Reaction Networks | Across many disciplines, chemical reaction networks (CRNs) are an established
population model defined as a system of coupled nonlinear ordinary differential
equations. In many applications, for example, in systems biology and
epidemiology, CRN parameters such as the kinetic reaction rates can be used as
control inputs to steer the system toward a given target. Unfortunately, the
resulting optimal control problem is nonlinear, therefore, computationally very
challenging. We address this issue by introducing an optimality-preserving
reduction algorithm for CRNs. The algorithm partitions the original state
variables into a reduced set of macro-variables for which one can define a
reduced optimal control problem from which one can exactly recover the solution
of the original control problem. Notably, the reduction algorithm runs with
polynomial time complexity in the size of the CRN. We use this result to reduce
reachability and control problems of large-scale protein-interaction networks
and vaccination models with hundreds of thousands of state variables. | Kim G. Larsen, Daniele Toller, Mirco Tribastone, Max Tschaikowski, Andrea Vandin | 2023-01-20T13:17:57Z | http://arxiv.org/abs/2301.08553v1 | # Optimality-preserving Reduction of Chemical Reaction Networks
###### Abstract
Across many disciplines, chemical reaction networks (CRNs) are an established population model defined as a system of coupled nonlinear ordinary differential equations. In many applications, for example, in systems biology and epidemiology, CRN parameters such as the kinetic reaction rates can be used as control inputs to steer the system toward a given target. Unfortunately, the resulting optimal control problem is nonlinear, therefore, computationally very challenging. We address this issue by introducing an optimality-preserving reduction algorithm for CRNs. The algorithm partitions the original state variables into a reduced set of macro-variables for which one can define a reduced optimal control problem from which one can exactly recover the solution of the original control problem. Notably, the reduction algorithm runs with polynomial time complexity in the size of the CRN. We use this result to reduce reachability and control problems of large-scale protein-interaction networks and vaccination models with hundreds of thousands of state variables.
## I Introduction
The interplay between control theory and systems biology is instrumental to gain insights into the dynamics of natural systems across different scales (e.g., [1]). In particular, the problem of controlling a biological system is relevant in applications such as smart therapeutics and biosensors [2]. Mathematically, this can be studied as the problem of controlling a formal chemical reaction network (CRN), whereby the biological system under study is modeled as a (finite) set of species that interact across a (finite) set of reaction channels. This representation admits both a stochastic interpretation in terms of a continuous-time Markov chain (CTMC), where discrete changes in the population levels of each species are tracked, and a deterministic one as a system of nonlinear ordinary differential equations (ODEs), where each equation tracks the time evolution of the concentration of each species. Notably, and particularly relevant for the theoretical developments in this paper, under mild conditions the deterministic equations correspond to a limit regime of a family of CTMCs (e.g., [3]).
In this setting, the control inputs may be represented by the parameter values of designated reaction rates [4], such that the overall controller design may be studied as an optimal control problem. Treating certain rates as inputs can also be used for the complementary goal of studying the open-loop behavior of the system when some parameters are unknown/uncertain, by estimating reachable sets [5]; this is a pressing problem in systems biology, where rate parameters are often not directly accessible.
Controlling the biological system by studying its ODEs in place of the CTMC is appealing because the ODE system size has, in general, exponentially fewer equations. However, the control problem is computationally prohibitive in general due to the fact that it is nonlinear [6, 7]. One approach to tackling this problem is to devise an _optimality-preserving_ reduction of a control system, where the hope is to solve a reduced optimal control problem instead of the original one. While for linear systems this problem is well-understood [6], it remains challenging for nonlinear ones.
In this paper we consider CRNs with the well-known mass-action semantics (e.g., [8, 9]), leading to ODE systems with polynomial right-hand sides. Here, reactions are characterized by rate parameters which can be used as inputs, taken from bounded domains. The optimal control problem consists in finding the values of those parameters such that a given cost function is minimized. We present an optimality-preserving reduction method based on a partition of the set of species, thus corresponding to a partition of the set of ODE variables. This follows a long tradition in the development of _lumping_ techniques for (bio-)chemical systems (e.g., [10, 11]), most of which are concerned with preserving the dynamics of the system and not of the solution of the control problem as done here.
Our reduction is exact in the sense that one can define a reduced optimal control problem whose solution can be exactly related to that of the original problem. Based on this, we develop an algorithm that finds the coarsest partition, i.e., the maximal lumping, that satisfies this property. The algorithm is based on previous recent results for lumping of uncertain Markov chains (essentially seen as linear control systems [12]). Similarly to that, the number of required computational steps is at most polynomial in the number of species and reactions of the CRN. However, the technical machinery required here is profoundly different and, importantly, identifies the polynomial ODE system of a mass-action CRN as the deterministic limit process of a family of CTMCs. Specifically, in the derivation of the main result, visualized in Fig. 1, we:
* make use of fluid limit results [3, 13] and associate to each CCRN a family of continuous-time Markov chains (CTMCs) which, roughly speaking, converge in probability to the control system of the CCRN;
* show that the original CTMC family can be replaced by the CTMC family of a lumped CCRN while preserving optimality;
* show that the control system of the original and the lumped CCRN have common optimal values;
* show how an optimal control of the original CCRN can be computed from an optimal control of the lumped CCRN.
By doing so, we circumvent the problem of having to relate nonlinear control systems directly.
We implement the aforementioned lumping algorithm in the software tool ERODE [14] and apply the theory to two families of case studies. In the first class, we study epidemiological models over weighted networks [15], where a) each node is subject to vaccination control or b) the network weights are subject to uncertainty. In the second class, we study protein-interaction networks where proteins can bind to the binding sites of a substrate. Here, we show that lumping algorithms developed for autonomous ODE systems (e.g. [16, 17, 18]) carry over to the case where kinetic parameters are assumed to belong to common intervals, rather than have a common value. Both classes show how one can reduce nonlinear optimal control [5, 19, 20] and verification problems [7, 21, 22] with thousands of state variables on common hardware.
_Related work._ While results on exact optimality-preserving lumping techniques for linear control systems have been explored (see [6, 12] and references therein), nonlinear counterparts are scarce. Bisimulation/abstraction [23, 24, 25] is closely related but complementary to CCRN species equivalence. Specifically, for a given observation, the largest bisimulation gives rise to a lumped dynamical system which coincides with the original one up to a previously chosen observation map. Instead, CCRN species equivalence seeks to find, from a family of linear observation maps, the one that gives rise to the largest bisimulation. Since only observation maps expressible by equivalence relations are considered, the coarsest CCRN species equivalence can be computed in polynomial time. Apart from bisimulation/abstraction, we mention decoupling [26], which yields substantial speed-ups but may impose restrictive symmetry constraints. While this can be addressed by decoupling approaches [27], the corresponding lumping is approximative. It is worth noting that our approach is reminiscent of Koopman operator theory, which expresses a nonlinear system via an infinite-dimensional linear one [28]. Fluid limits are, however, complementary to Koopman operator theory. This is because they rely upon probabilistic arguments, and their linear (Markov chain) approximations hold true for arbitrary initial conditions, rather than specific ones [28].
_Paper outline._ After reviewing CRNs, Section II introduces controlled CRNs (CCRNs). Section III instead reviews CRN species equivalence from [29] and extends it to CCRNs. Building upon Section III, Section IV establishes that CCRN species equivalence allows for optimality-preserving lumping of fluid models, while Section V presents applications in nonlinear system verification and control. The paper concludes in Section VI, while Section VII contains a proof that is postponed for the benefit of presentation.
## II Controlled CRNs
A mass-action CRN is \((\mathcal{S},\mathcal{R}_{\alpha})\) where \(\mathcal{S}\) is a set of species, \(\mathcal{R}\) is a set of reactions and \(\alpha=(\alpha_{i_{r}})_{r\in\mathcal{R}}\) is a set of kinetic parameters, with \(\alpha_{i_{r}}\geq 0\). Each reaction \(r=\rho\xrightarrow{\alpha_{i_{r}}}\pi\) comprises multisets \(\rho\) and \(\pi\) of species, denoting the reactants and products, respectively. Mass-action CRNs are traditionally given both a stochastic and a deterministic interpretation as a Markov jump process and a system of polynomial differential equations.
In the stochastic interpretation [3], a state of the underlying Markov chain is a species multiset \(\sigma\in\mathbb{N}_{0}^{\mathcal{S}}\) giving the number of molecules \(\sigma(A)\) for each species \(A\in\mathcal{S}\). The forward equations are given by the initial value problem
\[\partial_{t}p_{\sigma}=\sum_{\theta}q(\theta,\sigma)p_{\theta}, \tag{1}\]
Fig. 1: Visualization of the main result. First, we approximate the control systems of the original and the lumped CCRN by means of suitable CTMC families (1). Then, we show that the lumped CTMC family admits the same optimal value as the original one (2). Combining (1) and (2), we conclude that the lumping of the control system is optimality preserving (3).
Fig. 2: A table of frequently used symbols.
where \(p(0)\) is the initial probability measure, whereas the transition rate from state \(\sigma\) to state \(\theta\) is
\[q(\sigma,\theta)=q_{\alpha}(\sigma,\theta):=\sum_{\begin{subarray}{c}r=\rho\xrightarrow{\alpha_{i_{r}}}\pi\in\mathcal{R}_{\alpha}\\ \theta=\sigma-\rho+\pi\end{subarray}}\alpha_{i_{r}}\cdot\binom{\sigma}{\rho} \tag{2}\]
We denote by \(q=(q(\sigma,\theta))_{\sigma,\theta}\) the _transition rate matrix_, where \(q(\sigma,\sigma)=-\sum_{\theta\neq\sigma}q(\sigma,\theta)\). The dynamics can be described as follows: when in state \(\sigma\), every reaction determines a possible jump that consumes molecules according to the multiplicities of the reactants and yields new molecules according to the products; the reaction fires proportionally (via the kinetic rate \(\alpha_{i}\)) to the total number of possible encounters between single molecules of the reacting species. With this in place, the CTMC described by (1) is denoted by \((X^{q}(t))_{t\geq 0}\).
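For illustration, the transition structure of Eq. (2) can be enumerated directly from the reaction list; the following Python sketch (the data layout and helper names are ours) computes all jumps enabled in a state.

```python
from math import comb, prod
from collections import Counter

def propensity(state, reactants, rate):
    """alpha * binom(sigma, rho): the rate times the number of distinct
    combinations of reactant molecules available in the state."""
    return rate * prod(comb(state[sp], n) for sp, n in reactants.items())

def transitions(state, reactions):
    """All CTMC jumps enabled in `state` with their rates q(sigma, theta), Eq. (2).

    Each reaction is a triple (reactants, products, rate); the target state
    is theta = sigma - reactants + products."""
    out = Counter()
    for reactants, products, rate in reactions:
        a = propensity(state, reactants, rate)
        if a > 0:
            theta = Counter(state)
            theta.subtract(reactants)
            theta.update(products)
            out[frozenset((+theta).items())] += a  # parallel reactions add up
    return out

# Binding of B to the first site of the running example of Section III below:
sigma = Counter({"A00": 1, "B": 2})
rxns = [({"A00": 1, "B": 1}, {"A10": 1}, 0.5)]     # A00 + B -> A10, alpha = 0.5
print(transitions(sigma, rxns))                    # rate 0.5 * C(1,1) * C(2,1) = 1.0
```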
In the deterministic interpretation [30], instead, the model is described by the system of polynomial differential equations \(\partial_{t}v=f(v,\alpha)\), where the vector field \(f:\mathbb{R}_{\geq 0}^{\mathcal{S}}\times\mathbb{R}_{\geq 0}^{|\mathcal{R}|}\rightarrow\mathbb{R}^{\mathcal{S}}\) is given, for any species \(A\in\mathcal{S}\), by
\[f_{A}(v,\alpha):=\sum_{r=\rho\xrightarrow{\alpha_{i_{r}}}\pi\in\mathcal{R}_{\alpha}}\alpha_{i_{r}}(\pi(A)-\rho(A))\prod_{B\in\mathcal{S}}\frac{v_{B}^{\rho(B)}}{\rho(B)!}, \tag{3}\]
with \(\rho(B)!\) denoting the factorial of \(\rho(B)\). Under certain assumptions, it can be shown that the stochastic model converges in probability to the deterministic model, as the molecule counts tend to infinity. These are commonly known as fluid limit results [31, 32], as discussed in Section IV.
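Analogously, Eq. (3) yields a polynomial vector field that any standard ODE solver can integrate; below is a minimal sketch using scipy.integrate.solve_ivp (the dictionary-based reaction encoding is our assumption).

```python
from math import factorial
import numpy as np
from scipy.integrate import solve_ivp

def mass_action_field(species, reactions):
    """Vector field f(., alpha) of Eq. (3) over a concentration vector v."""
    idx = {sp: i for i, sp in enumerate(species)}
    def f(t, v):
        dv = np.zeros(len(species))
        for reactants, products, alpha in reactions:
            flux = alpha * np.prod([v[idx[sp]] ** n / factorial(n)
                                    for sp, n in reactants.items()])
            for sp, n in reactants.items():
                dv[idx[sp]] -= n * flux      # consumed according to rho(A)
            for sp, n in products.items():
                dv[idx[sp]] += n * flux      # produced according to pi(A)
        return dv
    return f

species = ["A00", "A10", "B"]
rxns = [({"A00": 1, "B": 1}, {"A10": 1}, 0.5),    # reversible binding of B to A
        ({"A10": 1}, {"A00": 1, "B": 1}, 0.1)]
sol = solve_ivp(mass_action_field(species, rxns), (0.0, 10.0), [1.0, 0.0, 2.0])
```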
We now introduce the notion of _controlled CRN_ (CCRN), for which we likewise give both a stochastic and a deterministic control system. In both cases we consider two extremal CRNs \((\mathcal{S},\mathcal{R}_{\underline{\alpha}})\) and \((\mathcal{S},\mathcal{R}_{\overline{\alpha}})\), with \(\underline{\alpha}\leq\overline{\alpha}\), which constrain the values that the decision variables (i.e., the control inputs) may attain.
* The stochastic control system is given by (1), where each \(q(\sigma,\theta)\) becomes a measurable control input bounded by the corresponding values in the extremal CRNs, that is \[q(\sigma,\theta)=q_{\underline{\alpha},\overline{\alpha}}(\sigma,\theta): \mathbb{R}_{\geq 0}\rightarrow[q_{\underline{\alpha}}(\sigma,\theta);q_{ \overline{\alpha}}(\sigma,\theta)]\] (4) The resulting family of CTMCs is denoted by \(\left(\mathbb{N}_{0}^{\mathcal{S}},[q_{\underline{\alpha}};q_{\overline{ \alpha}}]\right)\) and is called the _uncertain CTMC_ (UCTMC) of a CCRN.1 Footnote 1: We use here the name from [33], even though the name _controlled CTMC_ would be appropriate too.
* Likewise, in the deterministic control system, each kinetic parameter in (3) becomes a control input bounded by the corresponding values in the extremal CRNs, that is, a measurable \(\alpha_{i_{r}}:\mathbb{R}_{\geq 0}\rightarrow[\underline{\alpha}_{i_{r}};\overline{\alpha}_{i_{r}}]\). Moreover, for any bounded set of initial conditions \(I\subseteq\mathbb{R}_{\geq 0}^{\mathcal{S}}\) and time \(t\geq 0\), we define the set of states reachable from \(I\) at time \(t\) as \[\mathfrak{R}(t)=\{v(t)\mid\partial_{t}v=f(v,\alpha),v(0)\in I,\alpha\in[\underline{\alpha};\overline{\alpha}]\}\] The initial set \(I\) allows one to account for uncertainty in the initial condition and encapsulates the singleton set as a special case. For a given \(\alpha:\mathbb{R}_{\geq 0}\rightarrow[\underline{\alpha};\overline{\alpha}]\), we shall write \(v^{\alpha}\) for the solution of \(\partial_{t}v=f(v,\alpha)\), where \(v(0)\in I\) is assumed to be given.
We shall adhere to the following notation.
**Remark 1**.: _Since \(q\) is \(q_{\underline{\alpha},\overline{\alpha}}\) from (4) rather than \(q_{\alpha}\) from (2) in all but few cases, \(q\) shall refer to \(q_{\underline{\alpha},\overline{\alpha}}\) unless stated otherwise. Also, we shall write \(\alpha_{i}\) rather than \(\alpha_{i_{r}}\) to increase readability._
We end the section by pointing out the following.
**Remark 2**.: _In general, ensuring that the forward equation (1) is regular in the sense that it admits a unique solution for every initial probability distribution \(p(0)\) is nontrivial because the state space \(\mathbb{N}_{0}^{\mathcal{S}}\) is infinite. A common way to ensure regularity is to prove that the CTMC is non-explosive by means of stochastic Lyapunov conditions [34, 35]._
We call a CCRN/UCTMC regular if it induces regular CTMCs only. While some of our results assume regularity, the optimality-preserving lumping from Section IV does not.
## III Species Equivalence of Controlled CRNs
We shall use the following CCRN as a running example.
**Example 1**.: _Consider the CCRN \((\mathcal{S},\mathcal{R}_{[\underline{\alpha};\overline{\alpha}]})\) with species \(\mathcal{S}=\{B,A_{00},A_{01},A_{10},A_{11}\}\) and reactions_
\[A_{00}+B\xrightarrow{[\underline{\alpha}_{1};\overline{\alpha}_{1}]}A_{10},\qquad A_{10}\xrightarrow{[\underline{\alpha}_{2};\overline{\alpha}_{2}]}A_{00}+B,\] \[A_{00}+B\xrightarrow{[\underline{\alpha}_{3};\overline{\alpha}_{3}]}A_{01},\qquad A_{01}\xrightarrow{[\underline{\alpha}_{4};\overline{\alpha}_{4}]}A_{00}+B,\] \[A_{10}+B\xrightarrow{[\underline{\alpha}_{5};\overline{\alpha}_{5}]}A_{11},\qquad A_{11}\xrightarrow{[\underline{\alpha}_{6};\overline{\alpha}_{6}]}A_{10}+B,\] \[A_{01}+B\xrightarrow{[\underline{\alpha}_{7};\overline{\alpha}_{7}]}A_{11},\qquad A_{11}\xrightarrow{[\underline{\alpha}_{8};\overline{\alpha}_{8}]}A_{01}+B.\]
_The reactions model reversible binding of species \(B\) to a substrate \(A\) with two binding sites. Subscripts \(i,j\) in chemical species \(A_{ij}\) denote the availability of either binding site in the substrate \(A\), while the value on each arrow indicates the kinetic rate parameter. For state \(\sigma=A_{01}+A_{10}+B\), these yield_
\[q_{\sigma,A_{11}+A_{10}}(\cdot)\in[\underline{\alpha}_{7}; \overline{\alpha}_{7}], q_{\sigma,A_{00}+A_{10}+2B}(\cdot)\in[\underline{\alpha}_{4}; \overline{\alpha}_{4}],\] \[q_{\sigma,A_{11}+A_{01}}(\cdot)\in[\underline{\alpha}_{5}; \overline{\alpha}_{5}], q_{\sigma,A_{00}+A_{01}+2B}(\cdot)\in[\underline{\alpha}_{2}; \overline{\alpha}_{2}].\]
_In the following, we make the common assumption [17] that the uncertainty intervals do not depend on the binding site, that is, \([\underline{\alpha}_{i};\overline{\alpha}_{i}]=[\underline{\alpha}_{i+2};\overline{\alpha}_{i+2}]\) for \(i\in\{1,2,5,6\}\)._
### Lumping of CCRNs
Ordinary lumpability is a partition \(\mathcal{H}\) of the state space such that any two states \(i\), \(j\) in each partition block \(H\in\mathcal{H}\) have equal aggregate rates toward states in any block \(H^{\prime}\in\mathcal{H}\). That is, writing \(\mathfrak{q}\) for the transition rates of a generic CTMC that is not necessarily related to (1), it must hold that \(\sum_{k\in H^{\prime}}\mathfrak{q}_{i,k}(\cdot)=\sum_{k\in H^{\prime}} \mathfrak{q}_{j,k}(\cdot)\). Given an ordinarily lumpable partition, a lumped CTMC can be constructed by associating a macro-state to each block. Transitions between macro-states are labeled with the overall rate from a state in the source block toward all states in the target block.
Checking the conditions for ordinary lumpability requires the full enumeration of the CTMC state space, which grows combinatorially in the multiplicities of the initial state and may even be infinite in the presence of species creation (e.g., \(A\xrightarrow{\alpha}A+A\)). Species equivalence [29] addresses this by detecting
ordinary lumpability at the level of the reaction network. To this end, it identifies an equivalence relation which induces an ordinary lumpable partition over the multisets representing CTMC states. Specifically, one considers a natural lifting of a partition \(\mathcal{H}\) of species to multisets of species, called _multiset lifting_ of \(\mathcal{H}\), denoted by \(\mathcal{H}^{\uparrow}\).
**Definition 1** (Multiset Lifting).: _Let \((\mathcal{S},\mathcal{R}_{[\underline{\alpha};\overline{\alpha}]})\) be a CCRN, let \(\mathcal{H}\) be a partition of \(\mathcal{S}\), and let \(\mathcal{E}\) be the equivalence relation underlying \(\mathcal{H}\), i.e., \(\mathcal{H}=\mathcal{S}/\mathcal{E}\). We define the multiset lifting of \(\mathcal{E}\) on \(\mathbb{N}_{0}^{\mathcal{S}}\), denoted by \(\mathcal{E}^{\uparrow}\subseteq\mathbb{N}_{0}^{\mathcal{S}}\times\mathbb{N}_{0}^{\mathcal{S}}\), as_
\[\left\{(\sigma_{1},\sigma_{2})\in\mathbb{N}_{0}^{\mathcal{S}}\times\mathbb{N}_ {0}^{\mathcal{S}}\ |\ \forall H\in\mathcal{H}.\sum_{A\in H}\sigma_{1}(A)=\sum_{A\in H}\sigma_{2}(A)\right\}\]
_With this, we set \(\mathcal{H}^{\uparrow}=\mathbb{N}_{0}^{\mathcal{S}}/\mathcal{E}^{\uparrow}\)._
Intuitively, the multiset lifting relates multisets that have same cumulative multiplicity from each partition block.
**Example 2**.: _In Example 1, consider \(\mathcal{H}=\{\{B\},\{A_{00}\},\)\(\{A_{01},A_{10}\},\{A_{11}\}\}\) and let \(\mathcal{E}\) be such that \(\mathcal{H}=\mathcal{S}/\mathcal{E}\). Then, \((A_{01},A_{10})\in\mathcal{E}^{\uparrow}\), \((2A_{01},A_{01}+A_{10})\in\mathcal{E}^{\uparrow}\), while \((A_{00},A_{10})\not\in\mathcal{E}^{\uparrow}\) and \((2A_{01},A_{10})\notin\mathcal{E}^{\uparrow}\). That is, two states are equivalent w.r.t. \(\mathcal{E}^{\uparrow}\) when they agree on the number of occupied binding sites. More formally, \((\sigma,\sigma^{\prime})\in\mathcal{E}^{\uparrow}\) whenever \(\sigma(C)=\sigma^{\prime}(C)\) for all \(C\not\in\{A_{01},A_{10}\}\) and \(\sigma(A_{01})+\sigma(A_{10})=\sigma^{\prime}(A_{01})+\sigma^{\prime}(A_{10})\)._
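Operationally, membership in \(\mathcal{E}^{\uparrow}\) reduces to comparing block sums, as the following short sketch of Definition 1 illustrates on the multisets of Example 2 (the dictionary encoding of multisets is our choice).

```python
def block_sums(sigma, partition):
    """Cumulative multiplicity of each block of the species partition H."""
    return tuple(sum(sigma.get(sp, 0) for sp in block) for block in partition)

def lifted_equiv(sigma1, sigma2, partition):
    """(sigma1, sigma2) lie in the multiset lifting iff all block sums agree."""
    return block_sums(sigma1, partition) == block_sums(sigma2, partition)

H = [{"B"}, {"A00"}, {"A01", "A10"}, {"A11"}]
print(lifted_equiv({"A01": 2}, {"A01": 1, "A10": 1}, H))  # True
print(lifted_equiv({"A01": 2}, {"A10": 1}, H))            # False
```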
We first review the notion of CRN species equivalence from [29].
**Definition 2** (CRN Species Equivalence).: _Fix a CRN \((\mathcal{S},\mathcal{R}_{\alpha})\). We call a partition \(\mathcal{H}\) of \(\mathcal{S}\) a CRN species equivalence if, for any two species \(A_{i},A_{j}\) in a block of \(\mathcal{H}\), any reagent \(\rho\in\mathbb{N}_{0}^{\mathcal{S}}\), any block \(H^{\uparrow}\in\mathcal{H}^{\uparrow}\), we have_
\[\sum_{\pi\in H^{\uparrow}}\mathbf{rr}_{\alpha}(A_{i}+\rho,\pi)=\sum_{\pi\in H ^{\uparrow}}\mathbf{rr}_{\alpha}(A_{j}+\rho,\pi) \tag{5}\]
_Here, \(\mathbf{rr}_{\alpha}\) is the reaction rate from \(\rho\) to \(\pi\)_
\[\mathbf{rr}_{\alpha}(\rho,\pi)=\begin{cases}\sum\limits_{(\rho \xrightarrow{\alpha_{i}}\pi)\in\mathcal{R}_{\alpha}}\alpha_{i}&,\ \rho\neq\pi\\ -\sum\limits_{\pi^{\prime}\neq\rho}\mathbf{rr}_{\alpha}(\rho,\pi^{\prime})&, \ \rho=\pi\end{cases}\]
_For any \(H^{\uparrow}\subseteq\mathbb{N}_{0}^{\mathcal{S}}\), we set \(\mathbf{rr}_{\alpha}[\rho,H^{\uparrow}]=\sum_{\pi\in H^{\uparrow}}\mathbf{rr }_{\alpha}(\rho,\pi)\)._
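Since only reagents occurring in \(\mathcal{R}\) need to be inspected, condition (5) can be checked directly; the sketch below (helper names are ours, and the diagonal convention of \(\mathbf{rr}\) is glossed over, as it cancels when comparing reagents from a common lifted block) verifies the instance discussed in Example 3 below.

```python
from collections import Counter, defaultdict

def block_sums(sigma, partition):
    return tuple(sum(sigma.get(sp, 0) for sp in B) for B in partition)

def rr_into_blocks(rho, reactions, partition):
    """Cumulative rates rr_alpha[rho, H^] into every lifted block hit by
    a reaction fired from rho (off-diagonal part of Definition 2)."""
    out = defaultdict(float)
    for reactants, products, alpha in reactions:
        if Counter(reactants) == Counter(rho):
            out[block_sums(products, partition)] += alpha
    return dict(out)

def condition5(Ai, Aj, rho, reactions, partition):
    """Eq. (5) for the species pair (Ai, Aj) and partial reagent rho."""
    si, sj = Counter(rho), Counter(rho)
    si[Ai] += 1
    sj[Aj] += 1
    return rr_into_blocks(si, reactions, partition) == \
           rr_into_blocks(sj, reactions, partition)

# With alpha_5 = alpha_7, binding B to A01 or to A10 yields A11 at equal rates:
H = ({"B"}, {"A00"}, {"A01", "A10"}, {"A11"})
rxns = [({"A01": 1, "B": 1}, {"A11": 1}, 2.0),   # alpha_7
        ({"A10": 1, "B": 1}, {"A11": 1}, 2.0)]   # alpha_5
print(condition5("A01", "A10", {"B": 1}, rxns, H))  # True
```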
Any CRN species equivalence induces a lumped CRN given next.
**Definition 3** (Lumped CRN).: _Let \((\mathcal{S},\mathcal{R}_{\alpha})\) be a CRN, \(\mathcal{H}\) a CRN species equivalence and fix a representative \(A_{H}\in H\) for each \(H\in\mathcal{H}\). The lumped CRN is then given by \((\hat{\mathcal{S}},\hat{\mathcal{R}}_{\alpha})\), where the species are \(\hat{\mathcal{S}}=\{A_{H}\mid H\in\mathcal{H}\}\), while the reactions \(\hat{\mathcal{R}}_{\alpha}\) arise via_
1. _discard all reactions_ \(\rho\xrightarrow{\alpha_{i}}\pi\) _where_ \(\rho\) _has a nonrepresentative species;_
2. _replace the species in the products of the remaining reactions by their representatives;_
3. _fuse all reactions that have the same reactants and products by summing their rates._
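These three steps translate directly into code; the following sketch (function names are ours) reproduces, for instance, the fused rate \(\underline{\alpha}_{1}+\underline{\alpha}_{3}\) that appears in Example 4 below.

```python
from collections import Counter

def lump_crn(reactions, rep):
    """Lumped CRN of Definition 3; `rep` maps each species to the chosen
    representative of its partition block."""
    fused = {}
    for reactants, products, alpha in reactions:
        # Step 1: discard reactions whose reactants contain a non-representative.
        if any(rep[sp] != sp for sp in reactants):
            continue
        # Step 2: rename product species to their representatives.
        new_prod = Counter()
        for sp, n in products.items():
            new_prod[rep[sp]] += n
        # Step 3: fuse reactions with identical reactants and products.
        key = (frozenset(reactants.items()), frozenset(new_prod.items()))
        fused[key] = fused.get(key, 0.0) + alpha
    return [(dict(r), dict(p), a) for (r, p), a in fused.items()]

rep = {"B": "B", "A00": "A00", "A01": "A01", "A10": "A01", "A11": "A11"}
rxns = [({"A00": 1, "B": 1}, {"A10": 1}, 1.0),   # alpha_1
        ({"A00": 1, "B": 1}, {"A01": 1}, 1.5)]   # alpha_3
print(lump_crn(rxns, rep))  # one reaction A00 + B -> A01 with rate 2.5
```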
With the foregoing definitions in place, CCRN species equivalence is defined as the CRN species equivalence of the extremal CRNs \((\mathcal{S},\mathcal{R}_{\underline{\alpha}})\) and \((\mathcal{S},\mathcal{R}_{\overline{\alpha}})\).
**Definition 4** (CCRN Species Equivalence).: _Fix a CCRN \((\mathcal{S},\mathcal{R}_{[\underline{\alpha};\overline{\alpha}]})\). We call a partition \(\mathcal{H}\) of \(\mathcal{S}\) a CCRN species equivalence whenever \(\mathcal{H}\) is a CRN species equivalence of \((\mathcal{S},\mathcal{R}_{\underline{\alpha}})\) and \((\mathcal{S},\mathcal{R}_{\overline{\alpha}})\)._
Our example enjoys a CCRN species equivalence.
**Example 3**.: _Continuing Examples 1 and 2, we note that for_
\[\alpha=(\alpha_{1},\ldots,\alpha_{8})\in\{(\underline{\alpha}_{1},\ldots, \underline{\alpha}_{8}),(\overline{\alpha}_{1},\ldots,\overline{\alpha}_{8})\},\]
_it holds that \(\mathbf{rr}_{\alpha}(A_{01},A_{00}+B)=\mathbf{rr}_{\alpha}(A_{10},A_{00}+B)\) and \(\mathbf{rr}_{\alpha}(A_{01}+B,A_{11})=\mathbf{rr}_{\alpha}(A_{10}+B,A_{11})\). Hence, \(\mathcal{H}\) is a CCRN species equivalence._
The lumped CCRN is given by the lumpings of the extremals, as stated next.
**Definition 5** (Lumped CCRN).: _Let \((\mathcal{S},\mathcal{R}_{[\underline{\alpha};\overline{\alpha}]})\) be a CCRN and \(\mathcal{H}\) a CCRN species equivalence. The lumped CCRN \((\mathcal{\hat{S}},\mathcal{\hat{R}}_{[\underline{\alpha};\overline{\alpha}]})\) arises by lumping the extremal CRNs \((\mathcal{S},\mathcal{R}_{\underline{\alpha}})\) and \((\mathcal{S},\mathcal{R}_{\overline{\alpha}})\) as outlined in Definition 3._
We remark that the lumped CCRN does not depend on the choice of the representatives [29]. Next, we provide the lumped CCRN of our example.
**Example 4**.: _Continuing Examples 1-3, the lumped CCRN is given by \(\hat{\mathcal{S}}=\mathcal{S}\setminus\{A_{10}\}\) and \(\hat{\mathcal{R}}_{[\underline{\alpha};\overline{\alpha}]}\) such that_
\[\begin{array}{ll}A_{00}+B\xrightarrow{[\underline{\alpha}_{1}+\underline{\alpha}_{3};\overline{\alpha}_{1}+\overline{\alpha}_{3}]}A_{01},&A_{01}\xrightarrow{[\underline{\alpha}_{4};\overline{\alpha}_{4}]}A_{00}+B,\\ A_{11}\xrightarrow{[\underline{\alpha}_{6}+\underline{\alpha}_{8};\overline{\alpha}_{6}+\overline{\alpha}_{8}]}A_{01}+B,&A_{01}+B\xrightarrow{[\underline{\alpha}_{7};\overline{\alpha}_{7}]}A_{11}\end{array}\]
The CCRN species equivalence can be computed by alternately invoking the CRN lumping algorithm from [29] on the extremal CRNs that define a CCRN, as stated next.
**Theorem 1** (Computation of CCRN Species Equivalence).: _Let \((\mathcal{S},\mathcal{R}_{[\underline{\alpha};\overline{\alpha}]})\) be a CCRN. Then we have the following._
1. \(\mathcal{H}\) _is a CCRN species equivalence iff_ \(\mathcal{H}^{\uparrow}\) _is an ordinary lumpability of the CTMCs_ \((\mathbb{N}_{0}^{\mathcal{S}},q_{\underline{\alpha}})\) _and_ \((\mathbb{N}_{0}^{\mathcal{S}},q_{\overline{\alpha}})\)_._
2. _For any partition_ \(\mathcal{G}\) _of_ \(\mathcal{S}\)_, Algorithm_ 1 _computes the coarsest CCRN species equivalence of_ \((\mathcal{S},\mathcal{R}_{[\underline{\alpha};\overline{\alpha}]})\) _that refines_ \(\mathcal{G}\)_. That is,_ \(\mathcal{H}\) _is such that_ * _for every block_ \(G\in\mathcal{G}\)_, there exist unique blocks_ \(H_{1},\ldots,H_{l}\in\mathcal{H}\) _such that_ \(G=H_{1}\cup\ldots\cup H_{l}\)_; and_ * \(\mathcal{H}\) _is a CCRN species equivalence and has a minimal number of blocks, hence yielding a lumped CCRN of minimal size._ _The number of steps performed by Algorithm_ 1 _is polynomial in_ \(|\mathcal{S}|\) _and_ \(|\mathcal{R}_{\underline{\alpha}}|\)_._
Proof.: We start by noting that \(\mathcal{H}\) is a CCRN species equivalence of \((\mathcal{S},\mathcal{R}_{[\underline{\alpha};\overline{\alpha}]})\) if and only if \(\mathcal{H}\) is a CRN species equivalence [29] of \((\mathcal{S},\mathcal{R}_{\underline{\alpha}})\) and \((\mathcal{S},\mathcal{R}_{\overline{\alpha}})\). With this in mind, we first observe that 1) and 2) follow directly from [29] in the special case \(\underline{\alpha}=\overline{\alpha}\). Let us now consider the general case \(\underline{\alpha}<\overline{\alpha}\). Then, 1) follows from the special case of 1) and the definition of CCRN species equivalence. Likewise, 2) follows by the definition of Algorithm 1 and the special case of 2).
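Algorithm 1 itself is not reproduced here; the following sketch only illustrates the alternation strategy underlying the proof. The argument `refine` is a hypothetical stand-in for the partition-refinement step of the CRN lumping algorithm from [29], assumed to return the coarsest CRN species equivalence of a CRN that refines a given partition.

```python
# Sketch of the alternation behind Theorem 1 (our own illustration, with
# refine(crn, partition) a placeholder for the refinement step of [29]).
# Partitions are frozensets of frozensets, so == below is structural equality.

def ccrn_species_equivalence(crn_lower, crn_upper, partition, refine):
    while True:
        refined = refine(crn_upper, refine(crn_lower, partition))
        if refined == partition:  # fixpoint: an equivalence for both extremals
            return refined
        partition = refined
```

Since each call of `refine` can only split blocks and a partition of \(\mathcal{S}\) admits at most \(|\mathcal{S}|-1\) strict refinements, the loop terminates after polynomially many iterations.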
After addressing the computation of CCRN species equivalence, we observe next that the block sums of original CCRN states are equivalent in distribution to the states of the lumped CCRN. Following standard notation, equivalence in distribution is denoted by \(\stackrel{d}{=}\).
**Theorem 2** (CCRN Species Equivalence).: _Let \(\mathcal{H}\) be a CCRN species equivalence of \((\mathcal{S},\mathcal{R}_{[\underline{\alpha};\overline{\alpha}]})\) and let \((\hat{\mathcal{S}},\hat{\mathcal{R}}_{[\underline{\hat{\alpha}};\overline{\hat{\alpha}}]})\) be the respective lumped CCRN. Moreover, let \(X^{q}\) and \(\hat{X}^{\hat{q}}\) denote family members of the respective UCTMCs. Then, if both CCRNs are regular, we have:_
1. _For any_ \(q(\cdot)\in[q_{\underline{\alpha}};q_{\overline{\alpha}}]\)_, there exists a_ \(\hat{q}(\cdot)\in[\hat{q}_{\underline{\hat{\alpha}}};\hat{q}_{\overline{\hat{\alpha}}}]\) _such that_ \[\sum_{A\in H}X^{q}_{A}(t)\stackrel{d}{=}\hat{X}^{\hat{q}}_{A_{H}}(t),\quad\forall H\in\mathcal{H},\ \forall t>0,\] (6) _provided the statement holds for_ \(t=0\)_._
2. _Conversely, for any_ \(\hat{q}(\cdot)\in[\hat{q}_{\underline{\hat{\alpha}}};\hat{q}_{\overline{\hat{\alpha}}}]\)_, there is a_ \(q(\cdot)\in[q_{\underline{\alpha}};q_{\overline{\alpha}}]\) _with (_6_)._
Proof.: Let us first assume that \(\underline{\alpha}=\overline{\alpha}\). Then, for any block \(H^{\uparrow}\in\mathcal{H}^{\uparrow}\) and its representative \(\sigma_{H^{\uparrow}}\in H^{\uparrow}\cap\mathbb{N}_{0}^{\hat{\mathcal{S}}}\), statement 1) of Theorem 1 and the regularity ensure [29] that
\[\sum_{\sigma\in H^{\uparrow}}p_{\sigma}(t)=\hat{p}_{\sigma_{H^{\uparrow}}}(t), \quad\forall\,t>0.\]
Together with
\[\left\{\sigma\in\mathbb{N}_{0}^{\mathcal{S}}\mid\forall H\in\mathcal{H}.\sum_ {A\in H}\sigma_{A}=(\sigma_{H^{\uparrow}})_{A_{H}}\right\}=H^{\uparrow},\]
this implies \(\big{(}\sum_{A\in H}X_{A}(t)\big{)}_{H\in\mathcal{H}}\stackrel{d}{=}\big{(}\hat{X}_{A_{H}}(t)\big{)}_{H\in\mathcal{H}}\), yielding the claim. The general case, instead, follows from the proof of Theorem 6 from [33]. Specifically, the proof carries over verbatim to our setting of countable state spaces because \(\mathcal{H}^{\uparrow}\) has finite blocks and the assumption of regularity ensures that the forward Kolmogorov equations enjoy unique solutions.
Provided the \(n\)-th order moments exist, Theorem 2 implies in particular \(\mathbb{E}\big{[}(\sum_{A\in H}X_{A}(t))^{n}\big{]}=\mathbb{E}\big{[}\hat{X}^ {n}_{A_{H}}(t)\big{]}\).2 The moments, in turn, can be estimated by means of stochastic simulation [36].
Footnote 2: Similarly to regularity, the existence of moments can be addressed by means of Lyapunov conditions [34, 35].
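To make the simulation-based estimate concrete, the sketch below estimates \(\mathbb{E}\big{[}(\sum_{A\in H}X_{A}(T))^{n}\big{]}\) with Gillespie's stochastic simulation algorithm for one fixed rate choice \(\alpha\in[\underline{\alpha};\overline{\alpha}]\); the encoding of species and reactions is our assumption, not the paper's.

```python
# Estimating E[(sum_{A in H} X_A(T))^n] via Gillespie's SSA for a fixed rate
# choice alpha. Species are indices 0..d-1; a reaction is (rho, pi, rate)
# with rho and pi integer vectors of reactant and product multiplicities.
import numpy as np

def propensity(rate, rho, x):
    a = rate
    for A, r in enumerate(rho):
        for k in range(r):                  # rate * prod_A binom(x_A, rho_A)
            a = a * max(x[A] - k, 0) / (k + 1)
    return a

def ssa_endpoint(reactions, x0, T, rng):
    t, x = 0.0, np.array(x0, dtype=float)
    while True:
        props = np.array([propensity(c, rho, x) for rho, pi, c in reactions])
        total = props.sum()
        if total <= 0.0:
            return x                        # absorbing state reached
        t += rng.exponential(1.0 / total)
        if t > T:
            return x
        j = rng.choice(len(reactions), p=props / total)
        x += np.asarray(reactions[j][1]) - np.asarray(reactions[j][0])

def block_moment(reactions, x0, T, block, n, runs=10_000, seed=0):
    rng = np.random.default_rng(seed)
    sums = [ssa_endpoint(reactions, x0, T, rng)[block].sum() for _ in range(runs)]
    return float(np.mean(np.power(sums, n)))  # Monte Carlo moment estimate
```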
**Example 5**.: _It can be shown that Example 1 is regular. With this, Theorem 2 essentially ensures that_
* _for any_ \(q\)_, there exists a_ \(\hat{q}\) _such that_ \(X^{q}_{A_{01}}+X^{q}_{A_{10}}\stackrel{d}{=}\hat{X}^{\hat{q}}_{A_{01}}\) _and_ \(X^{q}_{S}\stackrel{d}{=}\hat{X}^{\hat{q}}_{S}\) _with_ \(S\notin\{A_{01},A_{10}\}\)_._
* _for any_ \(\hat{q}\)_, there exists a_ \(q\) _such that_ \(X^{q}_{A_{01}}+X^{q}_{A_{10}}\stackrel{d}{=}\hat{X}^{\hat{q}}_{A_{01}}\) _and_ \(X^{q}_{S}\stackrel{d}{=}\hat{X}^{\hat{q}}_{S}\) _with_ \(S\notin\{A_{01},A_{10}\}\)_._
_That is, if one is only interested in species \(S\notin\{A_{01},A_{10}\}\) or the cumulative behavior of species \(X^{q}_{A_{01}}+X^{q}_{A_{10}}\), any behavior of the original CCRN can be matched by the lumped CCRN and vice versa._
Example 5 demonstrates that CCRN species equivalence allows one to lump the original CCRN to a smaller lumped CCRN at the expense of preserving only sums of original species. For instance, if the modeler is interested in \(X_{A_{00}}\) and \(X_{B}\), partition \(\mathcal{H}\) from Example 2 can be used because \(\{A_{00}\},\{B\}\in\mathcal{H}\). Instead, if a modeler is interested in \(X_{A_{10}}\), it is not possible to use \(\mathcal{H}\) because the lumped CCRN would only capture the cumulative behavior \(X_{A_{10}}+X_{A_{01}}\). A natural question is then whether there is a CCRN species equivalence \(\mathcal{H}^{\prime}\) which contains \(\{A_{10}\}\). This can be readily checked by applying the algorithm from Theorem 1 to \(\mathcal{G}^{\prime}=\{\{A_{10}\},\mathcal{S}\setminus\{A_{10}\}\}\). This is because any CCRN species equivalence \(\mathcal{H}^{\prime}\) refining \(\mathcal{G}^{\prime}\) has to contain the block \(\{A_{10}\}\). An application of the algorithm then returns the trivial CCRN species equivalence \(\mathcal{H}^{\prime}=\{\{S\}\mid S\in\mathcal{S}\}\). We call \(\mathcal{H}^{\prime}\) trivial because it does not lump the original CCRN.
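In code, the check just described is one call to the routine sketched after Theorem 1; `species`, `crn_lower`, `crn_upper` and `refine` are assumed to be available as before.

```python
# Does some CCRN species equivalence preserve A_10 on its own? Refine starting
# from G' = {{A_10}, S \ {A_10}}; if the result consists of singletons only,
# every equivalence containing the block {A_10} is trivial.
G_prime = frozenset({frozenset({"A10"}),
                     frozenset(set(species) - {"A10"})})
H_prime = ccrn_species_equivalence(crn_lower, crn_upper, G_prime, refine)
is_trivial = all(len(block) == 1 for block in H_prime)
```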
## IV Optimality-preserving Lumping
We start by providing the deterministic control system of our example.
**Example 6**.: _The CCRN \((\mathcal{S},\mathcal{R}_{[\underline{\alpha};\overline{\alpha}]})\) from Example 1 gives rise to the ODE system_
\[\partial_{t}v_{A_{00}} =-(\alpha_{1}+\alpha_{3})v_{A_{00}}v_{B}+\alpha_{2}v_{A_{10}}+ \alpha_{4}v_{A_{01}}\] \[\partial_{t}v_{A_{10}} =\alpha_{1}v_{A_{00}}v_{B}-\alpha_{2}v_{A_{10}}-\alpha_{5}v_{A_{10} }v_{B}+\alpha_{6}v_{A_{11}} \tag{7}\] \[\partial_{t}v_{A_{01}} =\alpha_{3}v_{A_{00}}v_{B}-\alpha_{4}v_{A_{01}}-\alpha_{7}v_{A_{01} }v_{B}+\alpha_{8}v_{A_{11}}\] \[\partial_{t}v_{A_{11}} =\alpha_{5}v_{A_{10}}v_{B}+\alpha_{7}v_{A_{01}}v_{B}-(\alpha_{6}+ \alpha_{8})v_{A_{11}}\] \[\partial_{t}v_{B} =-(\alpha_{1}+\alpha_{3})v_{A_{00}}v_{B}+\alpha_{2}v_{A_{10}}+ \alpha_{4}v_{A_{01}}\] \[\qquad-\alpha_{5}v_{A_{10}}v_{B}-\alpha_{7}v_{A_{01}}v_{B}+(\alpha_{6}+ \alpha_{8})v_{A_{11}}\]
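As a numerical companion, the right-hand side above can be integrated for a fixed constant control with any standard ODE solver. The snippet below is a plain transcription of (7); the rates and initial condition are illustrative assumptions of ours.

```python
# Plain transcription of the ODE system (7) for a constant control
# alpha = (a1, ..., a8); coordinates are (A00, A10, A01, A11, B).
from scipy.integrate import solve_ivp

def f(t, v, a):
    A00, A10, A01, A11, B = v
    a1, a2, a3, a4, a5, a6, a7, a8 = a
    return [
        -(a1 + a3) * A00 * B + a2 * A10 + a4 * A01,
        a1 * A00 * B - a2 * A10 - a5 * A10 * B + a6 * A11,
        a3 * A00 * B - a4 * A01 - a7 * A01 * B + a8 * A11,
        a5 * A10 * B + a7 * A01 * B - (a6 + a8) * A11,
        -(a1 + a3) * A00 * B + a2 * A10 + a4 * A01
        - a5 * A10 * B - a7 * A01 * B + (a6 + a8) * A11,
    ]

# Illustrative run with arbitrary constant rates.
sol = solve_ivp(f, (0.0, 10.0), [1.0, 0.0, 0.0, 0.0, 1.0], args=([1.0] * 8,))
```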
The deterministic control system (3) is also known as the fluid model of a CRN. This is because it can be approximated by CTMCs that have as states, loosely speaking, fractions \(\frac{1}{N}\mathbb{N}_{0}^{\mathcal{S}}\) of integer population vectors, as formalized next.
**Definition 6** (CTMC Approximation).: _Fix a CRN \((\mathcal{S},\mathcal{R}_{\alpha})\) and a constant \(c>0\). The \(N\)-th CTMC approximation of \((\mathcal{S},\mathcal{R}_{\alpha})\) is the CTMC \((\frac{1}{N}\mathbb{N}_{0}^{\mathcal{S}},q_{\alpha}^{N})\) with \(q_{\alpha}^{N}(\sigma,\theta)=g(\sigma)\cdot q_{\alpha^{N}}(N\sigma,N\theta)\), where \(g\) is a continuous cutoff function with \(g\leq 1\) and \(g(\sigma)=0\) whenever \(|\sigma|\geq 2c\),_
_where each \(\rho\xrightarrow[]{\alpha_{i}}\pi\in\mathcal{R}_{\alpha}\) induces a \(\rho\xrightarrow[]{\alpha_{i}^{N}}\pi\in\mathcal{R}_{\alpha^{N}}\) with \(\alpha_{i}^{N}=\alpha_{i}/N^{|\rho|-1}\) for \(|\rho|=\sum_{A\in\mathcal{S}}\rho(A)\).3_
Footnote 3: The CTMC approximation could be given without a cutoff function \(g\). It will be mainly needed for the UCTMC counterpart from Definition 7.
Generalizing the foregoing notion, we introduce a UCTMC approximation of a CCRN.
**Definition 7** (UCTMC Approximation).: _Fix a CCRN \((\mathcal{S},\mathcal{R}_{[\underline{\alpha};\overline{\alpha}]})\) and a constant \(c>0\). The \(N\)-th UCTMC approximation of \((\mathcal{S},\mathcal{R}_{[\underline{\alpha};\overline{\alpha}]})\) is \(X_{N}=(\frac{1}{N}\mathbb{N}_{0}^{\mathcal{S}},q_{[\underline{\alpha}; \overline{\alpha}]}^{N})\), where \(q_{[\underline{\alpha};\overline{\alpha}]}^{N}=[q_{\underline{\alpha}}^{N};q _{\overline{\alpha}}^{N}]\), with \(q_{\underline{\alpha}}^{N}\) and \(q_{\overline{\alpha}}^{N}\) as in Definition 6._
We next prove that the UCTMCs \(X_{N}\) converge to the fluid CCRN model of \((\mathcal{S},\mathcal{R}_{[\underline{\alpha};\overline{\alpha}]})\). To this end, we first show that for any \(\alpha(\cdot)\in[\underline{\alpha};\overline{\alpha}]\) there exists a \(q(\cdot)\in q_{[\underline{\alpha};\overline{\alpha}]}^{N}\) such that the ODE solution \(v^{\alpha}\) is sufficiently close to the CTMC simulation \(X_{N}^{q}\), provided that \(N\) is large enough; here, \(X_{N}^{q}\) denotes the CTMC induced by \(q\). This follows from standard fluid limit results [31, §11.1-§11.2].
**Proposition 1**.: _Fix a CCRN \((\mathcal{S},\mathcal{R}_{[\underline{\alpha};\overline{\alpha}]})\), a time \(T>0\) and assume that \(\mathfrak{R}(t)\subseteq B(c)\) for all \(t\in[0;T]\), where \(B(c)\) is the \(L_{1}\) ball with radius \(c\) centered at the origin. Assume further that the UCTMC approximations \((X_{N}(0))_{N\geq 1}\) satisfy \(X_{N}(0)=\lfloor Nv(0)\rfloor/N\) for some \(v(0)\in I\) in the set of initial conditions \(I\) of the CCRN. Then, for any \(\varepsilon,\delta>0\), there exists an \(N\geq 1\) such that for any \(\alpha(\cdot)\in[\underline{\alpha};\overline{\alpha}]\), there exists a \(q(\cdot)\in q_{[\underline{\alpha};\overline{\alpha}]}^{N}\) such that_
\[\mathbb{P}\big{\{}\sup_{0\leq t\leq T}|X_{N}^{q}(t)-v^{\alpha}(t)|>\varepsilon \big{\}}<\delta.\]
Proof.: We use [31, §11.2, Theorem 2.1]. Specifically, we first note that the discussion in [31, §11.1-§11.2] readily extends to time-varying transition rates. Moreover, it is possible to use uniform estimates in the proof of Theorem 2.1 which do not depend on the choice of \(\alpha(\cdot)\in[\underline{\alpha};\overline{\alpha}]\). Armed with this insight, we pick any \(\alpha(\cdot)\in[\underline{\alpha};\overline{\alpha}]\) and consider the CTMCs \(Z_{N}=(\frac{1}{N}\mathbb{N}_{0}^{\mathcal{S}},\mathfrak{q}_{\alpha(\cdot)}^{N})\), where
\[\mathfrak{q}_{\alpha(\cdot)}^{N}(\sigma,\theta)=\sum_{\begin{subarray}{c}r=( \rho\xrightarrow[]{\alpha_{i}(\cdot)}\pi)\in\mathcal{R}_{\alpha(\cdot)}\\ \theta=\sigma+\frac{1}{N}(\pi-\rho)\end{subarray}}\frac{\alpha_{i}(\cdot)}{ N^{|\rho|-1}}\cdot\binom{N\sigma}{\rho} \tag{8}\]
The result then follows by applying Theorem 2.1 to \(\min\{T,\tau\}\) rather than \(T\), where \(\tau\) is the first exit time of \(\{\sigma\in\frac{1}{N}\mathbb{N}_{0}^{\mathcal{S}}\mid|\sigma|\leq c\}\), see also [30, Corollary 2.8]. Crucially, due to the uniform estimates, one can pick an \(N\) such that the statement holds for all \(\alpha(\cdot)\in[\underline{\alpha};\overline{\alpha}]\).
Our second approximation result ensures, conversely, that for any \(q(\cdot)\in q_{[\underline{\alpha};\overline{\alpha}]}^{N}\) there exists an \(\alpha(\cdot)\in[\underline{\alpha};\overline{\alpha}]\) such that the ODE solution \(v^{\alpha}\) is sufficiently close to the CTMC simulation \(X_{N}^{q}\), provided that \(N\) is large enough.
**Proposition 2**.: _Under the same assumptions as Proposition 1 and for any \(\varepsilon,\delta>0\), there exists an \(N\geq 1\) such that for any \(q(\cdot)\in q_{[\underline{\alpha};\overline{\alpha}]}^{N}\), there exists \(\alpha(\cdot)\in[\underline{\alpha};\overline{\alpha}]\) such that_
\[\mathbb{P}\big{\{}\sup_{0\leq t\leq T}|X_{N}^{q}(t)-v^{\alpha}(t)|>\varepsilon \big{\}}<\delta.\]
Proof.: To increase readability, we postpone the lengthy proof to Section VII.
### _Proof of Optimality-Preservation_
Before proving the main result, we establish our last auxiliary result which ensures that the transient probabilities of the \(N\)-th UCTMC approximations of the original and the lumped CCRN coincide on the blocks of \(\mathcal{H}^{\uparrow}\), if \(N\) is large enough and \(\mathcal{H}\) is a CCRN species equivalence. The proof relies on [33].
**Proposition 3**.: _Let \(\mathcal{H}\) be a CCRN species equivalence of \((\mathcal{S},\mathcal{R}_{[\underline{\alpha};\overline{\alpha}]})\) and let \(X_{N}\) and \(\hat{X}_{N}\) denote, respectively, the \(N\)-th UCTMC approximations of the original and the lumped CCRN, see Definitions 5 and 7. Then, we have the following._
* _For any_ \(q(\cdot)\in q_{[\underline{\alpha};\overline{\alpha}]}^{N}\) _there is a_ \(\hat{q}(\cdot)\in\hat{q}_{[\underline{\hat{\alpha}};\overline{\hat{\alpha}}]}^{N}\) _such that_ \[\forall H^{\uparrow}\in\mathcal{H}^{\uparrow}.\sum_{\sigma\in H^{\uparrow}}p_{\frac{1}{N}\sigma}(t)=\hat{p}_{\frac{1}{N}\sigma_{H^{\uparrow}}}(t)\] (9) _holds for all_ \(t>0\)_, provided it holds for_ \(t=0\)_. Here,_ \(p\) _and_ \(\hat{p}\) _are the transient probabilities of_ \(X_{N}^{q}\) _and_ \(\hat{X}_{N}^{\hat{q}}\)_, respectively, while_ \(\sigma_{H^{\uparrow}}\in H^{\uparrow}\cap\mathbb{N}_{0}^{\hat{\mathcal{S}}}\) _is the unique representative of_ \(H^{\uparrow}\)_._
* _Conversely, for any_ \(\hat{q}(\cdot)\in\hat{q}_{[\underline{\hat{\alpha}};\overline{\hat{\alpha}}]}^{N}\) _there is a_ \(q(\cdot)\in q_{[\underline{\alpha};\overline{\alpha}]}^{N}\) _such that (_9_) holds for all_ \(t>0\)_, if it holds for_ \(t=0\)_._
Proof.: We begin by proving that \(\mathcal{H}\) is a CCRN species equivalence of \((\mathcal{S},\mathcal{R}_{[\underline{\alpha}^{N};\overline{\alpha}^{N}]})\), where \(\underline{\alpha}^{N}\) and \(\overline{\alpha}^{N}\) are as in Definition 6. This holds true if \(\mathcal{H}\) is a CRN species equivalence of \((\mathcal{S},\mathcal{R}_{\alpha^{N}})\) with \(\alpha^{N}\in\{\underline{\alpha}^{N},\overline{\alpha}^{N}\}\). To see this, pick any \(\alpha\in\{\underline{\alpha},\overline{\alpha}\}\), \(H\in\mathcal{H}\), \(A_{i},A_{j}\in H\) and \(H^{\uparrow}\in\mathcal{H}^{\uparrow}\). Then, we need to show that
\[\sum_{\pi\in H^{\uparrow}}\mathbf{rr}_{\alpha}^{N}(A_{i}+\rho,\pi)=\sum_{\pi \in H^{\uparrow}}\mathbf{rr}_{\alpha}^{N}(A_{j}+\rho,\pi). \tag{10}\]
Here, \(\mathbf{rr}_{\alpha}^{N}\) is defined according to Definition 4 as
\[\mathbf{rr}_{\alpha}^{N}(A_{k}+\rho,\pi)=\begin{cases}\sum\limits_{(A_{k}+\rho\xrightarrow{\alpha_{i}^{N}}\pi)\in\mathcal{R}_{\alpha^{N}}}\frac{\alpha_{i}}{N^{|A_{k}+\rho|-1}}&,\ A_{k}+\rho\neq\pi,\\ -\sum\limits_{\pi^{\prime}\neq A_{k}+\rho}\mathbf{rr}_{\alpha}^{N}(A_{k}+\rho,\pi^{\prime})&,\ A_{k}+\rho=\pi.\end{cases}\]
Then obviously \(\mathbf{rr}_{\alpha}^{N}(A_{k}+\rho,\pi)=\frac{\mathbf{rr}_{\alpha}(A_{k}+\rho, \pi)}{N^{|A_{k}+\rho|-1}}\). As \(\mathcal{H}\) is a CRN species equivalence of \((\mathcal{S},\mathcal{R}_{\alpha})\) by assumption, it holds that
\[\sum_{\pi\in H^{\uparrow}}\mathbf{rr}_{\alpha}(A_{i}+\rho,\pi)=\sum_{\pi\in H^{\uparrow}}\mathbf{rr}_{\alpha}(A_{j}+\rho,\pi).\] Since \(|A_{i}+\rho|=|A_{j}+\rho|\), dividing both sides by \(N^{|A_{i}+\rho|-1}\) yields (10), i.e., \(\mathcal{H}\) is a CCRN species equivalence of \((\mathcal{S},\mathcal{R}_{[\underline{\alpha}^{N};\overline{\alpha}^{N}]})\). With this, the claim follows as in the proof of Theorem 2, applied to the \(N\)-th UCTMC approximations by means of [33].
**Theorem 3** (Deterministic CCRN Lumping).: _Let us fix a CCRN \((\mathcal{S},\mathcal{R}_{[\underline{\alpha};\overline{\alpha}]})\), a constant \(c>0\), assume that \(\mathcal{H}\) is a CCRN species equivalence and denote the corresponding lumped CCRN by \((\hat{\mathcal{S}},\hat{\mathcal{R}}_{[\underline{\hat{\alpha}};\overline{\hat{\alpha}}]})\). If \(T>0\) is such that \(\mathfrak{R}(t)\subseteq B(c)\) for any \(t\in[0;T]\), then for any initial condition \(v(0)\in I\) and any \(\varepsilon,\delta>0\), the original and lumped deterministic models, \(\partial_{t}v^{\alpha}=f(v^{\alpha},\alpha)\) and \(\partial_{t}\hat{v}^{\hat{\alpha}}=\hat{f}(\hat{v}^{\hat{\alpha}},\hat{\alpha})\), enjoy the following._
1. _For any_ \(\alpha(\cdot)\in[\underline{\alpha};\overline{\alpha}]\)_, there is some_ \(\hat{\alpha}(\cdot)\in[\underline{\hat{\alpha}};\overline{\hat{\alpha}}]\) _such that_ \(\partial_{t}v^{\alpha}=f(v^{\alpha},\alpha)\) _and_ \(\partial_{t}\hat{v}^{\hat{\alpha}}=\hat{f}(\hat{v}^{\hat{\alpha}},\hat{\alpha})\) _satisfy_ \[\mathbb{P}\Big{\{}\max_{H\in\mathcal{H}}\max_{0\leq t\leq T}\big{|}\sum_{A\in H}v_{A}^{\alpha}(t)-\hat{v}_{A_{H}}^{\hat{\alpha}}(t)\big{|}>\varepsilon\Big{\}}<\delta\] _provided that_ \(\sum_{A\in H}v_{A}(0)=\hat{v}_{A_{H}}(0)\) _for all_ \(H\in\mathcal{H}\)_._
2. _For any_ \(\hat{\alpha}(\cdot)\in[\underline{\hat{\alpha}};\overline{\hat{\alpha}}]\)_, there is some_ \(\alpha(\cdot)\in[\underline{\alpha};\overline{\alpha}]\) _such that_ \(\partial_{t}v^{\alpha}=f(v^{\alpha},\alpha)\) _and_ \(\partial_{t}\hat{v}^{\hat{\alpha}}=\hat{f}(\hat{v}^{\hat{\alpha}},\hat{\alpha})\) _satisfy_ \[\mathbb{P}\Big{\{}\max_{H\in\mathcal{H}}\max_{0\leq t\leq T}\big{|}\sum_{A\in H}v_{A}^{\alpha}(t)-\hat{v}_{A_{H}}^{\hat{\alpha}}(t)\big{|}>\varepsilon\Big{\}}<\delta\] _provided that_ \(\sum_{A\in H}v_{A}(0)=\hat{v}_{A_{H}}(0)\) _for all_ \(H\in\mathcal{H}\)_._
Proof.: We first prove 1). To this end, pick some small \(\varepsilon,\delta>0\) and some arbitrary \(\alpha\in[\underline{\alpha};\overline{\alpha}]\). By Proposition 1-2, we can pick an \(N\geq 1\) such that
* There is a \(q\in q_{[\underline{\alpha};\overline{\alpha}]}^{N}\) such that \[\mathbb{P}\big{\{}\sup_{0\leq t\leq T}|X_{N}^{q}(t)-v^{\alpha}(t)|> \varepsilon/|\mathcal{S}|\big{\}}<\delta/|\mathcal{S}|.\]
* For any \(\hat{q}\in\hat{q}_{[\underline{\hat{\alpha}};\overline{\hat{\alpha}}]}^{N}\), there is an \(\hat{\alpha}\in[\underline{\hat{\alpha}};\overline{\hat{\alpha}}]\) such that \[\mathbb{P}\big{\{}\sup_{0\leq t\leq T}|\hat{X}_{N}^{\hat{q}}(t)-\hat{v}^{\hat{\alpha}}(t)|>\varepsilon\big{\}}<\delta.\]
Since \(\mathcal{H}\) is a CCRN species equivalence of \((\mathcal{S},\mathcal{R}_{[\underline{\alpha};\overline{\alpha}]})\), Proposition 3 ensures that there is a \(\hat{q}\in\hat{q}_{[\underline{\hat{\alpha}};\overline{\hat{\alpha}}]}^{N}\) such that the solutions of the forward equations \(\partial_{t}p^{T}=p^{T}q\) and \(\partial_{t}\hat{p}^{T}=\hat{p}^{T}\hat{q}\) satisfy
\[\forall H^{\uparrow}\in\mathcal{H}^{\uparrow}.\,\forall t\geq 0.\,\sum_{ \sigma\in H^{\uparrow}}p_{\frac{1}{N}\sigma}(t)=\hat{p}_{\frac{1}{N}\sigma_{H^ {\uparrow}}}(t)\]
Moreover, for any \(H\in\mathcal{H}\), the first bullet point above and the inequality \(\mathbb{P}\{|\sum_{i=1}^{\nu}Z_{i}|>\eta\}\leq\sum_{i=1}^{\nu}\mathbb{P}\{|Z_ {i}|>\eta/\nu\}\), where \(Z_{i}\) are real random variables, imply that
\[\mathbb{P}\Big{\{}\sup_{0\leq t\leq T}\big{|}\sum_{A\in H}(X_{N}^{q})_{A}(t)- \sum_{A\in H}v_{A}^{\alpha}(t)\big{|}>\varepsilon\Big{\}}<\delta.\]
This and the foregoing choice of \(\hat{q}\) imply for all \(H\in\mathcal{H}\)
\[\mathbb{P}\Big{\{}\sup_{0\leq t\leq T}\big{|}(\hat{X}_{N}^{\hat{q}})_{A_{H}}(t )-\sum_{A\in H}v_{A}^{\alpha}(t)\big{|}>\varepsilon\Big{\}}<\delta.\]
Thanks to the second bullet point from above, we can pick an \(\hat{\alpha}\in[\underline{\hat{\alpha}};\overline{\hat{\alpha}}]\) such that
\[\mathbb{P}\big{\{}\sup_{0\leq t\leq T}|(\hat{X}_{N}^{\hat{q}})_{A_{H}}(t)-\hat{ v}_{A_{H}}^{\hat{\alpha}}(t)|>\varepsilon\big{\}}<\delta.\]
Using again \(\mathbb{P}\{|\sum_{i=1}^{\nu}Z_{i}|>\eta\}\leq\sum_{i=1}^{\nu}\mathbb{P}\{|Z_{i}|>\eta/\nu\}\), the above discussion thus allows us to conclude that
\[\mathbb{P}\big{\{}\sup_{0\leq t\leq T}\big{|}\sum_{A\in H}v_{A}^{\alpha}(t)- \hat{v}_{A_{H}}^{\hat{\alpha}}(t)|>2\varepsilon\big{\}}<2\delta.\]
Since the choice of \(H\in\mathcal{H}\) and \(\varepsilon,\delta>0\) was arbitrary, we obtain 1).
We prove 2) in a similar fashion. Specifically, thanks to Proposition 1-2, we can pick an \(N\geq 1\) such that
* There is a \(\hat{q}\in\hat{q}_{[\underline{\hat{\alpha}};\overline{\hat{\alpha}}]}^{N}\) such that \[\mathbb{P}\big{\{}\sup_{0\leq t\leq T}|\hat{X}_{N}^{\hat{q}}(t)-\hat{v}^{\hat{\alpha}}(t)|>\varepsilon\big{\}}<\delta.\]
* For any \(q\in q_{[\underline{\alpha};\overline{\alpha}]}^{N}\), there is an \(\alpha\in[\underline{\alpha};\overline{\alpha}]\) such that \[\mathbb{P}\big{\{}\sup_{0\leq t\leq T}|X_{N}^{q}(t)-v^{\alpha}(t)|> \varepsilon/|\mathcal{S}|\big{\}}<\delta/|\mathcal{S}|.\]
Using the first bullet point, we pick a \(\hat{q}\) such that for any \(H\in\mathcal{H}\) it holds
\[\mathbb{P}\big{\{}\sup_{0\leq t\leq T}|\hat{X}_{A_{H}}^{\hat{q}}(t)-\hat{v}_{A_{ H}}^{\hat{\alpha}}(t)|>\varepsilon\big{\}}<\delta.\]
Thanks to Proposition 3, we can further pick a \(q\in q_{[\underline{\alpha};\overline{\alpha}]}^{N}\) such that the solutions of the forward equations \(\partial_{t}p^{T}=p^{T}q\) and \(\partial_{t}\hat{p}^{T}=\hat{p}^{T}\hat{q}\) satisfy
\[\forall H^{\uparrow}\in\mathcal{H}^{\uparrow}.\,\forall t\geq 0.\,\sum_{ \sigma\in H^{\uparrow}}p_{\frac{1}{N}\sigma}(t)=\hat{p}_{\frac{1}{N}\sigma_{H^ {\uparrow}}}(t)\]
Combining both statements yields
\[\mathbb{P}\Big{\{}\sup_{0\leq t\leq T}\big{|}\sum_{A\in H}X_{A}^{q}(t)-\hat{v}_{A _{H}}^{\hat{\alpha}}(t)\big{|}>\varepsilon\Big{\}}<\delta.\]
Thanks to the second bullet point from above, we can next pick an \(\alpha\in[\underline{\alpha};\overline{\alpha}]\) such that
\[\mathbb{P}\Big{\{}\sup_{0\leq t\leq T}\big{|}\sum_{A\in H}X_{A}^{q}(t)-\sum_{A \in H}v_{A}^{\alpha}(t)\big{|}>\varepsilon\Big{\}}<\delta\]
The above discussion then yields
\[\mathbb{P}\big{\{}\sup_{0\leq t\leq T}|\sum_{A\in H}v_{A}^{\alpha}(t)-\hat{v}_{A_{ H}}^{\hat{\alpha}}(t)|>2\varepsilon\big{\}}<2\delta.\]
Since the choice of \(H\in\mathcal{H}\) and \(\varepsilon,\delta>0\) was arbitrary, we obtain 2).
Let us next apply Theorem 3 to our example.
Figure 3: Proof strategy of Theorem 3, part 1). The result is proven by approximating the deterministic control systems of the original and the lumped CCRN by means of UCTMCs (Prop. 1 and 2). These, in turn, are shown in Prop. 3 to coincide on the blocks (of the multiset lifting) of an ordinary lumpable partition. Part 2) is proven in a similar fashion by reversing the directions.
**Example 7**.: _The lumped CCRN \((\hat{\mathcal{S}},\hat{\mathcal{R}}_{[\underline{\hat{\alpha}};\overline{\hat{\alpha}}]})\) from Example 4 has the fluid model_
\[\begin{array}{ll}\partial_{t}\hat{v}_{A_{00}}&=-\hat{\alpha}_{1}\hat{v}_{A_{00}}\hat{v}_{B}+\hat{\alpha}_{2}\hat{v}_{A_{01}}\\ \partial_{t}\hat{v}_{A_{01}}&=\hat{\alpha}_{1}\hat{v}_{A_{00}}\hat{v}_{B}-\hat{\alpha}_{2}\hat{v}_{A_{01}}-\hat{\alpha}_{3}\hat{v}_{A_{01}}\hat{v}_{B}+\hat{\alpha}_{4}\hat{v}_{A_{11}}\\ \partial_{t}\hat{v}_{A_{11}}&=\hat{\alpha}_{3}\hat{v}_{A_{01}}\hat{v}_{B}-\hat{\alpha}_{4}\hat{v}_{A_{11}}\\ \partial_{t}\hat{v}_{B}&=-\hat{\alpha}_{1}\hat{v}_{A_{00}}\hat{v}_{B}+\hat{\alpha}_{2}\hat{v}_{A_{01}}-\hat{\alpha}_{3}\hat{v}_{A_{01}}\hat{v}_{B}+\hat{\alpha}_{4}\hat{v}_{A_{11}},\end{array} \tag{11}\]
_where \(\hat{\alpha}\in[\underline{\hat{\alpha}};\overline{\hat{\alpha}}]\). Then, for any \(\varepsilon,\delta>0\), Theorem 3 essentially implies that_
* _for any_ \(\alpha\)_, there is an_ \(\hat{\alpha}\) _such that, with a probability of_ \(1-\delta\) _or higher, it holds that_ \(|v^{\alpha}_{A_{01}}+v^{\alpha}_{A_{10}}-\hat{v}^{\hat{\alpha}}_{A_{01}}|<\varepsilon\) _and_ \(|v^{\alpha}_{S}-\hat{v}^{\hat{\alpha}}_{S}|<\varepsilon\) _for all_ \(S\notin\{A_{01},A_{10}\}\)_;_
* _for any_ \(\hat{\alpha}\)_, there is an_ \(\alpha\) _such that, with a probability of_ \(1-\delta\) _or higher, it holds that_ \(|v^{\alpha}_{A_{01}}+v^{\alpha}_{A_{10}}-\hat{v}^{\hat{\alpha}}_{A_{01}}|<\varepsilon\) _and_ \(|v^{\alpha}_{S}-\hat{v}^{\hat{\alpha}}_{S}|<\varepsilon\) _for all_ \(S\notin\{A_{01},A_{10}\}\)_._
**Remark 3**.: _In contrast to Theorem 2, Theorem 3 does not require the CCRN or its lumping to be regular. Rather, it requires that the reachable set \(\mathfrak{R}\) of the CCRN does not exhibit an explosion on \([0;T]\), a property that can be often established for CRNs via conservation laws [1, 8, 29]._
Since CCRN species equivalence essentially ensures that the trajectories of the fluid models of the original and lumped CCRN coincide, it is natural to extend Theorem 3 to value functions.
**Theorem 4** (Value preservation).: _Additionally to the assumptions made in Theorem 3, introduce_
* _the differentiable running cost_ \(L:\mathbb{R}^{\mathcal{S}}\times\mathbb{R}_{\geq 0}\to\mathbb{R}_{\geq 0}\) _and final cost_ \(K:\mathbb{R}^{\mathcal{S}}\to\mathbb{R}_{\geq 0}\) _and;_
* _the functional_ \(J_{\alpha}(v[0])=\int_{0}^{T}L(t,v^{\alpha}(t))dt+K(v^{\alpha}(T))\)_, where_ \(\partial_{t}v^{\alpha}=f(v^{\alpha},\alpha)\)_,_ \(v(0)=v[0]\) _and_ \(\alpha\in[\underline{\alpha};\overline{\alpha}]\)_;_
* _assume that_ \(\partial_{v_{A}}L=\partial_{v_{B}}L\) _and_ \(\partial_{v_{A}}K=\partial_{v_{B}}K\) _for all_ \(H\in\mathcal{H}\) _and_ \(A,B\in H\)_._
_With this, define for any \(\hat{v}\in\mathbb{R}^{\hat{\mathcal{S}}}_{\geq 0}\) the lumped costs as \(\hat{L}(t,\hat{v})=L(t,v)\) and \(\hat{K}(\hat{v})=K(v)\), where \(v\in\mathbb{R}^{\mathcal{S}}_{\geq 0}\) is arbitrary such that \(\sum_{A\in H}v_{A}=\hat{v}_{A_{H}}\) for all \(H\in\mathcal{H}\). Then, for any initial condition \(v[0]\in I\), almost surely it holds that_
\[\inf_{\alpha}J_{\alpha}(v[0])=\inf_{\hat{\alpha}}\hat{J}_{\hat{\alpha}}(\hat{ v}[0]),\]
_provided that \(\sum_{A\in H}v_{A}[0]=\hat{v}_{A_{H}}[0]\) for all \(H\in\mathcal{H}\). A similar statement holds true for \(\sup\)._
Proof.: The assumptions on the running and final costs ensure that \(L(t,v)=\hat{L}(t,\hat{v})\) and \(K(v)=\hat{K}(\hat{v})\) for all \(\hat{v}\in\mathbb{R}^{\hat{\mathcal{S}}}_{\geq 0}\) and all \(v\in\mathbb{R}^{\mathcal{S}}_{\geq 0}\) satisfying \(\sum_{A\in H}v_{A}=\hat{v}_{A_{H}}\) for all \(H\in\mathcal{H}\). Since \(\mathfrak{R}([0;T])\subseteq B(c)\) and \(L\), \(K\) are Lipschitz continuous on \(B(c)\) as differentiable functions, Theorem 3 ensures that for any initial condition \(v[0]\in I\) and \(\varepsilon,\delta>0\), we have that
\[\mathbb{P}(E(\varepsilon))=\mathbb{P}\big{\{}\big{|}\inf_{\alpha}J_{\alpha}(v[0 ])-\inf_{\hat{\alpha}}\hat{J}_{\hat{\alpha}}(\hat{v}[0])\big{|}>\varepsilon \big{\}}<\delta,\]
provided that \(\sum_{A\in H}v_{A}[0]=\hat{v}_{A_{H}}[0]\) for all \(H\in\mathcal{H}\). Since this implies that \(\mathbb{P}(E(\frac{1}{n}))<\frac{1}{n^{2}}\) for all \(n\geq 1\), the Borel-Cantelli lemma ensures that
\[\mathbb{P}\big{\{}\big{|}\inf_{\alpha}J_{\alpha}(v[0])-\inf_{\hat{\alpha}}\hat{J }_{\hat{\alpha}}(\hat{v}[0])\big{|}>0\big{\}}=0,\]
thus yielding the claim.
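For a fixed control, the preserved quantity \(J_{\alpha}(v[0])\) can be evaluated by augmenting the state with the accumulated running cost. The sketch below uses placeholder callables `f`, `L` and `K` for the drift, the running cost and the final cost.

```python
# Numerically evaluating J_alpha(v0) = int_0^T L(t, v(t)) dt + K(v(T)) for a
# fixed control alpha; f, L and K are placeholders for drift and costs.
from scipy.integrate import solve_ivp

def cost_functional(f, L, K, v0, alpha, T):
    def augmented(t, y):
        v = y[:-1]                          # last coordinate accumulates cost
        return list(f(t, v, alpha)) + [L(t, v)]
    sol = solve_ivp(augmented, (0.0, T), list(v0) + [0.0], rtol=1e-8)
    v_T, running = sol.y[:-1, -1], sol.y[-1, -1]
    return running + K(v_T)
```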
### _Reconstruction of Optimal Controls_
Thanks to Theorem 4, we know that the original and the lumped system coincide on the optimal costs. The next result describes how an optimal control of the original system can be reconstructed from an optimal control of the lumped system.
**Theorem 5** (Control Reconstruction).: _Let us fix a CCRN \((\mathcal{S},\mathcal{R}_{[\underline{\alpha};\overline{\alpha}]})\), a CCRN species equivalence \(\mathcal{H}\) and let \((\hat{\mathcal{S}},\hat{\mathcal{R}}_{[\underline{\hat{\alpha}};\overline{\hat{\alpha}}]})\) be the lumped CCRN. Further, let \(c>0\) and \(T>0\) be such that \(\mathfrak{R}(t)\subseteq B(c)\) for any \(t\in[0;T]\). Then, for any \(\hat{a}\in[\underline{\hat{\alpha}};\overline{\hat{\alpha}}]\), \(v\in B(c)\) and \(\hat{v}\) such that \(\hat{v}_{A_{H}}=\sum_{A\in H}v_{A}\) for all \(H\in\mathcal{H}\), it holds that_
\[\min_{a\in[\underline{\alpha};\overline{\alpha}]}\max_{H\in\mathcal{H}}|\sum_{A\in H}f_{A}(v,a)-\hat{f}_{A_{H}}(\hat{v},\hat{a})|=0.\]
_Additionally, for any optimal solution \(\partial_{t}\hat{v}=\hat{f}(\hat{v},\hat{\alpha})\) of the lumped system, \(\partial_{t}v^{*}(t)=f(v^{*}(t),a(v^{*}(t),t))\) is an optimal solution of the original system, where_
\[a(v,t):=\operatorname*{arg\,min}_{a\in[\underline{\alpha};\overline{\alpha}]}\sum_ {H\in\mathcal{H}}\Big{\|}\sum_{A\in H}f_{A}(v,a)-\hat{f}_{A_{H}}(\hat{v}(t),\hat{ \alpha}(t))\Big{\|}_{2}^{2}\]
_can be computed by means of convex quadratic programming in polynomial time._
Proof.: We begin with the first statement, writing \(v(0)\) and \(\hat{v}(0)\) for \(v\) and \(\hat{v}\), respectively. Thanks to continuity, we can assume without loss of generality that \(v(0)\) is from the interior of \(B(c)\). Following the argumentation from Theorem 4 that invokes the Borel-Cantelli lemma, it suffices to prove that \(\mathbb{P}(E(\eta))<\delta\) for any \(\eta,\delta>0\), where event \(E(\eta)\) is
\[E(\eta)=\{\omega\mid\min_{a\in[\underline{\alpha};\overline{\alpha}]}\max_{H\in\mathcal{H}}|\sum_{A\in H}f_{A}(v,a)-\hat{f}_{A_{H}}(\hat{v},\hat{a})|>\eta\}.\]
To this end, we set \(\varepsilon=\eta^{2}\) and pick, using Theorem 3 for \(T=\eta\), \(v(0)\) and \(\hat{\alpha}(\cdot)=\hat{a}\), some \(\alpha(\cdot)\in[\underline{\alpha};\overline{\alpha}]\) such that, outside an event of probability less than \(\delta\), it holds that \(\max_{H\in\mathcal{H}}\max_{0\leq t\leq\eta}|\sum_{A\in H}v^{\alpha}_{A}(t)-\hat{v}_{A_{H}}(t)|\leq\varepsilon\). Since \(f\) is Lipschitz continuous on \(B(c)\) and \(\varepsilon=\eta^{2}\),
this implies that there exists a constant \(K_{1}\geq 0\), independent of \(\eta\), such that
\[\max_{H\in\mathcal{H}}\Big{|}\sum_{A\in H}f_{A}(v(0),\alpha(0))-\frac{\hat{v}_{A _{H}}(\eta)-\hat{v}_{A_{H}}(0)}{\eta}\Big{|}\leq K_{1}\eta\]
Moreover, applying Lagrange's form of Taylor's theorem to the function \(t\mapsto\hat{v}(t)\) ensures the existence of some \(K_{2}\geq 0\), independent of \(\eta\), such that
\[\max_{H\in\mathcal{H}}\Big{|}\frac{\hat{v}_{A_{H}}(\eta)-\hat{v}_{A_{H}}(0)}{\eta}-\hat{f}_{A_{H}}(\hat{v}(0),\hat{a})\Big{|}\leq K_{2}\eta\]
Overall, the discussion implies
\[\min_{a\in[\underline{\alpha};\overline{\alpha}]}\max_{H\in\mathcal{H}}\Big{|}\sum_{A\in H}f_{A}(v(0),a)-\hat{f}_{A_{H}}(\hat{v}(0),\hat{a})\Big{|}\leq K_{3}\eta\]
for some \(K_{3}\geq 0\) that does not depend on \(\eta\). Taking \(\eta\to 0\) then yields the first statement. We next sketch the proof of the second statement. To this end, we approximate \(a(\cdot,\cdot)\) by a Lipschitz continuous function \(\bar{a}(\cdot,\cdot)\) given by \(\bar{a}(v,t):=a(v,t)\) if \(v\) is a point of a grid with mesh size \(\tau>0\), that is
\[v\in G_{\tau}=B(c)\cap\tau\,\mathbb{Z}^{\mathcal{S}};\]
for \(v\notin G_{\tau}\), instead, we define \(\bar{a}\) via an interpolation which ensures Lipschitzianity of \(\bar{a}\), e.g., as a weighted sum of values at the grid points closest to \(v\). Thanks to the definition of \(a\) and the fact that \(B(c)\) is a compactum, it can be shown that there exists a \(K_{4}\geq 0\), independent of \(\tau\), such that
\[\sup_{\begin{subarray}{c}v\in B(c)\\ t\in[0;T]\end{subarray}}\sum_{H\in\mathcal{H}}\Big{|}\sum_{A\in H}f_{A}(v,a(v, t))-\sum_{A\in H}f_{A}(v,\bar{a}(v,t))\Big{|}\leq K_{4}\tau\]
Since \(\bar{a}(\cdot,\cdot)\) is Lipschitz continuous on \(B(c)\times[0;T]\), there exists a unique solution of \(\partial_{t}v^{\star}(t)=f(v^{\star}(t),\bar{a}(v^{\star}(t),t))\). Moreover, the theory of differential equations (e.g., Gronwall's inequality) ensures that there is a constant \(K_{5}\geq 0\), independent of \(\tau\), such that \(\|\sum_{A\in H}v^{\star}_{A}(t)-\hat{v}_{A_{H}}(t)\|_{2}\leq K_{5}\tau\) for all \(H\in\mathcal{H}\) and \(0\leq t\leq T\). This completes the proof.
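Concretely, for mass-action kinetics the drift is linear in the rate vector \(a\), so the \(\arg\min\) of Theorem 5 is a box-constrained linear least-squares problem, i.e., one standard form of a convex QP. The sketch below is our own and solves it with SciPy's `lsq_linear`; `target` collects the lumped drifts \(\hat{f}_{A_{H}}(\hat{v}(t),\hat{\alpha}(t))\).

```python
# Control reconstruction for mass-action kinetics, where
# f(v, a) = sum_r a_r (pi_r - rho_r) m_r(v) with monomials
# m_r(v) = prod_A v_A^rho_r(A) / rho_r(A)!. The argmin over a in [a_lo; a_hi]
# is then a box-constrained least-squares problem (a convex QP).
import math
import numpy as np
from scipy.optimize import lsq_linear

def reconstruct_control(reactions, blocks, v, target, a_lo, a_hi):
    # reactions: list of (rho, pi) integer vectors; blocks: lists of indices.
    M = np.zeros((len(blocks), len(reactions)))
    for r, (rho, pi) in enumerate(reactions):
        m = math.prod(v[A] ** rho[A] / math.factorial(rho[A])
                      for A in range(len(v)))
        change = np.asarray(pi) - np.asarray(rho)
        for h, block in enumerate(blocks):
            M[h, r] = m * change[block].sum()  # coefficient of a_r in sum_{A in H} f_A
    return lsq_linear(M, np.asarray(target, dtype=float), bounds=(a_lo, a_hi)).x
```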
## V Evaluation
We apply our framework to two families of models, one from epidemiology (and networks), and one from biology. The former class is used as an example for control and reachability, while the latter for reachability only. Algorithm 1 has been implemented in ERODE [14], which already supports the CRN lumping algorithm from [29]. The experiments were run on a 3.22 GHz machine assigning 6 GB of RAM to ERODE. In all cases, two iterations of Algorithm 1 were sufficient.
### _SIR models over Networks_
_Scalability analysis:_ Disease spread over networks is often modeled by variants of the susceptible-infected-recovered (SIR) model [15] over graphs [29]. Here, we study an SIR variant with vaccination [37] over a star topology with \(n\) locations (\(5000\) to \(50000\) with step \(5000\) in Table I). The respective reactions are
\[S_{i} \xrightarrow{[\underline{\alpha};\overline{\alpha}]}R_{i}+V_{i}, \qquad S_{i}+I_{j} \xrightarrow{b_{ij}\beta}I_{i}+I_{j},\] \[I_{i} \xrightarrow{\gamma}R_{i}, R_{i} \xrightarrow{\eta}S_{i},\]
where \(i,j\in\{1,\ldots,n\}\). The first reaction models vaccination, the second captures infection across different locations, the third recovery, while the fourth corresponds to the loss of immunity. Subscripts denote locations and \(B=(b_{i,j})\) is the adjacency matrix of the graph representing the network topology, with \(b_{ij}>0\) denoting the presence of an edge between node \(i\) and \(j\). The auxiliary species \(V_{i}\) keep track of the vaccinated. Parameters \(\beta,\gamma,\eta\) were chosen as in [29], while vaccination bounds were set to \(\underline{\alpha}=0\) and \(\overline{\alpha}=1\) for lack of a better alternative. By identifying \(i=1\) as the center of the network, the aforementioned star topology can be realized by setting \(b_{1,i}=b_{i,1}=1\) for all \(i\in\{2,\ldots,n\}\), and \(b_{k,l}=0\) for all other cases. This means that infections can occur only between the center node and the other nodes, while infections within the same location are excluded. The intuition is that each node represents an individual that can get infected by interacting with others according to the network topology. We applied our lumping algorithm starting from the initial partition \(\mathcal{G}=\{\{S_{1}\},\{I_{1}\},\{R_{1}\},\{V_{1},\ldots,V_{n}\},\{\text{remaining variables}\}\}\). Its lumped CCRN is given by the same reactions, but for \(n=2\). The original star topology with \(n\) locations is thus reduced to one with \(2\) locations.
\[J=\omega_{1}\int_{0}^{T}\sum_{i=1}^{n}v_{I_{i}}(s)ds+\omega_{2}\sum_{i=1}^{n}v_ {V_{i}}(T),\]
where \(\omega_{1}\) and \(\omega_{2}\) are non-negative weights. Intuitively, the cost aims at minimizing the spread of infection while using a minimal amount of vaccination. With this, Theorem 4 ensures that the optimal value of the original CCRN of size \(4n\) can be obtained by optimizing the lumped one of size \(7\).
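Generating the benchmark is mechanical; the sketch below builds the reaction list of the star topology, with `beta`, `gamma`, `eta` standing in for the parameter values taken from [29] and the vaccination bounds \([0;1]\) encoded as interval rates.

```python
# SIR-with-vaccination CCRN over a star with n locations. Species are strings;
# controlled vaccination reactions carry the interval rate (0.0, 1.0), while
# beta, gamma, eta are placeholders for the parameters chosen as in [29].
from collections import Counter

def sir_star(n, beta, gamma, eta):
    reactions = []
    for i in range(1, n + 1):
        reactions.append((Counter({f"S{i}": 1}),
                          Counter({f"R{i}": 1, f"V{i}": 1}), (0.0, 1.0)))
        reactions.append((Counter({f"I{i}": 1}), Counter({f"R{i}": 1}), gamma))
        reactions.append((Counter({f"R{i}": 1}), Counter({f"S{i}": 1}), eta))
    for i in range(2, n + 1):               # star edges: center 1 <-> leaf i
        for a, b in ((1, i), (i, 1)):
            reactions.append((Counter({f"S{a}": 1, f"I{b}": 1}),
                              Counter({f"I{a}": 1, f"I{b}": 1}), beta))
    return reactions
```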
Overall, Table I shows that, for the considered family of models, the runtime of our technique scales well with the model size, taking less than 2 seconds for a model with 150 thousand variables.
_Reduction power analysis:_ Here we perform an analysis akin to the one in the foregoing paragraph. Specifically, we study the SIR model with vaccination, but this time over real-world networks taken from the Netzschleuder repository [38]. The rationale is that, after having studied the runtime performance of CCRN species equivalence, here we focus on the reduction power of our technique in realistic settings. We consider all weighted networks from the repository with at most 52000 nodes. Some of the networks are directed, while the others are undirected. We implicitly transform the latter ones by replacing every undirected edge with two corresponding directed ones with the same weight. Overall, we considered 1558 real networks from the repository. The largest considered network contains 51919 nodes, corresponding to a CCRN with 155757 variables and 330149 reactions, on which our reduction algorithm took about 50 minutes on a standard laptop machine.
All reactions apart from the one modeling infections remain as in the star topology case, including the parameters and the bounds \(\underline{\alpha}=0\) and \(\overline{\alpha}=1\). As regards infections, for every edge from node \(i\) to node \(j\) with weight \(w_{ij}\) we add a reaction
\[S_{j}+I_{i}\xrightarrow{w_{ij}}I_{j}+I_{i}\]
In other words, in our experiment we interpret the presence of an edge from node \(i\) to \(j\) as the possibility for individual \(i\) (\(I_{i}\)) to infect individual \(j\) (\(S_{j}\)). On these models, we applied our lumping algorithm starting from an initial partition with blocks that separate the types of variables across all nodes:
\[\mathcal{G}=\{\{S_{i}\mid i\leq n\},\{I_{i}\mid i\leq n\},\{R_{i}\mid i\leq n\},\{V_{i}\mid i\leq n\}\}\]
with \(n\) the number of nodes in the considered network. The cost can be taken as in the case of the star topology. More generally, for any reported lumping \(\mathcal{H}\), any costs \(L,K\) satisfying the assumptions of Theorem 4 are applicable, e.g., costs that try to minimize the cumulative infection in a specific block of \(\mathcal{H}\).
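Deriving the infection reactions from a weighted, directed edge list is equally direct; a minimal sketch:

```python
# Infection reactions from a weighted directed edge list [(i, j, w_ij), ...]:
# an edge i -> j lets individual i (I_i) infect individual j (S_j).
from collections import Counter

def infection_reactions(edges):
    return [(Counter({f"S{j}": 1, f"I{i}": 1}),
             Counter({f"I{j}": 1, f"I{i}": 1}), w) for i, j, w in edges]
```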
The results are summarized in Figure 4. We define the reduction ratio of a model as the number of reduced variables over that of original ones (the auxiliary species \(V_{i}\) have been dropped to provide a cleaner picture). Overall, 877 models could be reduced (i.e., have a reduction ratio smaller than 1), while 681 were not reduced (reduction ratio = 1). Figure 4 (a) focuses on the 877 models that admitted reduction, sorted by reduction ratio. We can see that about 250 models could be reduced to less than half the original number of variables. This is visualized better in Figure 4 (b). Here we count how many models have a reduction ratio within ten intervals from [0.0;0.1] (the bar from 0 to 10), to [0.9;1.0] (the bar from 90 to 100). The right-most bar refers to the 681 models that did not admit any reduction.
Overall, more than 56% of the models admitted reduction. Among these, about 28% admitted substantial reductions obtaining a reduction ratio smaller than 0.4.
_Impact of uncertain weights on reduction power:_ In this experiment, rather than focusing on the control problem of vaccination, we study the impact of weight uncertainty on the reduction power of our technique. To this end, we perform a new analysis of the SIR vaccination model over networks from [38] by fixing the vaccination rate (to 1), while assuming that there is uncertainty in the weights of the 1558 networks considered (we use an arbitrary interval of width \(0.05\) centered at the weights' values, chosen to ensure that the intervals remain positive). The results are summarized in Figure 5. Similarly to Figure 4(b), we group models by reduction ratio. In particular, Figure 5(a) considers models without uncertainty on the weights, while Figure 5(b) considers models with uncertainty on the weights. We can see that the absolute number of reducible models is not affected (in both cases, 671 models could not be reduced at all). Likewise, mild reductions with reduction ratios from 0.7 to 1.0 are not affected either. Considering the cases with lower reduction
\begin{table}
\begin{tabular}{c c c c c c c c c c c} \hline \hline \multicolumn{11}{c}{_Spatial SIR with vaccination for \(\mathcal{G}=\{\{S_{1}\},\{I_{1}\},\{R_{1}\},\{V_{1},\ldots,V_{n}\},\{\text{remaining variables}\}\}\). Reductions have 7 state variables._} \\ \hline \hline \(n\) & 5000 & 10000 & 15000 & 20000 & 25000 & 30000 & 35000 & 40000 & 45000 & 50000 \\ State variables & 20000 & 40000 & 60000 & 80000 & 100000 & 120000 & 140000 & 160000 & 180000 & 200000 \\ Lumping time (ms) & 82 & 301 & 421 & 618 & 622 & 754 & 1045 & 1192 & 1276 & 1824 \\ \hline \hline \end{tabular}
\end{table} TABLE I: Running times of Algorithm 1 for SIR on star networks.
Fig. 4: CCRN lumping of SIR-vaccination model over weighted networks from [38]. Reduction ratios are given as number of reduced variables over original ones.
Fig. 5: CCRN lumping of SIR-vaccination model over weighted networks from [38]. Reduction ratios are given as number of reduced variables over original ones. All 1558 models grouped by reduction ratio. The gray numbers count the models in the corresponding range.
ratio, we can clearly see a similar pattern in the two figures, shifted to the right in the case of uncertain weights: reduction ratios in the range 0.1 to 0.4 appear to get shifted to the range 0.3 to 0.7.
### _Protein-interaction Networks_
Models of signaling pathways often exhibit a rapid growth in the number of species and reactions because of distinct molecule configurations [39, 40]. A possible instance of this is an extension of Example 1 to \(n\) binding sites (\(9\leq n\leq 18\) in Table II), yielding species \(\mathcal{S}=\{B\}\cup\{A_{b}\mid b\in\{0,1\}^{n}\}\) and reactions
\[A_{b}+B\xrightarrow{[\underline{\alpha}_{a};\overline{\alpha}_{a}]}A_{b+e_{i}},\qquad b_{i}=0,\] \[A_{b}\xrightarrow{[\underline{\alpha}_{d};\overline{\alpha}_{d}]}A_{b-e_{i}}+B,\qquad b_{i}=1,\]
where \(e_{i}\) denotes the \(i\)-th unit vector, and the subscripts \(a\) and \(d\) denote association and dissociation with parameter bounds \([9.95;10.05]\) and \([0.05;0.15]\), respectively. The bounds are in accordance with the exact values \(10\) and \(0.1\) from [40]. Similarly to Example 1 with two binding sites, it can be shown that \(\mathcal{H}=\big{\{}\{A_{b}\mid|b|=0\},\,\ldots,\,\{A_{b}\mid|b|=n\},\,\{B\}\big{\}}\) is a CCRN species equivalence. The lumped CCRN can then be described by \(\hat{\mathcal{S}}=\{B,A_{0},\ldots,A_{n}\}\) and the reactions
\[A_{i}+B\xrightarrow{[\underline{\alpha}_{a};\overline{\alpha}_{a}]}A_{i+1},\qquad 0\leq i<n,\] \[A_{i}\xrightarrow{[\underline{\alpha}_{d};\overline{\alpha}_{d}]}A_{i-1}+B,\qquad 0<i\leq n.\]
That is, similarly to Example 1, the CCRN species equivalence keeps track of the number of occupied binding sites rather than the actual configuration of each binding site. The largest considered model has about 250000 variables, requiring about 10 seconds. The running times are summarized in Table II.
Overall, Theorem 4 ensures that the reachable set of the original CCRN of size \(2^{n}+1\) coincides with that of the lumped CCRN of size \(n+2\) on the blocks of \(\mathcal{H}\). The lumped CCRN can be over-approximated by known techniques like [5, 7].
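Both the combinatorial model and its lumped chain are easy to generate programmatically. The sketch below transcribes the reactions displayed above (as in the display, multiplicities arising from fusing are not made explicit); the rate bounds follow the text.

```python
# The n-site binding CCRN (2^n + 1 species) and its lumped chain (n + 2
# species). Association and dissociation bounds as in the text. A
# configuration is a bit tuple b in {0,1}^n, named e.g. "A010".
from itertools import product

ASSOC, DISSOC = (9.95, 10.05), (0.05, 0.15)
name = lambda b: "A" + "".join(map(str, b))

def multisite_ccrn(n):
    reactions = []
    for b in product((0, 1), repeat=n):
        for i in range(n):
            flipped = b[:i] + (1 - b[i],) + b[i + 1:]
            if b[i] == 0:   # A_b + B -> A_{b + e_i}
                reactions.append(((name(b), "B"), (name(flipped),), ASSOC))
            else:           # A_b -> A_{b - e_i} + B
                reactions.append(((name(b),), (name(flipped), "B"), DISSOC))
    return reactions

def lumped_chain(n):
    # Lumped species A_0, ..., A_n count the occupied binding sites.
    ups = [((f"A{i}", "B"), (f"A{i+1}",), ASSOC) for i in range(n)]
    downs = [((f"A{i}",), (f"A{i-1}", "B"), DISSOC) for i in range(1, n + 1)]
    return ups + downs
```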
## VI Conclusion
We introduced a model reduction technique for controlled chemical reaction networks (CCRNs) whose kinetic reaction parameters are subject to control or disturbance. The smallest (lumped) CCRN can be computed in polynomial time and is shown to preserve the optimal costs of the original CCRN. The applicability has been demonstrated by reducing the reachability and control problems of protein-interaction networks and vaccination models over networks with hundreds of thousands of variables. In the latter case, the runtime scalability has been demonstrated on synthetic networks with star topology, while the aggregation power in practice has been demonstrated by considering real-world weighted networks.
The proposed framework is holistic in that it can be used as a precomputation step before any optimization approach. In case the reduced model is sufficiently small, global optimization techniques such as the Hamilton-Jacobi-Bellman equations [5, 26] or reachability analysis tools such as [7, 21, 41] can be invoked. If the reduced model is still too large for global optimization techniques, local optimization approaches such as the functional gradient descent, also known as Pontryagin's maximum principle [19], can be invoked. While the principle has recently gained momentum in AI by training so-called neural ordinary differential equations [42], its computational complexity is at least quadratic in the size of the model, thus justifying the need for optimality-preserving model reduction techniques. Likewise, heuristic approaches involving sampling and simulation, as commonly used in systems biology [43], can profit from optimality-preserving reductions as well.
## VII Proof of Proposition 2
Proof.: We denote a reaction \(r=(\rho\xrightarrow{[\underline{\alpha}_{r};\overline{\alpha}_{r}]}\pi)\in\mathcal{R}_{[\underline{\alpha};\overline{\alpha}]}\) simply by \(\rho\longrightarrow\pi\) since the range of its reaction rate is clear from the context. For a given \(N\geq 1\), fix \(q\in q^{N}_{[\underline{\alpha};\overline{\alpha}]}\). For every \(\sigma,\theta\in\frac{1}{N}\mathbb{N}^{\mathcal{S}}_{0}\) with \(\theta\neq\sigma\), the \((\sigma,\theta)\) entry of \(q\) is \(q(\sigma,\theta)\in g(\sigma)\cdot[q_{\underline{\alpha}^{N}}(N\sigma,N\theta);q_{\overline{\alpha}^{N}}(N\sigma,N\theta)]\), so it has the form that we now describe. For every reaction \(r=(\rho\rightarrow\pi)\in\mathcal{R}\) such that \(\theta=\sigma+\frac{1}{N}(\pi-\rho)\) there is \(\alpha_{r}=\alpha_{r}(N,\sigma,\theta,t)\in[\underline{\alpha}_{r};\overline{\alpha}_{r}]\) with \(t\in[0;T]\) such that
\[q(\sigma,\theta)=g(\sigma)\cdot\sum_{\begin{subarray}{c}r=(\rho \rightarrow\pi)\in\mathcal{R}\\ \theta=\sigma+\frac{1}{N}(\pi-\rho)\end{subarray}}\frac{\alpha_{r}(N,\sigma, \theta,t)}{N^{|\rho|-1}}\binom{N\sigma}{\rho} \tag{12}\]
In particular, for a given \(\sigma\in\frac{1}{N}\mathbb{N}^{\mathcal{S}}_{0}\), one has \(q(\sigma,\theta)\neq 0\) only for finitely many \(\theta\in\frac{1}{N}\mathbb{N}^{\mathcal{S}}_{0}\). We begin the proof by checking that our scaled UCTMCs satisfy [13, Definition 4 (i)-(iii)], in order to then apply [13, Theorem 1].
(i) We show that for every \(N\) we have
\[s:=\sup_{\sigma\in\frac{1}{N}\mathbb{N}^{\mathcal{S}}_{0}}\sup_{q\in q^{N}_{[\underline{\alpha};\overline{\alpha}]}}|q(\sigma,\sigma)|<\infty.\]
As \(g(\sigma)=0\) for every \(\sigma\in\frac{1}{N}\mathbb{N}^{\mathcal{S}}_{0}\) with \(|\sigma|\geq 2c\), we have
\[s=\sup_{\sigma\in\frac{1}{N}\mathbb{N}^{\mathcal{S}}_{0}}\sup_{q\in q^{N}_{[\underline{\alpha};\overline{\alpha}]}}\sum_{\sigma\neq\theta\in\frac{1}{N}\mathbb{N}^{\mathcal{S}}_{0}}q(\sigma,\theta)=\sup_{\begin{subarray}{c}\sigma\in\frac{1}{N}\mathbb{N}^{\mathcal{S}}_{0}\\ |\sigma|\leq 2c\end{subarray}}g(\sigma)\cdot\sum_{\sigma\neq\theta\in\frac{1}{N}\mathbb{N}^{\mathcal{S}}_{0}}q_{\overline{\alpha}^{N}}(N\sigma,N\theta).\]
The number of elements \(\sigma\in\frac{1}{N}\mathbb{N}^{\mathcal{S}}_{0}\) with \(|\sigma|\leq 2c\) is finite, and for each of them the term \(\sum_{\sigma\neq\theta\in\frac{1}{N}\mathbb{N}^{\mathcal{S}}_{0}}q_{\overline{ \alpha}^{N}}(N\sigma,N\theta)\) is a finite sum, so \(s\) is finite.
\begin{table}
\begin{tabular}{c c c c c c c c c c c} \hline \hline \multicolumn{11}{c}{_Protein-interaction networks for \(\mathcal{G}=\{\mathcal{S}\}\). Reductions have \(n+2\) state variables._} \\ \hline \(n\) & 9 & 10 & 11 & 12 & 13 & 14 & 15 & 16 & 17 & 18 \\ State variables & 513 & 1025 & 2049 & 4097 & 8193 & 16385 & 32769 & 65537 & 131073 & 262145 \\ Lumping time (ms) & 63 & 81 & 88 & 96 & 253 & 430 & 841 & 2157 & 5024 & 10582 \\ \hline \hline \end{tabular}
\end{table} TABLE II: Running times of Algorithm 1 on protein-interaction networks.
(ii) For \(\varepsilon\geq 0\), let
\[\Phi_{\varepsilon}(N)=\sup_{\sigma\in\frac{1}{N}\mathbb{N}_{0}^{\mathcal{S}}}\sup_{q\in q^{N}_{[\underline{\alpha};\overline{\alpha}]}}\sum_{\theta\in\frac{1}{N}\mathbb{N}_{0}^{\mathcal{S}}}q(\sigma,\theta)|\theta-\sigma|^{1+\varepsilon}.\]
Here we prove that \(\lim_{N\to\infty}\Phi_{\varepsilon}(N)=0\) for every \(\varepsilon>0\), and to this end it is sufficient to show that
\[\Phi_{\varepsilon}(N)\text{ is }O(N^{-\varepsilon})\text{ as }N\to\infty,\text{ for every } \varepsilon\geq 0. \tag{13}\]
Fix \(\varepsilon\geq 0\). As \(g(\sigma)\leq 1\) for every \(\sigma\in\frac{1}{N}\mathbb{N}_{0}^{\mathcal{S}}\), and \(g(\sigma)=0\) when \(|\sigma|\geq 2c\), using (12) we obtain
\[\Phi_{\varepsilon}(N)\leq\sup_{\begin{subarray}{c}\sigma\in\frac{1}{N}\mathbb{N}_{0}^{\mathcal{S}}\\ |\sigma|\leq 2c\end{subarray}}\sum_{\begin{subarray}{c}r=(\rho\to\pi)\in\mathcal{R}\\ \sigma+\frac{\pi-\rho}{N}\in\frac{1}{N}\mathbb{N}_{0}^{\mathcal{S}}\end{subarray}}\frac{\overline{\alpha}_{r}}{N^{|\rho|-1}}\cdot\binom{N\sigma}{\rho}\cdot\Big{|}\frac{\pi-\rho}{N}\Big{|}^{1+\varepsilon}=\sup_{\begin{subarray}{c}\sigma\in\frac{1}{N}\mathbb{N}_{0}^{\mathcal{S}}\\ |\sigma|\leq 2c\end{subarray}}\sum_{\begin{subarray}{c}r=(\rho\to\pi)\in\mathcal{R}\\ \sigma+\frac{\pi-\rho}{N}\in\frac{1}{N}\mathbb{N}_{0}^{\mathcal{S}}\end{subarray}}\overline{\alpha}_{r}\,|\pi-\rho|^{1+\varepsilon}\cdot\prod_{A\in\mathcal{S}}\frac{\binom{N\sigma(A)}{\rho(A)}}{N^{\rho(A)}}\cdot N^{-\varepsilon}.\]
It can be shown that for every \(\rho\in\rho(\mathcal{R})\) and \(\sigma\in\frac{1}{N}\mathbb{N}_{0}^{\mathcal{S}}\) with \(|\sigma|\leq 2c\) one has
\[\frac{\binom{N\sigma(A)}{\rho(A)}}{N^{\rho(A)}}=\frac{\sigma(A)^{\rho(A)}}{ \rho(A)!}+O(N^{-1})\leq\frac{(2c)^{\rho(A)}}{\rho(A)!}+O(N^{-1}). \tag{14}\]
Then one can find a constant \(K\geq 0\) depending only on the set \(\mathcal{R}\) of reactions (and on \(c\)) such that for every reaction \((\rho\to\pi)\in\mathcal{R}\) one has
\[|\pi-\rho|^{1+\varepsilon}\prod_{A\in\mathcal{S}}\frac{1}{N^{\rho(A)}}\binom{N \sigma(A)}{\rho(A)}\leq K+O(N^{-1})\]
for every \(N\) that is big enough. Then for such \(N\) we obtain
\[\Phi_{\varepsilon}(N)\leq\sum_{r=(\rho\to\pi)\in\mathcal{R}}\overline{\alpha} _{r}\big{(}K+O(N^{-1})\big{)}N^{-\varepsilon}.\]
As the right-hand side above is \(O(N^{-\varepsilon})+O(N^{-1-\varepsilon})=O(N^{-\varepsilon})\) for \(N\to\infty\), we deduce (13).
(iii) We have to check that \(\limsup_{N\to\infty}\Phi_{0}(N)<\infty\), which readily follows from (13). This finishes the verification of [13, Definition 4 (i)-(iii)].
To apply [13, Theorem 1], we also need to show that the drifts \(f^{N}\) of the CTMCs \(\left(\frac{1}{N}\mathbb{N}_{0}^{\mathcal{S}},q^{N}\right)\), where \(q^{N}\in q^{N}_{[\underline{\alpha};\overline{\alpha}]}\), describe for \(N\to\infty\) the (upper semicontinuous) differential inclusion
\[F(v)=\bigcup_{\alpha\in[\underline{\alpha};\overline{\alpha}]}g(v)\cdot f(v,\alpha)\]
For this, fix \(q\in q^{N}_{[\underline{\alpha};\overline{\alpha}]}\) and \(\sigma\in\frac{1}{N}\mathbb{N}_{0}^{\mathcal{S}}\). Then (12) gives
\[f^{N}(\sigma,q)=\sum_{\theta\in\frac{1}{N}\mathbb{N}_{0}^{\mathcal{S}}}q(\sigma,\theta)(\theta-\sigma)=g(\sigma)\cdot\sum_{r=(\rho\to\pi)\in\mathcal{R}}\frac{1}{N}(\pi-\rho)\cdot\frac{\alpha_{r}}{N^{|\rho|-1}}\cdot\prod_{A\in\mathcal{S}}\binom{N\sigma(A)}{\rho(A)}=g(\sigma)\cdot\sum_{r=(\rho\to\pi)\in\mathcal{R}}(\pi-\rho)\cdot\alpha_{r}\cdot\prod_{A\in\mathcal{S}}\frac{\binom{N\sigma(A)}{\rho(A)}}{N^{\rho(A)}}.\]
Then the drift of the UCTMC \(X_{N}=(\frac{1}{N}\mathbb{N}_{0}^{\mathcal{S}},q^{N}_{[\underline{\alpha}; \overline{n}]})\) at \(\sigma\) is
\[\bigcup_{q\in q^{N}_{[\underline{\alpha};\overline{\alpha}]}}f^{N}(\sigma,q)=\bigcup_{\alpha\in[\underline{\alpha};\overline{\alpha}]}f^{N}(\sigma,q^{N}_{\alpha})=g(\sigma)\cdot\bigcup_{\alpha\in[\underline{\alpha};\overline{\alpha}]}\sum_{r=(\rho\to\pi)\in\mathcal{R}}(\pi-\rho)\cdot\alpha_{r}\cdot\prod_{A\in\mathcal{S}}\frac{\binom{N\sigma(A)}{\rho(A)}}{N^{\rho(A)}}.\]
Now fix \(v\in\mathbb{R}_{\geq 0}^{\mathcal{S}}\) and let \(\sigma^{N}\in\frac{1}{N}\mathbb{N}_{0}^{\mathcal{S}}\) be such that \(\lim_{N\to\infty}\sigma^{N}=v\). As the function \(g\) is continuous, we have \(\lim_{N\to\infty}g(\sigma^{N})=g(v)\). Similarly to (14), for any \(\rho\in\rho(\mathcal{R})\)
\[\lim_{N\to\infty}\frac{\binom{N\sigma^{N}(A)}{\rho(A)}}{N^{\rho(A)}}=\frac{v_{A} ^{\rho(A)}}{\rho(A)!},\]
so the limit drift as \(N\to\infty\) is
\[\lim_{N\to\infty}\bigcup_{q\in q^{N}_{[\underline{\alpha};\overline{\alpha}]}}f^{N}(\sigma^{N},q)=\lim_{N\to\infty}g(\sigma^{N})\cdot\bigcup_{\alpha\in[\underline{\alpha};\overline{\alpha}]}\sum_{r=(\rho\to\pi)\in\mathcal{R}}(\pi-\rho)\cdot\alpha_{r}\cdot\prod_{A\in\mathcal{S}}\frac{\binom{N\sigma^{N}(A)}{\rho(A)}}{N^{\rho(A)}}=\bigcup_{\alpha\in[\underline{\alpha};\overline{\alpha}]}g(v)\cdot f(v,\alpha)=F(v).\]
The above discussion allows us to apply [13, Theorem 1] which, in turn, yields the claim, see also [44, Theorem 3.2].
|
2302.11552 | Reduce, Reuse, Recycle: Compositional Generation with Energy-Based
Diffusion Models and MCMC | Since their introduction, diffusion models have quickly become the prevailing
approach to generative modeling in many domains. They can be interpreted as
learning the gradients of a time-varying sequence of log-probability density
functions. This interpretation has motivated classifier-based and
classifier-free guidance as methods for post-hoc control of diffusion models.
In this work, we build upon these ideas using the score-based interpretation of
diffusion models, and explore alternative ways to condition, modify, and reuse
diffusion models for tasks involving compositional generation and guidance. In
particular, we investigate why certain types of composition fail using current
techniques and present a number of solutions. We conclude that the sampler (not
the model) is responsible for this failure and propose new samplers, inspired
by MCMC, which enable successful compositional generation. Further, we propose
an energy-based parameterization of diffusion models which enables the use of
new compositional operators and more sophisticated, Metropolis-corrected
samplers. Intriguingly we find these samplers lead to notable improvements in
compositional generation across a wide set of problems such as
classifier-guided ImageNet modeling and compositional text-to-image generation. | Yilun Du, Conor Durkan, Robin Strudel, Joshua B. Tenenbaum, Sander Dieleman, Rob Fergus, Jascha Sohl-Dickstein, Arnaud Doucet, Will Grathwohl | 2023-02-22T18:48:46Z | http://arxiv.org/abs/2302.11552v6 | # Reduce, Reuse, Recycle: Compositional Generation with Energy-Based Diffusion Models and MCMC
###### Abstract
Since their introduction, diffusion models have quickly become the prevailing approach to generative modeling in many domains. They can be interpreted as learning the gradients of a time-varying sequence of log-probability density functions. This interpretation has motivated classifier-based and classifier-free guidance as methods for post-hoc control of diffusion models. In this work, we build upon these ideas using the score-based interpretation of diffusion models, and explore alternative ways to condition, modify, and reuse diffusion models for tasks involving compositional generation and guidance. In particular, we investigate why certain types of composition fail using current techniques and present a number of solutions. We conclude that the sampler (not the model) is responsible for this failure and propose new samplers, inspired by MCMC, which enable successful compositional generation. Further, we propose an energy-based parameterization of diffusion models which enables the use of new compositional operators and more sophisticated, Metropolis-corrected samplers. Intriguingly we find these samplers lead to notable improvements in compositional generation across a wide set of problems such as classifier-guided ImageNet modeling and compositional text-to-image generation. Project web-page: [https://energy-based-model.github.io/reduce-reuse-recycle/](https://energy-based-model.github.io/reduce-reuse-recycle/).
## 1 Introduction
In recent years, tremendous progress has been made in generative modeling across a variety of domains (Brown et al., 2020; Brock et al., 2018; Ho et al., 2020). These models now serve as powerful priors for downstream applications such as code generation (Li et al., 2022), text-to-image generation (Saharia et al., 2022), question-answering (Brown et al., 2020) and many more. However, to fit this complex data, generative models have grown inexorably larger (requiring 10's or even 100's of billions of parameters) (Kaplan et al., 2020) and require datasets containing non-negligible fractions of the entire internet, making it costly and difficult to train and or finetune such models. Despite this, some of the most compelling applications of large generative models do not rely on finetuning. For example, prompting (Brown et al., 2020) has been a successful strategy to selectively extract insights from large models. In this paper, we explore an alternative to finetuning and prompting, through which we may repurpose the underlying prior learned by generative models for downstream tasks.
Diffusion Models (Sohl-Dickstein et al., 2015; Song and Ermon, 2019; Ho et al., 2020) are a recently popular approach to generative modeling which have demonstrated a favorable combination of scalability, sample quality, and log-likelihood. A key feature of diffusion models is the ability for their sampling to be "guided" after training. This involves combining the pre-trained Diffusion Model \(p_{\theta}(x)\) with a predictive model \(p_{\theta}(y|x)\) to generate samples from \(p_{\theta}(x|y)\). This predictive model can be either explicitly defined (such as a pre-trained classifier) (Sohl-Dickstein et al., 2015; Dhariwal and Nichol, 2021) or an implicit predictive model defined through the combination of a conditional and unconditional generative model (Ho and Salimans, 2022). These forms of conditioning are particularly appealing (especially the former) as they allow us to reuse pre-trained generative models for many downstream applications, beyond those considered at training time.
These conditioning methods are a form of model composition, i.e., combining probabilistic models together to create new models. Compositional models have a long history dating back to early work on Mixtures-Of-Experts (Jacobs et al., 1991) and Product-Of-Experts models (Hinton, 2002; Mayraz and Hinton, 2000). Here, many simple models or predictors were combined to increase their capacity. Much of this early work on model composition was done in the context of Energy-Based Models (Hinton, 2002), an alternative class of generative model which bears many similarities to diffusion models.
In this work, we explore the ways that diffusion models can be reused and composed with one another. First, we
introduce a set of methods which allow pre-trained diffusion models to be composed, with one another and with other models, to create new models without retraining. Second, we illustrate how existing methods for composing diffusion models are not fully correct, and propose a remedy to these issues with MCMC-derived sampling. Next, we propose the use of an energy-based parameterization for diffusion models, where the unnormalized density of each reverse diffusion distribution is explicitly modeled. We illustrate how this parameterization enables both additional ways to compose diffusion models, as well as the use of more powerful Metropolis-adjusted MCMC samplers. Finally, we demonstrate the effectiveness of our approach in settings from 2D data to high-resolution text-to-image generation. An illustration of our domains can be found in Figure 1.
## 2 Background
### Diffusion Models
Diffusion models seek to model a data distribution \(q(x_{0})\). We augment this distribution with auxiliary variables \(\{x_{t}\}_{t=1}^{T}\) defining a Gaussian diffusion \(q(x_{0},\ldots,x_{T})=q(x_{0})q(x_{1}|x_{0})\ldots q(x_{T}|x_{T-1})\) where each transition is defined \(q(x_{t}|x_{t-1})=\mathcal{N}(x_{t};\sqrt{1-\beta_{t}}x_{t-1},\beta_{t}I)\) for some \(0<\beta_{t}\leq 1\). This transition first scales down \(x_{t-1}\) by \(\sqrt{1-\beta_{t}}\) and then adds Gaussian noise of variance \(\beta_{t}\). For large enough \(T\), we will have \(q(x_{T})\approx\mathcal{N}(0,I)\).
Our model takes the form \(p_{\theta}(x_{t-1}|x_{t})\) and seeks to learn the reverse of the transition \(q(x_{t}|x_{t-1})\), i.e. to denoise \(x_{t}\) back to \(x_{t-1}\). In the limit of small \(\beta_{t}\) this reversal becomes Gaussian (Sohl-Dickstein et al., 2015), so we parameterize our model \(p_{\theta}(x_{t-1}|x_{t})=\mathcal{N}(x_{t-1};\mu_{\theta}(x_{t},t),\tilde{\beta}_{t}I)\) with:
\[\mu_{\theta}(x_{t},t)=\frac{1}{\sqrt{\alpha_{t}}}\left(x_{t}-\frac{\beta_{t}} {\sqrt{1-\tilde{\alpha}_{t}}}\epsilon_{\theta}(x_{t},t)\right). \tag{1}\]
where \(\epsilon_{\theta}(x_{t},t)\) is a neural network, and \(\alpha_{t}\), \(\tilde{\alpha}_{t}\), \(\tilde{\beta}_{t}\) are functions of \(\{\beta_{t}\}_{t=1}^{T}\).
A useful feature of the diffusion process \(q\) is that we can analytically derive any time marginal \(q(x_{t}|x_{0})=\mathcal{N}(x_{t};\sqrt{1-\sigma_{t}^{2}}x_{0},\sigma_{t}^{2}I)\) where again \(\sigma_{t}\) is a function of \(\{\beta_{t}\}_{t=1}^{T}\). We can sample \(x_{t}\) from this distribution using reparameterization, i.e. \(x_{t}(x_{0},\epsilon)=\sqrt{1-\sigma_{t}^{2}}x_{0}+\sigma_{t}\epsilon\) where \(\epsilon\sim\mathcal{N}(0,I)\). Exploiting this, diffusion models are typically trained with the loss \(\mathcal{L}(\theta)=\sum_{t=1}^{T}\mathcal{L}_{t}(\theta)\), where
\[\mathcal{L}_{t}(\theta)=\mathbb{E}_{q(x_{0})\mathcal{N}(\epsilon;0,I)}\left[||\epsilon-\epsilon_{\theta}(x_{t}(x_{0},\epsilon),t)||^{2}\right]. \tag{2}\]
Once \(\epsilon_{\theta}(x,t)\) is trained, we recover \(\mu_{\theta}(x,t)\) with Equation 1 to parameterize \(p_{\theta}(x_{t-1}|x_{t})\) and perform ancestral sampling (also known as the reverse process) to reverse the diffusion: sample \(x_{T}\sim\mathcal{N}(0,I)\), then for \(t=T,\dots,1\), sample \(x_{t-1}\sim p_{\theta}(x_{t-1}|x_{t})\). A more detailed description can be found in Appendix B.
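For readers who prefer code, the following is a minimal NumPy sketch (our own illustration, not the authors' implementation) of the training objective in Equation 2 and the ancestral sampling loop above; `eps_model` is a hypothetical placeholder for a trained network, and the linear noise schedule is an assumption.

```
import numpy as np

T = 1000
betas = np.linspace(1e-4, 0.02, T)          # noise schedule beta_t (assumed linear)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)             # running product of the alphas
sigmas = np.sqrt(1.0 - alpha_bars)          # sigma_t in q(x_t | x_0)

def eps_model(x, t):
    """Hypothetical placeholder for the trained network eps_theta(x, t)."""
    return np.zeros_like(x)

def training_loss(x0, rng):
    """One Monte Carlo estimate of a single term L_t of Equation 2."""
    t = rng.integers(T)
    eps = rng.standard_normal(x0.shape)
    x_t = np.sqrt(1.0 - sigmas[t] ** 2) * x0 + sigmas[t] * eps
    return np.mean((eps - eps_model(x_t, t)) ** 2)

def ancestral_sample(shape, rng):
    """Reverse process: x_T ~ N(0, I), then x_{t-1} ~ p_theta(x_{t-1} | x_t)."""
    x = rng.standard_normal(shape)
    for t in range(T - 1, 0, -1):
        # Mean from Equation 1, noise variance beta_tilde taken ~ beta_t.
        mu = (x - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * eps_model(x, t)) / np.sqrt(alphas[t])
        x = mu + np.sqrt(betas[t]) * rng.standard_normal(shape)
    return x
```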
### Energy-Based Models and MCMC Sampling
Energy-Based Models (EBMs) are a class of probabilistic models which parameterize a distribution as \(p_{\theta}(x)=\frac{e^{f_{\theta}(x)}}{Z(\theta)}\), where the normalizing constant \(Z(\theta)=\int e^{f_{\theta}(x)}dx\) is not modeled. Choosing not to model this quantity gives the model much more flexibility, but comes with considerable limitations: we can no longer efficiently compute likelihoods or draw samples from the model. This complicates training, as most generative models are trained by maximizing likelihood.
One popular method for EBM training is denoising score matching. This approach minimizes the Fisher divergence between the model and a Gaussian-smoothed version of the data distribution \(q_{\sigma}(x)=\int q(x^{\prime})\mathcal{N}(x;x^{\prime},\sigma^{2}I)dx^{\prime}\) by
Figure 1: **Creating new models through composition. Simple operators enable diffusion models to be composed without retraining in settings such as (a) products, (b) classifier conditioning, (c) compositional text-to-image generation with products and mixtures, and (d) image tapestries with different content at different locations (captions shortened, see Appendix G for full text). All samples are generated.**
minimizing the following objective
\[\mathcal{J}_{\sigma}(\theta)=\mathbb{E}_{q(x)\mathcal{N}(\epsilon;0,I)}\left[ \left|\left|\frac{\epsilon}{\sigma}+\nabla_{x}f_{\theta}(x+\sigma\epsilon) \right|\right|^{2}\right]. \tag{3}\]
When minimized, this ensures that \(e^{f_{\theta}(x)}\propto q_{\sigma}(x)\) and therefore \(\nabla_{x}f_{\theta}(x)=\nabla_{x}\log q_{\sigma}(x)\). To estimate likelihoods or sample from our model, we must rely on approximate methods, such as MCMC sampling or numerical ODE integration. MCMC works by simulating a Markov chain beginning at \(x_{0}\sim p(x_{0})\) and using a transition distribution \(x_{t}\sim k(x_{t}|x_{t-1})\). If \(k(\cdot|\cdot)\) has certain properties, namely invariance w.r.t. the target and ergodicity, then as \(t\rightarrow\infty\), \(x_{t}\) converges to a sample from our target distribution.
Perhaps the most popular MCMC sampling algorithm for EBMs is Unadjusted Langevin Dynamics (ULA) (Roberts and Tweedie, 1996; Du and Mordatch, 2019; Nijkamp et al., 2020) which is defined by
\[k(x_{t}|x_{t-1})=\mathcal{N}\left(x_{t};x_{t-1}+\tfrac{\sigma^{2}}{2}\nabla_{ x}f_{\theta}(x_{t-1}),\sigma^{2}I\right). \tag{4}\]
This resembles a step of gradient ascent (with step-size \(\tfrac{\sigma^{2}}{2}\)) with added Gaussian noise of variance \(\sigma^{2}\). This transition is based on a discretization of the Langevin SDE; in the limit of infinitesimally small \(\sigma\), this approach draws exact samples. To handle the error accrued when using larger step sizes, a Metropolis correction can be added, giving the Metropolis-Adjusted Langevin Algorithm (MALA) (Besag, 1994). With Metropolis correction, we first generate a proposed update \(\hat{x}\sim k(x|x_{t-1})\), then with probability \(\min\left(1,\tfrac{e^{f_{\theta}(\hat{x})}}{e^{f_{\theta}(x_{t-1})}}\tfrac{k(x_{t-1}|\hat{x})}{k(\hat{x}|x_{t-1})}\right)\) we set \(x_{t}=\hat{x}\); otherwise \(x_{t}=x_{t-1}\).
Hamiltonian Monte Carlo (HMC) (Duane et al., 1987; Neal, 1996) is a more advanced MCMC sampling method which augments the state-space with auxiliary momentum variables and numerically integrates energy-conserving Hamiltonian dynamics to advance the sampler. HMC is typically applied with a Metropolis correction, but an approximate variant can be used without it (U-HMC) (Geffner and Domke, 2021). See Appendix C.1 for details of HMC variants we use.
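As a concrete reference for the samplers above, here is a minimal sketch (our own, with an assumed 2D standard-Gaussian toy energy) of a ULA step per Equation 4 and its Metropolis-adjusted counterpart MALA:

```
import numpy as np

def f(x):                 # toy log unnormalized density f_theta(x)
    return -0.5 * np.sum(x ** 2)

def grad_f(x):
    return -x

def ula_step(x, step, rng):
    # Equation 4: gradient ascent with step-size step^2/2 plus Gaussian noise.
    return x + 0.5 * step ** 2 * grad_f(x) + step * rng.standard_normal(x.shape)

def mala_step(x, step, rng):
    prop = ula_step(x, step, rng)
    def log_k(b, a):      # log density of the Langevin proposal k(b | a)
        mean = a + 0.5 * step ** 2 * grad_f(a)
        return -np.sum((b - mean) ** 2) / (2.0 * step ** 2)
    log_acc = f(prop) - f(x) + log_k(x, prop) - log_k(prop, x)
    return prop if np.log(rng.uniform()) < log_acc else x

rng = np.random.default_rng(0)
x = rng.standard_normal(2)
for _ in range(1000):
    x = mala_step(x, step=0.5, rng=rng)
```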
### Connection Between Diffusion Models and EBMs
Diffusion models and EBMs are closely related. For instance, Song and Ermon (2019) use an EBM perspective to propose a close cousin of diffusion models. We can see by inspection that the training objective of diffusion models is identical (up to a constant) to the denoising score matching objective
\[\sigma_{t}^{2}\mathcal{J}_{\sigma_{t}}(\theta)=\mathbb{E}_{q(x)\mathcal{N}(\epsilon;0,I)}\left[\left|\left|\epsilon+\sigma_{t}\nabla_{x}f_{\theta}(x+\sigma_{t}\epsilon)\right|\right|^{2}\right]=\mathcal{L}_{t}(\theta) \tag{5}\]
where we have replaced \(\epsilon_{\theta}(x,t)\) with \(-\sigma_{t}\nabla_{x}f_{\theta}(x+\sigma_{t}\epsilon)\). Thus by training \(\epsilon_{\theta}(x,t)\) to minimize Equation 2, we can recover the diffused data distribution score with \(\nabla_{x}\log q_{\sigma}(x)\approx-\tfrac{\epsilon_{\theta}(x,t)}{\sigma_{ t}}\). From this, we can define \(\epsilon_{\theta}(x,t)=\nabla_{x}f_{\theta}(x,t)\) (the derivative of an explicitly defined scalar function) to learn a noise-conditional potential function \(f_{\theta}(x,t)\). We later demonstrate the benefits of this in two ways; it enables the use of more sophisticated sampling algorithms and more forms of composition.
### Controllable Generation
It may be convenient to train a model of \(p(x)\) where \(x\) ranges over, say, all images, but in practice we often want to generate samples from \(p(x|y)\) where \(y\) is some attribute, label, or feature. This can be accomplished within the framework of diffusion models by introducing a learned predictive model \(p_{\theta}(y|x;t)\), i.e. a time-conditional model of the distribution of some feature \(y\) given \(x\). We can then exploit Bayes' rule to notice that (for \(\lambda=1\)),
\[\nabla_{x}\log p_{\theta}(x|y;t)=\nabla_{x}\log p_{\theta}(x;t)+\lambda\nabla _{x}\log p_{\theta}(y|x;t). \tag{6}\]
In practice, when using the right side of Equation 6 for sampling, it is beneficial to increase the 'guidance scale' \(\lambda\) to be \(>1\)(Dhariwal and Nichol, 2021). Thus, we can re-purpose the unconditional diffusion model and turn it into a conditional model.
If, instead of a classifier, we have both an unconditional diffusion model \(\nabla_{x}\log p_{\theta}(x;t)\) and a conditional diffusion model \(\nabla_{x}\log p_{\theta}(x|y;t)\), we can again utilize Bayes' rule to derive the gradients of an implicit predictive model
\[\nabla_{x}\log p_{\theta}(y|x;t)=\nabla_{x}\log p_{\theta}(x|y;t)-\nabla_{x} \log p_{\theta}(x;t) \tag{7}\]
which can be used to replace the explicit model in Equation 6, giving what is known as classifier-free guidance (Ho and Salimans, 2022). This method has led to incredible performance, but comes at a cost to modularity. This contrasts with the classifier-guidance setting, where we only need to train a single (costly) generative model. We can then attach any predictive model we would like to for conditioning. This is beneficial as it is often much easier and cheaper to train predictive models than a flexible generative model. In the classifier-free setting, we must know exactly which \(y\) we would like to condition on, and incorporate these labels into model training. In both guidance settings, we use our (possibly implicit) predictive model to modify the learned score of our model. We then perform diffusion model sampling as we would in the unconditional setting. We will see later that even in toy settings, this is often not the optimal thing to do.
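To summarize the two guidance variants in code, the sketch below is our own illustration; the three score functions are hypothetical stand-ins for trained networks, and the combinations follow Equations 6 and 7:

```
import numpy as np

def score_uncond(x, t):            # placeholder for grad_x log p_theta(x; t)
    return np.zeros_like(x)

def score_cond(x, y, t):           # placeholder for grad_x log p_theta(x | y; t)
    return np.zeros_like(x)

def grad_log_classifier(x, y, t):  # placeholder for grad_x log p_theta(y | x; t)
    return np.zeros_like(x)

def classifier_guided_score(x, y, t, lam=2.0):
    # Equation 6: unconditional score plus a lambda-scaled classifier gradient.
    return score_uncond(x, t) + lam * grad_log_classifier(x, y, t)

def classifier_free_score(x, y, t, lam=2.0):
    # Equation 7 substituted into Equation 6: the implicit classifier gradient
    # is the difference between the conditional and unconditional scores.
    implicit = score_cond(x, y, t) - score_uncond(x, t)
    return score_uncond(x, t) + lam * implicit
```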
## 3 Compositional Generation Beyond Guidance
Most work on conditional diffusion models has come in the form of classifier or classifier-free guidance, but these are far
from the only ways we can compose distributions to obtain new models. These ideas have been studied primarily in the context of EBMs because most compositional operators leave the resulting distribution unnormalized. We outline various options below.
Products: We can take a product of \(N\) distributions and re-normalize to create a new distribution, roughly equivalent to the "intersection" of the composite distributions,
\[q^{\text{prod}}(x)=\frac{1}{Z}\prod_{i=1}^{N}q^{i}(x),\qquad Z=\int\prod_{i=1} ^{N}q^{i}(x)dx \tag{8}\]
Regions of high probability under \(q^{\text{prod}}(x)\) will typically have high probability under _all_\(q^{i}(x)\). A simple product model can be seen in Figure 2. These ideas were initially proposed to increase the capacity of weaker models by allowing individual "experts" to model specific features in the input (Hinton, 2002), and were recently demonstrated at scale in the image domain using Deep Energy-Based Models (Du et al., 2020).
The approaches to guidance discussed in Section 2.4 define product models with only two experts. The first models the relative density of the input data and the second models the conditional probability of \(y\); their product models likely inputs which also have the desired property \(y\). This form of composition has become popular for diffusion models since they do not directly model the probability, but instead the gradient of the log-probability, which can also be composed in this way.
Mixtures: Complementary to the product or intersection is the mixture or union of multiple distributions. We can combine \(N\) distributions through a mixture to create a new distribution equivalent to the _union_ of the concepts captured in each distribution
\[q^{\text{mix}}(x)=\frac{1}{N}\sum_{i=1}^{N}q^{i}(x) \tag{9}\]
where regions of high probability consist of regions of high probability under _any_ \(q^{i}(x)\). Unlike products, mixtures cannot be defined by composing score functions; instead, we need a model which specifies (unnormalized) probability. Generating from mixtures of energy-based models also requires knowing the ratio of normalizers between the models; in our experiments, we assume this ratio is 1. A simple compositional mixture model can be seen in Figure 2.
Negation: Finally, given two distributions \(p_{0}(x)\) and \(p_{1}(x)\), we can explicitly invert the density of \(p_{1}(x)\) with respect to \(p_{0}(x)\), constructing a new distribution (Equation 10) which assigns high likelihood to points that are likely under \(p_{0}(x)\) but not under \(p_{1}(x)\) (Du et al., 2020). The parameter \(\alpha\) controls the degree to which we invert \(p_{1}(x)\) (we use \(\alpha=0.5\) in our experiments).
\[q^{\text{neg}}(x)\propto\frac{q^{0}(x)}{q^{1}(x)^{\alpha}}. \tag{10}\]
We can combine negation with our previous operators, in a nested manner to construct complex combinations of distributions (Figure 6).
In section 2.3, we showed how diffusion models can be interpreted as approximating the gradient \(\nabla_{x}\log q(x)\), but do not learn an explicit model of the log-likelihood \(\log q(x)\). This means with the standard \(\epsilon_{\theta}(x,t)\)-parameterization we can, in theory, utilize product and negation composition, but _not_ mixture composition.
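The three operators above are straightforward to express over explicit (unnormalized) log-densities; the sketch below is our own illustration, using toy 1D Gaussian energies and the normalizer-ratio assumption for mixtures noted in the text:

```
import numpy as np

def product_logp(logps, x):
    # Equation 8, up to the (unmodeled) log normalizer log Z.
    return sum(lp(x) for lp in logps)

def mixture_logp(logps, x):
    # Equation 9; assumes the component normalizers agree (ratio 1).
    vals = np.array([lp(x) for lp in logps])
    return np.logaddexp.reduce(vals) - np.log(len(vals))

def negation_logp(logp0, logp1, x, alpha=0.5):
    # Equation 10, up to log Z; alpha controls the strength of the inversion.
    return logp0(x) - alpha * logp1(x)

# Example with two toy 1D Gaussian energies:
g0 = lambda x: -0.5 * (x - 1.0) ** 2
g1 = lambda x: -0.5 * (x + 1.0) ** 2
print(product_logp([g0, g1], 0.0), mixture_logp([g0, g1], 0.0))
```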
## 4 Scaling Compositional Generation with Diffusion Models
While highly compositional, EBMs present many challenges. The lack of a normalized probability function makes training, evaluation, and sampling very difficult. Much progress has been made to scale these models (Du and Mordatch, 2019; Nijkamp et al., 2020; Grathwohl et al., 2019; Du et al., 2020; Grathwohl et al., 2021; Gao et al., 2021), but EBMs still lag behind other approaches in terms of efficiency and scalability. In contrast, diffusion models have demonstrated very impressive scalability. Fortuitously, diffusion models have similarities to EBMs, such as their training objective and their score-based interpretation, which makes many forms of composition readily applicable.
Unfortunately, when two diffusion models are composed into, for example, a product model \(q^{\text{prod}}(x)\propto q^{1}(x)q^{2}(x)\), issues arise if the model which reverses the diffusion uses a score estimate obtained by adding the score estimates of the two models, as done in prior work (Liu et al., 2022). We see in Figure 2 that composing two models in such a way indeed leads to sub-par samples. This is because, to sample from this product distribution using standard reverse diffusion (Song et al., 2021), one would instead need to compute the score of the diffused target product distribution, given by
\[\nabla_{x}\log\hat{q}_{t}^{\text{prod}}(x_{t})=\nabla_{x}\log\left(\int dx_{0 }q^{1}(x_{0})q^{2}(x_{0})\,q(x_{t}|x_{0})\right). \tag{11}\]
For \(t>0\), this quantity is _not_ equal to the sum of the scores of the two models which is given by
\[\nabla_{x}\log q_{t}^{\text{prod}}(x_{t}) =\nabla_{x}\log\left(\int dx_{0}q^{1}(x_{0})q(x_{t}|x_{0})\right) \tag{12}\] \[+\nabla_{x}\log\left(\int dx_{0}q^{2}(x_{0})q(x_{t}|x_{0})\right).\]
Therefore, plugging the composed score function into the standard diffusion ancestral sampling procedure discussed in Section 2.1, which we refer to as "reverse diffusion," does not correspond to sampling from the composed model, and thus reverse diffusion sampling will generate incorrect samples from composed distributions. This effect can be seen in Figure 2, with details in Appendix D.
The score of the distribution \(q_{t}^{\text{prod}}(x_{t})\) in Equation 12 is easy to compute, unlike that of \(\hat{q}_{t}^{\text{prod}}(x_{t})\) from Equation 11.
In addition, \(q_{t}^{\text{prod}}(x_{t})\) describes a sequence of distributions which smoothly interpolate between \(q^{\text{prod}}(x)\) at \(t=0\) and \(\mathcal{N}(0,I)\) at \(t=T\), though this sequence of distributions does not correspond to the distributions that result from the standard forward diffusion process described in Section 2.1, leading reverse diffusion sampling to generate poor samples. We discuss how we may utilize MCMC samplers, which use our knowledge of \(\nabla_{x}\log q_{t}^{\text{prod}}(x_{t})\), to correctly sample from the intermediate distributions \(q_{t}^{\text{prod}}(x_{t})\), leading to accurate composed sample generation.
### Improving Sampling with MCMC
In order to sample from \(q^{\text{prod}}(x)\) using the combined score function from Equation 12, we can use annealed MCMC sampling, described below in Algorithm 1. This method applies MCMC transition kernels to a sequence of distributions which begins with a known, tractable distribution and concludes at our target distribution. Annealed MCMC has a long history enabling sampling from very complex distributions (Neal, 2001; Song and Ermon, 2019).
```
Input: Transition kernels k_t(·|·), Initial distribution p_T(·), Number of steps N
x_T ~ p_T(x)                    # Initialize.
for t = T, ..., 1 do
    for i = 1, ..., N do
        x_t ~ k_t(·|x_t)        # N MCMC steps targeting level t
    end for
    x_{t-1} = x_t
end for
return x_0
```
**Algorithm 1** Annealed MCMC
We explore two types of transition kernels \(k_{t}(\cdot|\cdot)\) based on Langevin Dynamics (Equation 4) and HMC. When using the standard \(\epsilon_{\theta}(x,t)\)-parameterization, we do not have access to an explicitly defined energy function, meaning we cannot utilize any MCMC sampler with Metropolis corrections. Thus, we only utilize the ULA and U-HMC samplers described in Section 2.2. These samplers are not exact, but can in practice generate good results. In the next section we detail how Metropolis corrections may be incorporated. Full details of our samplers can be found in Appendix C.1. While continuous-time sampling in diffusion models (Song et al., 2021) is also referred to as ULA, there the MCMC sampling procedure is run across time (and is the same sampling procedure as discretized diffusion in Section 2.1), as opposed to being used to sample from each intermediate distribution \(q_{t}^{\text{prod}}(x_{t})\). Thus applying continuous sampling exhibits the same issues as reverse diffusion sampling.
We can see again in Figure 2 that applying this MCMC sampling procedure allows samples from the composed distribution to be faithfully generated with no modification to the underlying diffusion models. Quantitative results can be found in Table 1 which further imply that the choice of sampler may be responsible for prior failures in compositional generation with diffusion models.
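A minimal sketch (our own illustration) of Algorithm 1 with a ULA kernel targeting the product intermediates of Equation 12; `scores` is a hypothetical list of per-model score functions \((x,t)\mapsto\nabla_{x}\log q^{i}_{t}(x)\), and `step_sizes` is assumed to hold a tuned step size per noise level:

```
import numpy as np

def annealed_ula(scores, shape, T, n_steps, step_sizes, rng):
    x = rng.standard_normal(shape)                # x_T ~ N(0, I)
    for t in range(T, 0, -1):
        for _ in range(n_steps):                  # N MCMC steps at level t
            g = sum(s(x, t) for s in scores)      # composed score, Equation 12
            noise = rng.standard_normal(shape)
            x = x + 0.5 * step_sizes[t] ** 2 * g + step_sizes[t] * noise
        # x_{t-1} is initialized at the final state of level t.
    return x
```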
### Energy-Based Parameterization
As noted in Section 3, we are unable to use mixture composition without an explicit likelihood function. But, if we parameterize a potential function \(f_{\theta}(x,t)\) and implicitly define \(\epsilon_{\theta}(x,t)=\nabla_{x}f_{\theta}(x,t)\) we can recover an explicit estimate of the (unnormalized) log-likelihood - enabling us to utilize all presented forms for model composition.
Additionally, an explicit estimate of log-likelihood enables the use of more accurate samplers. As explained above, with the standard \(\epsilon_{\theta}(x,t)\)-parameterization we can only utilize unadjusted samplers. While they can perform well in practice, there exist many distributions from which they cannot generate decent samples (Roberts and Tweedie, 1996) such
\begin{table}
\begin{tabular}{l l c c c c c c} \hline \hline
\multirow{2}{*}{**Model**} & \multirow{2}{*}{**Sampler**} & \multicolumn{3}{c}{**Product**} & \multicolumn{3}{c}{**Mixture**} \\
 & & RAISE & LL & Var & ln(MMD) & LL & Var \\ \hline
\multirow{4}{*}{Score} & Reverse & 1.55 & -6.47 & 0.063 & - & - & - \\
 & ULA & 2.37 & 1.79 & 0.026 & - & - & - \\
 & U-HMC & **2.52** & **2.40** & **0.021** & - & - & - \\
 & Reverse (equal steps) & 2.27 & -2.92 & 0.046 & - & - & - \\ \hline
\multirow{5}{*}{EBM} & Reverse & 1.37 & -6.03 & 0.064 & -3.84 & -2.17 & 0.020 \\
 & ULA & 2.36 & 1.84 & 0.027 & -4.21 & 0.57 & 0.013 \\
 & MALA & 2.64 & 2.73 & 0.013 & -4.38 & 1.29 & 0.008 \\
 & U-HMC & 2.63 & 2.45 & 0.022 & **-4.69** & 1.03 & 0.010 \\
 & HMC & **2.71** & **2.72** & **0.009** & -4.48 & **1.30** & **0.007** \\ \hline \hline
\end{tabular}
\end{table}
Table 1: Quantitative results on 2D composition. The energy-based parameterization enables mixture compositional models, and MCMC sampling leads to better samples from compositional diffusion models.
Figure 2: **An illustration of product and mixture compositional models, and the improved sampling performance of MCMC in both cases.** Left to right: Component distributions, ground truth composed distribution, reverse diffusion samples, HMC samples. Top: product, bottom: mixture. Reverse diffusion fails to sample from composed models.
as targets with lighter-than-Gaussian tails, where the ULA chain is transient. Additionally, for an accurate approximation to the Langevin SDE, ULA will need increasingly small step sizes as the curvature of the log-likelihood gradient increases, which can lead to arbitrarily slow mixing (Durmus and Moulines, 2019). In these settings a Metropolis correction can greatly improve sample quality and convergence, and again this is enabled by defining \(\epsilon_{\theta}(x,t)=\nabla_{x}f_{\theta}(x,t)\) for some explicitly defined scalar potential function \(f_{\theta}(x,t)\).
Energy-based parameterizations have been explored in the past (Salimans and Ho, 2021) and were found to perform comparably to score-based models for unconditional generative modeling. In that setting the score parameterization is then preferable as computing the gradient of the energy requires more computation. In the compositional setting, however, the additional flexibility enabled by explicit (unnormalized) log-probability estimation motivates a re-exploration of the energy-parameterization.
We explored a number of energy-based parameterizations for diffusion models and ran a pilot study on ImageNet. In this study we found it best to parameterize the log probability as \(f_{\theta}(x,t)=-||s_{\theta}(x,t)||^{2}\), where \(s_{\theta}(x,t)\) is a vector-output neural network, like those used in \(\epsilon_{\theta}(x,t)\)-parameterized diffusion models. Full details on our study can be found in Appendix E. From here on, all energy-based diffusion models take the above form.
Footnote 2: For the mixture we use MMD to replace RAISE likelihood based evaluation as we encountered numerical stability issues with RAISE when applying to the mixture.
Our energy-parameterized models enable us to use MALA and HMC samplers, which produce our best compositional generation results by a large margin. An additional benefit of these samplers is that, by monitoring their acceptance rates, we are able to derive an effective automated method for tuning their hyper-parameters (previously a notoriously difficult task) which is not available for unadjusted samplers. Details of our samplers and tuning procedures can be found in Appendix C.1.
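The sketch below (our own, in PyTorch, with `s_theta` a hypothetical stand-in for a deep time-conditioned network) shows the parameterization \(f_{\theta}(x,t)=-||s_{\theta}(x,t)||^{2}\) and the implicitly defined \(\epsilon_{\theta}(x,t)=\nabla_{x}f_{\theta}(x,t)\) obtained by automatic differentiation; this extra backward pass is the source of the added cost discussed in Section 6.

```
import torch

s_theta = torch.nn.Linear(2, 2)   # placeholder for a time-conditioned deep network

def f_theta(x, t):
    # Scalar (unnormalized) log probability per sample: -||s_theta(x, t)||^2.
    # The placeholder network ignores t.
    return -(s_theta(x) ** 2).sum(dim=-1)

def eps_theta(x, t):
    # eps is defined implicitly as the gradient of the potential f_theta.
    x = x.detach().requires_grad_(True)
    energy = f_theta(x, t).sum()
    (grad,) = torch.autograd.grad(energy, x)
    return grad

x = torch.randn(8, 2)
print(f_theta(x, t=0).shape, eps_theta(x, t=0).shape)  # [8] and [8, 2]
```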
## 5 Experiments
We experiment with various model parameterizations and sampling schemes for compositional generation with diffusion models. We first investigate these ideas on some illustrative 2D datasets, then move to the image domain with an artificial dataset of shapes. Here, we compose a model conditioned on the location of a single shape with itself to condition on the location of all of the shapes in the image. After this we experiment with classifier guidance on the ImageNet dataset. Finally, we self-compose text-to-image models to generate from compositions of various text prompts and image tapestries. Full details of all experiments can be found in Appendix G. Throughout we compare our proposed improvements with a score-parameterized model using standard reverse diffusion sampling. We note that this baseline is exactly the approach of Liu et al. (2022).
### 2D densities
We train diffusion models using both parameterizations and study the impact of various sampling approaches for compositional generation. Samples are evaluated using RAISE (Burda et al., 2015), which gives lower bounds on log-likelihood, MMD\({}^{2}\), LL (the log-likelihood of generated samples under the composed distribution), and Var (the L2 difference between the variance of GMMs fit to generated samples and that of GMMs of the composed distribution). Results can be found in Table 1 and visualizations can be seen in Figure 2. All MCMC sampling methods improve sample quality and likelihood, with Metropolis-adjusted methods performing the best. All MCMC experiments use the same number of score function evaluations. We include a baseline, labeled "Reverse (equal steps)", which is a diffusion model trained with more steps such that reverse diffusion sampling has the same cost as our MCMC samplers. We see that simply adding more time-steps does not solve compositional sampling.
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline
\multirow{2}{*}{**Model**} & \multirow{2}{*}{**Sampler**} & \multicolumn{5}{c}{**Combinations**} \\
 & & 1 & 2 & 3 & 4 & 5 \\ \hline
\multirow{3}{*}{Score} & Reverse & 70.8 & 68.2 & 66.3 & 64.1 & 57.4 \\
 & ULA & 75.0 & 73.4 & 71.8 & 67.9 & 60.2 \\
 & U-HMC & **79.1** & **76.0** & **73.6** & **71.1** & **62.3** \\ \hline
\multirow{5}{*}{EBM} & Reverse & 71.0 & 67.1 & 62.5 & 58.1 & 51.0 \\
 & ULA & 81.3 & 71.8 & 66.6 & 59.6 & 54.8 \\
 & MALA & 85.4 & 74.4 & 71.1 & 65.6 & 63.9 \\
 & U-HMC & 84.5 & 81.3 & 79.2 & 74.2 & 68.1 \\
 & HMC & **91.6** & **82.9** & **80.1** & **76.5** & **72.7** \\ \hline \hline
\end{tabular}
\end{table}
Table 2: Quantitative performance (accuracy) of composing multiple cube positions on the CLEVR dataset.
Figure 3: **Composition enables the positions of multiple shapes to be simultaneously controlled, while training only conditions on the location of one object per image.** Reverse diffusion samples place shapes in incorrect locations. MCMC generates samples that satisfy all constraints.
### Composing Cubes
Next, we train models on a dataset of images containing between 1 and 5 examples of various shapes taken from CLEVR (Johnson et al., 2017). We train our models to fit \(p(x|y)\) where \(y\) is the location of _one_ of the shapes in the image. We then compose this conditional model with itself to create a product model which defines the distribution of images conditioned on \(c\) shapes as a distribution \(\log p_{\theta}(x|y_{1},\dots,y_{c})\) equal to
\[\log p_{\theta}(x)+\sum_{i=1}^{c}\left(\log p_{\theta}(x|y_{i})-\log p_{\theta }(x)\right). \tag{13}\]
We then sample using various methods, where for each number of cubes the same number of score function evaluations is used, and evaluate each method by the fraction of samples which have all objects placed in the correct location (as determined by a learned classifier). Results can be found in Table 2, where we see that MCMC sampling leads to improvements, and the Metropolis adjustment enabled by the energy-based parameterization leads to further improvements. We qualitatively illustrate results in Figure 3, and see more accurate generations with more steps of sampling, with more substantial increases from Metropolis adjustment.
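The self-composition of Equation 13 reduces to a few lines given the conditional and unconditional scores; a sketch of our own, where the two score functions are hypothetical stand-ins for the trained networks:

```
def composed_score(score_uncond, score_cond, x, ys, t):
    # Equation 13 at the level of gradients: the unconditional score plus one
    # (conditional - unconditional) correction per conditioned cube location.
    g = score_uncond(x, t)
    for y in ys:
        g = g + (score_cond(x, y, t) - score_uncond(x, t))
    return g
```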
### Classifier conditioning
Next, we train unconditional diffusion models and a noise-conditioned classifier on ImageNet. We compose these models as
\[\nabla_{x}\log p_{\theta}(x|y,t)=\nabla_{x}\log p_{\theta}(x|t)+\nabla_{x} \log p_{\theta}(y|x,t). \tag{14}\]
and sample using the corresponding score functions. We compare various samplers and model parameterizations on classifier accuracy, FID (Heusel et al., 2017), and Inception Score. Quantitative results can be seen in Table 3 and qualitative results in Figure 4. We find that MCMC improves performance over reverse sampling, with further improvements from Metropolis corrections.
### Text-2-Image
Perhaps the most well-known results achieved with diffusion models are in text-to-image generation (Ramesh et al., 2022; Saharia et al., 2022). Here we model \(p_{\theta}(x_{\text{image}}|y_{\text{text}})\). While the generated images are photo-realistic, such models can fail to generate images from prompts which specify multiple concepts at a time (Liu et al., 2022), such as \(y_{\text{text}}=\) "A horse on a sandy beach or a grass plain on a not sunny day". To deal with these issues we can dissect the prompt into smaller components \(y_{1},\dots,y_{c}\), parameterize models conditioned on each component \(p_{\theta}(x|y_{i})\), and compose these models using our introduced operators. We can parse the above example into
\[\frac{p_{\theta}(x|\text{``A horse''})\big{[}\frac{1}{2}p_{\theta}(x|\text{``A sandy beach''})+\frac{1}{2}p_{\theta}(x|\text{``Grass plains''})\big{]}}{p_{\theta}(x|\text{``Sunny''})^{\alpha}}\]
which can be used to define the following (unnormalized) distribution \(p_{\theta}^{\text{comp}}(x|y_{\text{text}})\)
Figure 4: Classifier-guided generation on ImageNet. HMC leads to higher fidelity and more class-identified images than reverse diffusion sampling.
Figure 5: **Metropolis adjustment significantly improves generation performance across sampling steps.** As more MCMC steps are run (at each timestep), generation accuracy of combinations of 5 cubes improves significantly.
\begin{table}
\begin{tabular}{c l c c c} \hline \hline
**Model** & **Sampler** & Inception Score \(\uparrow\) & FID \(\downarrow\) & Accuracy \(\uparrow\) \\ \hline
\multirow{3}{*}{Score} & Reverse & 29.10 & 30.46 & 18.64 \\
 & ULA & 29.35 & 30.49 & 65.81 \\
 & U-HMC & **32.19** & **26.89** & **89.93** \\ \hline
\multirow{5}{*}{Energy} & Reverse & 28.05 & 33.58 & 18.60 \\
 & ULA & 28.12 & 33.45 & 66.28 \\
 & MALA & 30.43 & 32.22 & 83.65 \\
 & U-HMC & 31.39 & 32.08 & 90.83 \\
 & HMC & **33.46** & **30.52** & **94.61** \\ \hline \hline
\end{tabular}
\end{table}
Table 3: **MCMC sampling enables better classifier guidance on the 128x128 ImageNet dataset.**
Liu et al. (2022) demonstrated that composing models this way can improve the efficacy of these kinds of generations, but were restricted to composition using classifier-free guidance. We train an energy-parameterized diffusion model for text-conditional 64x64 image generation and illustrate composed results in Figure 6 (upsampled to 1024x1024). We find that composition enables more faithful generations of scenes, as shown in Figure 7, with more results in Appendix A.
### Tapestries of Energy Functions
While text-to-image models generate images given natural language prompts, it is difficult to control the spatial location of different content, and difficult to generate images at higher resolutions than those used during training. By composing multiple overlapping text-to-image models, at a variety of scales, we may construct an image tapestry with different specified content at different locations and scales. In Figure 8 we illustrate using this approach to generate images with content at specified spatial locations and scales. See Appendix G for details.
## 6 Discussion
**Limitations.** Our work demonstrates that diffusion models, in combination with MCMC-based sampling procedures, can be composed in novel ways capable of generating high-quality samples. However, our proposed solutions have a number of drawbacks. First, more sophisticated MCMC samplers come at a higher cost than the standard sampling approach and can take five times longer to generate samples than typical diffusion sampling. Second, we have shown that energy-parameterized models enable the use of more sophisticated sampling techniques, garnering further improvements. Unfortunately, this requires a second backward pass through the model to compute the derivative implicitly, leading energy-parameterized models to have roughly double the memory and compute cost of score-parameterized models.
While these are considerable drawbacks, we note that the focus of this work is to demonstrate that such things are possible within the framework of diffusion models. We believe there is much that can be done to achieve the benefits of our sampling procedures at lower cost, such as distillation (Salimans and Ho, 2022) and easier-to-differentiate neural networks (Chen and Duvenaud, 2019).
**Conclusion.** We have explored the ways that pretrained diffusion models can be composed to model new distributions. We demonstrate ways that naive implementations fail, and present two ways that performance can be improved: MCMC sampling and energy-parameterized diffusion models. Our approach leads to notable improvement across a variety of domains, scales, and compositional operators.
Figure 8: **Composition enables controllable image tapestries. Captions are shortened, see Appendix G for full text.**
Figure 6: **Energy based parameterization enables high-resolution compositional text-to-image synthesis.** |
2310.08212 | Operator formalism for discretely holomorphic parafermions of the
two-color Ashkin-Teller, loop $\mathrm{O} ( 1 )$, staggered eight-vertex, odd
eight-vertex, and abelian sandpile models | We extend an operator formalism, developed by Hongler, Kyt\"ol\"a, and Zahabi
in 2012 for the Ising model, to the Ashkin-Teller and loop $\mathrm{O} ( n ) $
models. The formalism is primarily dependent upon notions of massive, and
massless, s-holomorphicity of the Ising model, which are respectively satisfied
at, and below, the critical temperature, hence allowing for rigorous analyses
of the transfer matrix, correlation functions, and lattice fermion operators.
From results of another paper, by Tanhayi-Ahari and Rouhani in 2012, which
demonstrates that parafermionic observables for the staggered eight-vertex
model coincide with those of the Ashkin-Teller model, as well as a 2003 paper,
by Wu and Kunz, which establishes a correspondence between the staggered
eight-vertex and odd eight-vertex models, we establish further associations. | Pete Rigas | 2023-10-12T11:02:30Z | http://arxiv.org/abs/2310.08212v3 | Operator formalism for discretely holomorphic parafermions of the two-color Ashkin-Teller, loop O(1), staggered eight-vertex, odd eight-vertex, and abelian sandpile models
###### Abstract
We extend an operator formalism, developed by Hongler, Kytola, and Zahabi in 2012 for the Ising model, to the Ashkin-Teller and loop O(\(n\)) models. The formalism is primarily dependent upon notions of massive, and massless, s-holomorphicity of the Ising model, which are respectively satisfied at, and below, the critical temperature, hence allowing for rigorous analyses of the transfer matrix, correlation functions, and lattice fermion operators. From results of another paper, by Tanhayi-Ahari and Rouhani in 2012, which demonstrates that parafermionic observables for the staggered eight-vertex model coincide with those of the Ashkin-Teller model, as well as a 2003 paper, by Wu and Kunz, which establishes a correspondence between the staggered eight-vertex and odd eight-vertex models, we establish further associations. 1
Footnote 1: _Keywords_: Ising model, random-cluster model, parafermionic observable, Cauchy-Riemann equations, massless/massive s-holomorphicity
## 1 Introduction
### Overview
Disorder parameters and variables, which share connections with fermionic and parafermionic observables [2, 6], have attracted great interest, with studies examining topics ranging from the divergence of the correlation length for FK percolation [6], behavior away from criticality [2], the connective constant of the honeycomb lattice [12], an operator formalism for the Ising model [13], near-criticality of the FK-Ising model [8], the critical temperature [4], and connection probabilities [10], to several other potential applications to other models, including sharp thresholds [7], self-avoiding walks [9], and conformal invariance [11]. In [6], Duminil-Copin raised an open question pertaining to whether information from parafermionic observables, and related objects, can be extracted for other models, building on arguments demonstrating that the correlation length for planar FK percolation diverges. To explore one such possible direction of research, we further examine the operator formalism for the Ising model proposed by Hongler, Kytölä, and Zahabi, which provides a characterization of massive, and massless, s-holomorphicity related to the Cauchy-Riemann equations that are known to vanish for parafermionic observables, through connections with Morera's Theorem. Within this operator framework, we examine discretely holomorphic parafermions which have been previously computed in [15], in which Tanhayi-Ahari and Rouhani demonstrated that holomorphic parafermions exist in the eight-vertex model and, utilizing computations surrounding a parameter redefinition, deduced the forms of observables in the staggered six-vertex, Ashkin-Teller, and loop O(\(n\)) models [14]. The operator formalism proposed by Hongler, Kytölä, and Zahabi states that the parafermionic observable for the Ising model is a kernel of a convolution operator. For other models, we make use of this framework to obtain the kernels of other convolution operators, within the context of the staggered eight-vertex and odd eight-vertex models [16].
### Random cluster model objects
We provide a brief overview of properties of the random-cluster and Ising models from [2]. Fix \(G\equiv\big{(}V,E\big{)}\). Over the support of this graph, denote the random cluster measure as,
\[\phi_{p,q,G}^{\xi}\big{(}\omega\big{)}=\frac{p^{o(\omega)}\big{(}1-p\big{)}^{c(\omega)}q^{k(\omega)}}{Z^{\xi}\big{(}p,q,G\big{)}}\ \,\]
in which the probability of sampling a _random cluster configuration_, \(\omega\), under \(\phi\) with boundary conditions \(\xi\) depends upon the number of open edges \(o(\omega)\) (each weighted by \(p\)), the number of closed edges \(c(\omega)\) (each weighted by \(1-p\)), and the number of clusters \(k(\omega)\) (each weighted by \(q\)). In the denominator of the probability measure above, the partition function \(Z\) ensures that \(\phi_{p,q,G}^{\xi}\big{(}\cdot\big{)}\) is a probability measure. From the random-cluster measure, there exists a bijection to Eulerian loop configurations, under which the probability measure instead takes the form,
\[\phi_{p,\sqrt{2},G}^{a,b}(\omega)=\frac{\sqrt{2}^{\,\#\,\mathrm{loops}}\,x\big{(}p\big{)}^{\#\,\mathrm{open\;bonds}}}{\widetilde{Z}^{a,b}\big{(}p,G\big{)}}\ \,\]
which can be obtained by Euler's formula, for the normalizing constant \(\widetilde{Z}\), and for,
\[x\big{(}p\big{)}\equiv\frac{p}{\sqrt{2}\big{(}1-p\big{)}}\ \.\]
Besides the random-cluster model and its probability measure, the Ising model is another celebrated model of statistical mechanics, which can be defined first through the Hamiltonian,
\[\mathcal{H}\big{(}\sigma,\Lambda\big{)}\equiv\mathcal{H}\big{(}\sigma\big{)} \equiv\sum_{\begin{subarray}{c}i\sim j\\ i,j\in\Lambda\end{subarray}}J_{ij}\sigma_{i}\sigma_{j}+\sum_{i\in\partial \Lambda}\!h_{i}\sigma_{i}\ \,\]
with the corresponding probability measure,
\[\mathbf{P}_{G}^{\chi}\big{(}\sigma\big{)}\equiv\frac{\exp\big{(}-\beta \mathcal{H}\big{(}\sigma\big{)}\big{)}}{Z^{\chi}\big{(}\sigma,G\big{)}}\ \,\]
for boundary conditions \(\chi\) at inverse temperature \(\beta\), and partition function \(Z\) so that the expression above is a probability measure. With respect to \(\mathbf{P}_{G}^{\chi}\big{(}\cdot\big{)}\), there exists a relation between the probability measures of the Ising and random-cluster models, in which,
\[\mathbf{E}_{G}^{\rm free}\big{[}\sigma\big{(}0\big{)}\sigma\big{(}a\big{)} \big{]}=\phi_{p,2,G}^{0}\big{(}0\longleftrightarrow a\big{)}\ \,\]
under free boundary conditions, in which the spin-spin correlations at the origin and point \(a\) under the expectation of the Ising model are equivalent to the probability of a connectivity event, \(\big{\{}0\longleftrightarrow a\big{\}}\), between the points \(0\) and \(a\), occurring under free boundary conditions in the random-cluster model.
The notion of duality has played a significant role in several models of statistical physics. For the random-cluster model, the self-dual point, \(p_{\rm sd}\big{(}q\big{)}\), takes the form,
\[p_{\rm sd}\big{(}q\big{)}\equiv\frac{\sqrt{q}}{\sqrt{q}+1}\ \,\]
arises from the conditions,
\[p^{*}\big{(}p,q\big{)}\equiv\frac{\big{(}1-p\big{)}q}{\big{(}1-p\big{)}q+p}\ \,\]
and,
\[\frac{p^{*}p}{\big{(}1-p^{*}\big{)}\big{(}1-p\big{)}}=q\ \,\]
being satisfied. Under the assumption that \(q\equiv 2\) in the random-cluster model, the fermionic observable takes the form,
\[F\big{(}e\big{)}\equiv\phi_{p,G}^{a,b}\big{[}\exp\big{(}\frac{i}{2}W_{\gamma} \big{(}e,e_{b}\big{)}\big{)}\mathbf{1}_{e\in\gamma}\big{]}\ \,\]
where the quantity \(W_{\gamma}\big{(}\cdot,\cdot\big{)}\) in the exponential denotes the winding number of the path \(\gamma\), namely the number of left and right turns of the exploration path, i.e. the total rotation of the path, measured in radians, between \(e\) and \(e_{b}\).
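As a concrete illustration of the winding term (our addition; the encoding of the path as complex unit increments is an assumption for demonstration), the total rotation can be accumulated turn by turn:

```
import numpy as np

def winding(steps):
    """Total rotation, in radians, of a lattice path given as complex unit
    steps, e.g. [1, 1j, 1, -1j]; each left turn contributes +pi/2 and each
    right turn contributes -pi/2."""
    return sum(np.angle(b / a) for a, b in zip(steps, steps[1:]))

print(winding([1, 1j, -1]))  # two left turns: approximately pi
```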
Observables of the above form, and of similar forms, have many applications. In [6], it was shown that the correlation length diverges as \(p\longrightarrow p_{c}\big{(}q\big{)}\) from above, where the correlation length is given by the reciprocal of the infimum,
\[\frac{1}{\xi\big{(}p\big{)}}\equiv-\inf_{n>0}\,\frac{1}{n}\,\log\big{[}\phi_{ \mathbf{Z}^{2},p,q}^{0}\big{(}0\longleftrightarrow\big{(}n,0\big{)}\big{)} \big{]}\longrightarrow 0\ \.\]
On the other hand, in the Ising model, one can define an observable which shares connections with massive, and massless, s-holomorphicity [13], namely a function of the form,
\[f\big{(}a,z\big{)}\equiv\frac{1}{Z^{\chi}}\sum_{\gamma:a\longrightarrow z}\exp \big{[}-2\beta\big{|}\text{edges}\big{(}\gamma\big{)}\big{|}-\frac{i}{2}W_{ \gamma}\big{(}a,z\big{)}\big{]}\ \,\]
over a discrete domain \(\Omega\), the union of faces of \(\mathbf{Z}^{2}\), for the midpoints of edges \(a\) and \(z\). In addition to the observables \(F\big{(}e\big{)}\) and \(f\big{(}a,z\big{)}\) introduced above, it continues to remain of interest to determine properties of observables for other models [6].
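As a one-line check (our addition) that the self-dual point stated in the previous paragraphs indeed solves \(p^{*}\big{(}p,q\big{)}=p\):

\[p^{*}\big{(}p,q\big{)}=p\iff\frac{\big{(}1-p\big{)}q}{\big{(}1-p\big{)}q+p}=p\iff q\big{(}1-p\big{)}^{2}=p^{2}\iff p=\frac{\sqrt{q}}{\sqrt{q}+1}\equiv p_{\mathrm{sd}}\big{(}q\big{)}\ \.\]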
### Observables for other models of statistical physics
Assuming that such observables exist for other models, it was demonstrated in [15] that observables exist for the staggered six-vertex, eight-vertex, and Ashkin-Teller models, which we recount below. The existence of observables for these three models follows from the fact that contour integrals can be expressed, discretely, as,
\[\sum_{(i,j)\in C}F\big{(}z_{ij}\big{)}\big{(}z_{j}-z_{i}\big{)}\equiv 0\ \,\]
for a contour \(C\), and \(F\big{(}z_{ij}\big{)}\) a complex-valued function defined over the midpoints \(z_{ij}\) of the edges \(\big{(}ij\big{)}\). From the condition above, discrete parafermions can be defined for the self-dual Ising, Potts, \(\text{O}\big{(}n\big{)}\), Ashkin-Teller, and eight-vertex models. For the Ashkin-Teller model, the parafermionic observable, \(\psi\), takes the form,
\[\psi=\exp\bigl{(}iW_{\gamma}\big{)}\sigma\mu_{\tau^{\prime}}\ \,\]
where, as in the definitions of the parafermionic observables for the random-cluster model and FK percolation, the power of the exponential is proportional to the winding number. The factors \(\sigma\) and \(\mu_{\tau^{\prime}}\) respectively denote the cluster between the points \(z_{1}\) and \(z_{2}\) of the square lattice and a \(\tau^{\prime}\) domain wall. For the staggered eight-vertex model, the parafermionic observable takes the same form.
For the \(\text{O}\big{(}n\big{)}\) model, the parafermionic observable was shown to take the form, for \(\Omega\subsetneq\mathbf{H}\) [12],
\[F^{\text{loop}}\big{(}a,z,x,\sigma\big{)}\equiv F^{\text{loop}}\big{(}z\big{)} \equiv\sum_{\begin{subarray}{c}\gamma:a\longrightarrow z\\ \gamma\subset\Omega\end{subarray}}\exp\big{(}-i\sigma W_{\gamma}\big{(}a,z \big{)}\big{)}x^{l(\gamma)}\ \,\]
where \(\sigma\) denotes a real parameter appearing in the winding term of the path in the power of the exponential. Lastly, for the \(Z_{N}\) model, the observable takes a similar form as that of the parafermionic observable for the Ashkin-Teller model, in which, [1],
\[\psi\big{(}r,\vec{r}\big{)}\equiv\psi\big{(}r\big{)}\equiv\exp\big{(}-i\sigma _{m}\theta\big{(}r,\vec{r}\big{)}\big{)}s_{m}\big{(}r\big{)}\mu_{m}\big{(}r \big{)}\ \,\]
where, in the definition of the observable above, \(\mu_{m}\big{(}r\big{)}\) denotes a disorder variable, \(m\in\big{\{}1,\cdots,N-1\big{\}}\), and \(s_{m}\big{(}r\big{)}\) denotes a function of the spin of the observable. To apply the operator formalism developed for the Ising model at the critical temperature \(\beta_{c}\), it is also important to identify the critical parameters for the \(\mathrm{O}\big{(}n\big{)}\) and Ashkin-Teller models; for the former, the critical point is given by [12],
\[x_{c}\big{(}n\big{)}\equiv\frac{1}{\sqrt{2+\sqrt{2-n}}}\ \,\]
for \(0\leq n<2\), which has a probability measure of the form,
\[\mathbf{P}_{\Lambda_{\mathbf{H}}}^{\mathrm{loop},\xi}\big{[}\sigma\big{]}= \frac{x^{n(\sigma)}n^{l(\sigma)}}{Z_{\Lambda_{\mathbf{H}}}^{\mathrm{loop},\xi} \big{(}\sigma\big{)}}\ \,\]
under boundary conditions \(\xi\). Later, in _2.5_, we will apply the operator framework to the low-temperature expansion of the loop probability measure above, which takes the form,
\[\mathbf{P}_{\Lambda_{\mathbf{H}}}^{\mathrm{loop},\xi}\big{[}\sigma\big{]}\stackrel{{\mathrm{LTE}}}{{\sim}}\frac{n^{k(\sigma)}x^{c(\sigma)}\mathrm{exp}\big{(}hr\big{(}\sigma\big{)}+h^{\prime}r^{\prime}\big{(}\sigma\big{)}\big{)}}{Z_{\Lambda_{\mathbf{H}}}^{\mathrm{loop},\xi}\big{(}\sigma\big{)}}\xrightarrow{\ n\equiv 1,\ h^{\prime}\equiv 0\ }\mathbf{P}_{\Lambda_{\mathbf{Z}^{2}}}^{\mathrm{Ising},\chi}\big{[}\sigma^{\mathrm{Ising}}\big{]}\equiv\mathrm{O}\big{(}1\big{)}\ \mathrm{measure}\ \,\]
The second critical point, along a line of critical points, for the Ashkin-Teller model takes the form, [5],
\[J\equiv U\equiv\frac{1}{4}\mathrm{log}(3)\ \,\]
from the fact that the Ashkin-Teller and Ising models belong to the same universality class, where \(J\) denotes one of the coupling constants appearing in the Ashkin-Teller Hamiltonian,
\[\mathcal{H}\equiv\mathcal{H}^{\mathrm{AT}}\equiv\sum_{i\sim j}\big{[}J\big{(} \tau\big{(}i\big{)}\tau\big{(}j\big{)}+\tau^{\prime}\big{(}i\big{)}\tau^{\prime }\big{(}j\big{)}\big{)}+U\big{(}\tau\big{(}i\big{)}\tau\big{(}j\big{)}\tau^{ \prime}\big{(}i\big{)}\tau^{\prime}\big{(}j\big{)}\big{)}\big{]}\ \,\]
for nearest-neighbor sites \(i\sim j\) of the square lattice, with \(\big{(}\tau,\tau^{\prime}\big{)}\in\big{\{}\pm 1\big{\}}^{V}\times\big{\{}\pm 1\big{\}}^{V}\), with the corresponding probability measure,
\[\mathbf{P}_{\Lambda}^{\mathrm{AT},\xi}\big{[}\omega\big{]}\equiv\frac{ \mathrm{exp}\big{(}\mathcal{H}\big{(}\omega\big{)}\big{)}}{Z_{\Lambda}^{\xi} \big{(}\omega\big{)}}\ \,\]
under boundary conditions \(\xi\), \(\Lambda\subsetneq\mathbf{Z}^{2}\), an Ashkin-Teller configuration \(\omega\), and partition function,
\[Z_{\Lambda}^{\xi}\big{(}\omega\big{)}\equiv\sum_{\omega\in\Omega}\mathrm{exp} \big{(}\mathcal{H}\big{(}\omega\big{)}\big{)}\ \.\]
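For concreteness, the following is a minimal single-site Metropolis sketch (our addition, not taken from the references) for the two-color Ashkin-Teller measure above; note that the measure weights configurations by \(\mathrm{exp}\big{(}\mathcal{H}\big{(}\omega\big{)}\big{)}\), following the sign convention of the text, and that free boundary conditions on an \(L\times L\) box are an assumption:

```
import numpy as np

L, J, U = 16, 0.2, 0.2
rng = np.random.default_rng(0)
tau = rng.choice([-1, 1], size=(L, L))      # first color
taup = rng.choice([-1, 1], size=(L, L))     # second color

def local_H(i, j):
    """Contribution to H from the bonds incident to site (i, j)."""
    h = 0.0
    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        a, b = i + di, j + dj
        if 0 <= a < L and 0 <= b < L:
            h += J * (tau[i, j] * tau[a, b] + taup[i, j] * taup[a, b])
            h += U * tau[i, j] * tau[a, b] * taup[i, j] * taup[a, b]
    return h

for _ in range(10000):
    i, j = rng.integers(L), rng.integers(L)
    spin = tau if rng.uniform() < 0.5 else taup   # flip one of the two colors
    old = local_H(i, j)
    spin[i, j] *= -1
    # Accept with probability min(1, exp(H_new - H_old)); otherwise undo.
    if np.log(rng.uniform()) > local_H(i, j) - old:
        spin[i, j] *= -1
```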
To make the statement of several definitions in future sections as simple as possible (including the function \(\tau_{x}\) in _2.1_, and the transfer matrix construction in _2.2.2_, which depends upon the construction of several bases spanning an analogue of the Clifford algebra identified in [13]), we restrict ourselves to coupling two Potts models together with two colors in \(\mathbf{P}_{\Lambda}^{\mathrm{AT},\xi}\big{[}\cdot\big{]}\).
### Paper overview
From the background of the operator formalism for the Ising model, in the next section we introduce the notions of massive, and massless, s-holomorphicity. In previous work [13], such notions were applied to study the Ising model, in which the parafermionic observable satisfies the massless s-holomorphic equations at criticality, and the massive s-holomorphic equations away from criticality. Similar notions of holomorphicity are available not only for the Ashkin-Teller model, but also for the loop model. More specifically, the next section, which precedes the discussion of the staggered eight-vertex and odd eight-vertex models in _3_, has the following breakdown:
* _2.1_: Holomorphicity at, and below, the critical temperature \(\beta_{c}\),
* _2.2_: Transfer matrices,
* _2.3_: Fock representation,
* _2.4_: Operator correlations and observables,
* _2.5_: Low temperature expansions of parafermionic observables,
* _2.6_: Riemann-Poincaré-Steklov operators.
The main result of this effort culminates in two convolution operators, denoted \(\big{(}U^{\mathbf{b}}_{\Omega}\big{)}^{\mathrm{AT}}\), and \(\big{(}U^{\mathbf{b}}_{\Omega}\big{)}^{\mathrm{loop}}\), introduced in _2.6_, whose kernels are the parafermionic observables for the Ashkin-Teller and loop O\(\big{(}1\big{)}\) models.
## 2 Massive, and massless, s-holomorphicity
As a beginning case, we restate results of s-holomorphicity for the Ising model.
### Holomorphicity at, and below, the critical temperature \(\beta_{c}\)
For the Ising model, it is known that the critical temperature is \(\beta_{c}\equiv\frac{1}{2}\mathrm{log}\big{(}\sqrt{2}+1\big{)}\). The two results below describe how massless s-holomorphicity occurs at the critical temperature, while massive s-holomorphicity occurs below the critical temperature, in the subcritical regime.
**Definition 1** (_massive s-holomorphicity below the critical temperature_, [13]).: Fix \(\beta>0\). For \(\beta<\beta_{c}\), for the complex number \(\nu\equiv\bar{\lambda}^{3}\frac{\alpha+i}{\alpha-i}\), with \(\alpha\equiv\exp\big{(}-2\beta\big{)}\) and \(\lambda\equiv\exp\big{(}\frac{i\pi}{4}\big{)}\), the function \(F:\Omega\longrightarrow\mathbf{C}\) is said to be massive s-holomorphic if,
\[F\big{(}N\big{)}+\nu^{-1}\lambda\overline{F\big{(}N\big{)}}=\nu^{-1}F\big{(}E\big{)}+\lambda\overline{F\big{(}E\big{)}}\ \,\] \[F\big{(}N\big{)}+\nu\lambda^{-1}\overline{F\big{(}N\big{)}}=\nu F\big{(}W\big{)}+\lambda^{-1}\overline{F\big{(}W\big{)}}\ \,\] \[F\big{(}S\big{)}+\nu\lambda^{3}\overline{F\big{(}S\big{)}}=\nu F\big{(}E\big{)}+\lambda^{3}\overline{F\big{(}E\big{)}}\ \,\] \[F\big{(}S\big{)}+\nu^{-1}\lambda^{-3}\overline{F\big{(}S\big{)}}=\nu^{-1}F\big{(}W\big{)}+\lambda^{-3}\overline{F\big{(}W\big{)}}\ \,\]
for any face of \(\Omega\) with edges E,N,W,S.
A similar notion holds for massless s-holomorphicity at the critical temperature, stated in the second definition below.
**Definition 2** (_massless s-holomorphicity at the critical temperature_, [13]).: Fix \(\beta>0\). For \(\beta\equiv\beta_{c}\), given the same quantities \(\alpha\) and \(\lambda\), the function \(F:\Omega\longrightarrow\mathbf{C}\) is said to be massless s-holomorphic if,
\[F\big{(}N\big{)}+\lambda\overline{F\big{(}N\big{)}}=F\big{(}E\big{)}+\lambda\overline{F\big{(}E\big{)}}\ \,\] \[F\big{(}N\big{)}+\lambda^{-1}\overline{F\big{(}N\big{)}}=F\big{(}W\big{)}+\lambda^{-1}\overline{F\big{(}W\big{)}}\ \,\] \[F\big{(}S\big{)}+\lambda^{3}\overline{F\big{(}S\big{)}}=F\big{(}E\big{)}+\lambda^{3}\overline{F\big{(}E\big{)}}\ \,\] \[F\big{(}S\big{)}+\lambda^{-3}\overline{F\big{(}S\big{)}}=F\big{(}W\big{)}+\lambda^{-3}\overline{F\big{(}W\big{)}}\ \,\]
for any face of \(\Omega\) with edges E,N,W,S.
Besides the relations above which hold about a single face of \(\Omega\), one must also define the propagation operators for \(\beta<\beta_{c}\), and also for \(\beta\equiv\beta_{c}\).
**Lemma 1** (_propagation mechanism for the Ising model away from the critical temperature_, **Lemma 6**, [13]).: For \(\beta<\beta_{c}\), and the interval,
\[\mathbf{I}^{*}\equiv\left\{a+\frac{1}{2},a+\frac{3}{2},\cdots,b-\frac{1}{2}\right\}\ \,\]
with left, and right, endpoints respectively given by \(k_{L}\equiv a+\frac{1}{2}\) and \(k_{R}\equiv b-\frac{1}{2}\), dual to an interval of \(\mathbf{Z}\),
\[\mathbf{I}\equiv\left\{a,a+1,\cdots,b\right\}\ \,\]
the s-holomorphic propagator \(P:\left(\mathbf{R}^{2}\right)^{\mathbf{I}^{*}}\longrightarrow\left(\mathbf{R} ^{2}\right)^{\mathbf{I}^{*}}\) satisfies, for \(\lambda\equiv\exp\bigl{(}\frac{i\pi}{4}\bigr{)}\), and \(k\) such that \(k\in\mathbf{I}^{*}\backslash\left\{k_{L},k_{R}\right\}\),
\[\bigl{(}Pf\bigr{)}\bigl{(}k\bigr{)}\equiv\frac{1}{\sqrt{2}\lambda^{3}}f\bigl{(}k-1\bigr{)}+2f\bigl{(}k\bigr{)}+\frac{\lambda^{3}}{\sqrt{2}}f\bigl{(}k+1\bigr{)}+\frac{1}{\sqrt{2}}f\bigl{(}\bar{k}-1\bigr{)}-\sqrt{2}f\bigl{(}\bar{k}\bigr{)}+\frac{1}{\sqrt{2}}f\bigl{(}\bar{k}+1\bigr{)}\ \,\] \[\bigl{(}Pf\bigr{)}\bigl{(}k_{L}\bigr{)}\equiv\bigl{(}1+\frac{1}{\sqrt{2}}\bigr{)}f\bigl{(}k_{L}\bigr{)}+\frac{\lambda^{3}}{\sqrt{2}}f\bigl{(}k_{L}+1\bigr{)}+\bigl{(}\lambda^{3}+\frac{1}{\sqrt{2}\lambda^{3}}\bigr{)}f\bigl{(}\bar{k}_{L}\bigr{)}+\frac{1}{\sqrt{2}}f\bigl{(}\bar{k}_{L}+1\bigr{)}\ \,\] \[\bigl{(}Pf\bigr{)}\bigl{(}k_{R}\bigr{)}\equiv\frac{1}{\sqrt{2}\lambda^{3}}f\bigl{(}k_{R}-1\bigr{)}+\bigl{(}1+\frac{1}{\sqrt{2}}\bigr{)}f\bigl{(}k_{R}\bigr{)}+\frac{1}{\sqrt{2}}f\bigl{(}\bar{k}_{R}-1\bigr{)}+\bigl{(}\frac{1}{\lambda^{3}}+\frac{\lambda^{3}}{\sqrt{2}}\bigr{)}f\bigl{(}\bar{k}_{R}\bigr{)}\ \.\]
**Lemma 2**: (_propagation mechanism for the Ising model at the critical temperature_, **Lemma 6**, [13]). For \(\beta\equiv\beta_{c}\), and the interval \(\mathbf{I}^{*}\) defined in **Lemma 1**, the s-holomorphic propagator \(P_{\beta}:\left(\mathbf{R}^{2}\right)^{\mathbf{I}^{*}}\longrightarrow\left( \mathbf{R}^{2}\right)^{\mathbf{I}^{*}}\) satisfies, for \(k\) such that \(k\in\mathbf{I}^{*}\backslash\left\{k_{L},k_{R}\right\}\),
\[\bigl{(}P_{\beta}f\bigr{)}\bigl{(}k\bigr{)}\equiv\bigl{(}\frac{-S-i}{2S}\bigr{)}f\bigl{(}k-1\bigr{)}+\frac{C^{2}}{S}f\bigl{(}k\bigr{)}+\frac{i-S}{2S}f\bigl{(}k+1\bigr{)}+\frac{C}{2S}f\bigl{(}\bar{k}-1\bigr{)}-Cf\bigl{(}\bar{k}\bigr{)}+\frac{C}{2S}f\bigl{(}\bar{k}+1\bigr{)}\ \,\] \[\bigl{(}P_{\beta}f\bigr{)}\bigl{(}k_{L}\bigr{)}\equiv\frac{\bigl{(}S+C\bigr{)}C}{2S}f\bigl{(}k_{L}\bigr{)}+\frac{i-S}{2S}f\bigl{(}k_{L}+1\bigr{)}+f\bigl{(}\bar{k}_{L}\bigr{)}+\frac{C}{2S}f\bigl{(}\bar{k}_{L}+1\bigr{)}\ \,\] \[\bigl{(}P_{\beta}f\bigr{)}\bigl{(}k_{R}\bigr{)}\equiv-\frac{S+i}{2S}f\bigl{(}k_{R}-1\bigr{)}+\frac{\bigl{(}S+C\bigr{)}C}{2S}f\bigl{(}k_{R}\bigr{)}+\frac{C}{2S}f\bigl{(}\bar{k}_{R}-1\bigr{)}+\frac{-\bigl{(}S+C\bigr{)}S+i\bigl{(}C-S\bigr{)}}{2S}f\bigl{(}\bar{k}_{R}\bigr{)}\ \,\]
with \(S\equiv\sinh\bigl{(}2\beta\bigr{)}\) and \(C\equiv\cosh\bigl{(}2\beta\bigr{)}\).
Over the boundary \(\partial\Omega\) of the finite volume, boundary conditions satisfy the following property.
**Definition 3**: (_Riemann boundary conditions for the Ising model_, **Definition 3**, [13]). A function \(f:\Omega\longrightarrow\mathbf{C}\) is said to satisfy Riemann boundary conditions, if, for an edge \(z\in\partial\Omega\),
\[f\bigl{(}z\bigr{)}\parallel\frac{1}{\sqrt{\tau_{\mathrm{cw}}\bigl{(}z\bigr{)}}}\ \,\]
namely that \(f\bigl{(}z\bigr{)}\) is a real multiple of \(\tau_{\mathrm{cw}}\bigl{(}z\bigr{)}^{-1/2}\), in which,
\[e\ \mathrm{vertical} \iff\tau_{\mathrm{cw}}\bigl{(}e\bigr{)}\in\bigl{\{}\pm 1 \bigr{\}}\ \,\] \[e\ \mathrm{horizontal} \iff\tau_{\mathrm{cw}}\bigl{(}e\bigr{)}\in\bigl{\{}\pm i\bigr{\}}\ \.\] ( ** )
With \(P\) and \(P_{\beta}\), another component of the operator formalism for the Ising model is the transfer matrix. As a mapping from the state space of the Ising model into itself, the transfer matrix takes the following form.
**Definition 4**: (_transfer matrix of the Ising model_, [13]). For the Ising model, the transfer matrix takes the form,
\[V\equiv\bigl{(}V^{h}\bigr{)}^{\frac{1}{2}}V^{V}\bigl{(}V^{h}\bigr{)}^{\frac{1} {2}}\ \,\]
for,
\[\big{(}V^{h}\big{)}^{\frac{1}{2}}_{\sigma\rho}\equiv\exp\bigl{(}\frac{\beta}{2}\!\sum_{a\leq i\leq b-1}\!\!\sigma_{i}\sigma_{i+1}\bigr{)}\ \,\] \[V^{V}_{\sigma\rho}\equiv\exp\bigl{(}\beta\!\sum_{a\leq i\leq b}\!\!\sigma_{i}\rho_{i}\bigr{)}\ \,\]
in the basis \(\bigl{\{}e_{\sigma}\bigr{\}}\), in which \(\big{(}V^{h}\big{)}^{\frac{1}{2}}_{\sigma\rho}\equiv 0\) if \(\sigma\neq\rho\), and similarly, \(V^{V}_{\sigma\rho}\equiv 0\) if \(\sigma_{a}\neq\rho_{a}\) or \(\sigma_{b}\neq\rho_{b}\).
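A minimal numerical sketch (our addition) of Definition 4 for a short row of \(n\) spins, using the reading given above in which \(\big{(}V^{h}\big{)}^{\frac{1}{2}}\) is diagonal in the row configuration and \(V^{V}\) couples consecutive rows; free boundary conditions are assumed, so the constraint on \(\sigma_{a},\sigma_{b}\) is dropped:

```
import numpy as np
from itertools import product

n, beta = 4, 0.4
states = list(product([-1, 1], repeat=n))

# Diagonal half-power of V^h: in-row couplings sigma_i sigma_{i+1}.
Vh_half = np.diag([np.exp(0.5 * beta * sum(s[i] * s[i + 1] for i in range(n - 1)))
                   for s in states])
# V^V: couplings sigma_i rho_i between consecutive rows.
Vv = np.array([[np.exp(beta * sum(si * ri for si, ri in zip(s, r)))
                for r in states] for s in states])
V = Vh_half @ Vv @ Vh_half

# V is symmetric; its largest eigenvalue governs the free energy per row.
print(np.linalg.eigvalsh(V).max())
```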
The fact that the Ashkin-Teller model and the Ising model belong to the same universality class allows us to make use of the notions of massive, and massless, s-holomorphicity above, as follows.
**Definition 5** (_massive s-holomorphicity below the critical temperature of the Ashkin-Teller model_).: Fix \(J,U\in\mathbf{R}\). Over the square grid \(\Omega\), the Ashkin-Teller observable \(\psi\bigl{(}r\bigr{)}\), for couplings \(J,U<\frac{1}{4}\mathrm{log}\bigl{(}3\bigr{)}\), satisfies,
\[\psi\bigl{(}N\bigr{)}+\nu^{-1}\lambda\psi\bigl{(}N\bigr{)}=\nu^{ -1}\psi\bigl{(}E\bigr{)}+\lambda\psi\bigl{(}E\bigr{)}\ \,\] \[\psi\bigl{(}N\bigr{)}+\nu\lambda^{-1}\psi\bigl{(}N\bigr{)}=\nu\psi \bigl{(}W\bigr{)}+\lambda^{-1}\psi\bigl{(}W\bigr{)}\ \,\] \[\psi\bigl{(}S\bigr{)}+\nu\lambda^{3}\psi\bigl{(}S\bigr{)}=\nu\psi \bigl{(}E\bigr{)}+\lambda^{3}\psi\bigl{(}E\bigr{)}\ \,\] \[\psi\bigl{(}S\bigr{)}+\nu^{-1}\lambda^{-3}\psi\bigl{(}S\bigr{)}= \nu^{-1}\psi\bigl{(}W\bigr{)}+\lambda^{-3}\psi\bigl{(}W\bigr{)}\ \,\]
for parameters \(\lambda\) and \(\nu\) provided in **Definition 1**, and any face of \(\Omega\) with edges E,N,W,S.
**Definition 6** (_massless s-holomorphicity at the critical temperature of the Ashkin-Teller model_).: Fix \(J,U\in\mathbf{R}\). Over the square grid \(\Omega\), the Ashkin-Teller observable \(\psi\bigl{(}r\bigr{)}\), for couplings \(J\equiv U\equiv\frac{1}{4}\mathrm{log}\bigl{(}3\bigr{)}\), satisfies,
\[\psi\bigl{(}N\bigr{)}+\lambda\psi\bigl{(}N\bigr{)}=\psi\bigl{(} E\bigr{)}+\lambda\psi\bigl{(}E\bigr{)}\ \,\] \[\psi\bigl{(}N\bigr{)}+\lambda^{-1}\psi\bigl{(}N\bigr{)}=\psi \bigl{(}W\bigr{)}+\lambda^{-1}\psi\bigl{(}W\bigr{)}\ \,\] \[\psi\bigl{(}S\bigr{)}+\lambda^{3}\psi\bigl{(}S\bigr{)}=\psi \bigl{(}E\bigr{)}+\lambda^{3}\psi\bigl{(}E\bigr{)}\ \,\] \[\psi\bigl{(}S\bigr{)}+\lambda^{-3}\psi\bigl{(}S\bigr{)}=\psi \bigl{(}W\bigr{)}+\lambda^{-3}\psi\bigl{(}W\bigr{)}\ \,\]
for the parameter \(\lambda\) provided in **Definition 1**, and any face of \(\Omega\) with edges E,N,W,S.
For the loop \(\mathrm{O}\bigl(n\bigr)\) model, notions of discrete holomorphicity, a weaker form of s-holomorphicity, have already been employed by Duminil-Copin and Smirnov to demonstrate that the connective constant of the honeycomb lattice is \(\sqrt{2+\sqrt{2}}\) at the critical point, [12].
**Lemma** (_discrete holomorphicity of the loop parafermionic observable for the connective constant of the honeycomb lattice_, **Lemma 1**, [12]).: Fix \(\sigma\equiv\frac{5}{8}\). For the loop \(\mathrm{O}\bigl{(}n\bigr{)}\) observable, at the critical point \(x_{c}\bigl{(}n\bigr{)}\), given any vertex within a finite volume of the hexagonal lattice,
\[\bigl{(}p-v\bigr{)}F^{\mathrm{loop}}\bigl{(}p\bigr{)}+\bigl{(}q-v\bigr{)}F^{ \mathrm{loop}}\bigl{(}q\bigr{)}+\bigl{(}r-v\bigr{)}F^{\mathrm{loop}}\bigl{(} r\bigr{)}=0\ \,\]
for the midpoints of edges \(p,q,r\) adjacent to \(v\).
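Taking Nienhuis' value \(x_{c}(n)=1/\sqrt{2+\sqrt{2-n}}\) as given (an assumption spelled out here for the sketch), the reciprocal at \(n=0\) recovers the connective constant \(\sqrt{2+\sqrt{2}}\) of [12]:

```python
# Illustrative evaluation of Nienhuis' critical point x_c(n) = 1/sqrt(2 + sqrt(2 - n)).
import numpy as np

def x_c(n: float) -> float:
    return 1.0 / np.sqrt(2.0 + np.sqrt(2.0 - n))

# at n = 0 the reciprocal is the honeycomb connective constant sqrt(2 + sqrt(2))
print(np.isclose(1.0 / x_c(0.0), np.sqrt(2.0 + np.sqrt(2.0))))  # True
for n in (0.0, 1.0, 2.0):
    print(n, x_c(n))  # x_c(1) = 1/sqrt(3), x_c(2) = 1/sqrt(2)
```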
From the notion of discrete holomorphicity above at the critical point of the loop \(\mathrm{O}\bigl(n\bigr)\) model, to introduce notions of massive, and massless, s-holomorphicity for the loop \(\mathrm{O}\bigl(n\bigr)\) model, one needs not only to make use of parameters \(\lambda\) and \(\alpha\) different from those introduced in **Definition 1** for the Ising model, but also to account for the total number of terms appearing in the Cauchy-Riemann equations. Immediately, one can observe that the Cauchy-Riemann equations for the Ising and Ashkin-Teller models differ from those of the loop model: in the former, four terms appear, for edges oriented along the E,N,W,S portions of each face of the square lattice, while in the latter only three edges are oriented along each face of the triangular lattice.
For the following definition of s-holomorphicity, along the lines of previous comments above, we make use of the relation originally provided for midpoints of the square lattice, [13],
\[f\big{(}e_{v}\big{)}+\frac{i}{\theta}f\big{(}\widetilde{e}_{v}\big{)}=f\big{(}e _{w}\big{)}+\frac{i}{\theta}f\big{(}\widetilde{e}_{w}\big{)}\ \,\]
for some function \(f\), angle,
\[\theta\equiv\frac{2u-v-w}{\big{|}2u-v-w\big{|}}\ \,\]
and edges,
\[e_{v}\equiv vu\ \,\] \[e_{w}\equiv wu\ \.\]
For the following definitions of s-holomorphicity below for the loop \(\mathrm{O}\big{(}n\big{)}\) model, introduce,
\[e_{1}\equiv\bar{e_{1}}\equiv-1\ \,\] \[e_{2}\equiv\exp\!\big{(}i\frac{4\pi}{3}\big{)}\ \,\] \[\bar{e_{2}}\equiv\exp\!\big{(}-i\frac{4\pi}{3}\big{)}\ \,\] \[e_{3}\equiv\bar{e_{3}}\equiv-1\ \,\] \[e_{4}\equiv\exp\!\big{(}i\frac{\pi}{3}\big{)}\ \,\] \[\bar{e_{4}}\equiv\exp\!\big{(}-i\frac{\pi}{3}\big{)}\ \.\]
We postpone the construction of the transfer matrix for the loop model to _2.2.2_. For the definitions below, which are phrased in terms of the loop transfer matrix, the quantity involved is equivalent to the expected value of the observable,
\[\mathscr{O}\equiv K^{|\gamma|}\lambda^{t_{r}}\bar{\lambda}^{t_{l}}\ \,\]
for the number of paths in a loop configuration, \(\bigl|\gamma\bigr|\), and the numbers of right and left turns of the path, \(t_{r}\) and \(t_{l}\), respectively.
**Definition 7** (_massive s-holomorphicity of the loop parafermionic observable below Nienhuis' critical point_, [14]). For \(x<x_{c}(n)\), the observable,
\[F\big{(}k,m\big{)}\equiv\frac{1}{Z}\left\langle\alpha\right|T_{3}^{M-k}T_{2} \big{(}m\big{)}T_{1}^{k}\left|\beta\right\rangle\ \,\]
defined in terms of the transfer matrix \(T\), \(m\in\big{\{}1,2,\cdots,M\big{\}}\), and \(\alpha,\beta>0\), satisfies,
\[F\big{(}z_{1}\big{)}+\bar{e_{1}}^{2s}F\big{(}\bar{z}_{1}\big{)} =F\big{(}z_{2}\big{)}+\bar{e_{1}}^{2s}F\big{(}\bar{z}_{2}\big{)}\ \,\] \[F\big{(}z_{2}\big{)}+\bar{e_{2}}^{2s}F\big{(}\bar{z}_{2}\big{)} =F\big{(}z_{3}\big{)}+\bar{e_{2}}^{2s}F\big{(}\bar{z}_{3}\big{)}\ \,\] \[F\big{(}z_{3}\big{)}+\bar{e_{3}}^{2s}F\big{(}\bar{z}_{3}\big{)} =F\big{(}z_{4}\big{)}+\bar{e_{3}}^{2s}F\big{(}\bar{z}_{4}\big{)}\ \,\] \[F\big{(}z_{4}\big{)}+\bar{e_{4}}^{2s}F\big{(}\bar{z}_{4}\big{)} =F\big{(}z_{1}\big{)}+\bar{e_{4}}^{2s}F\big{(}\bar{z}_{1}\big{)}\ \,\]
for any face of \(\mathbf{H}\) with edges \(e_{1},e_{2},e_{3},e_{4}\).
**Definition 8**: (_massless s-holomorphicity of the loop parafermionic observable at Nienhuis' critical point_, [14]). For \(x\equiv x_{c}\big{(}n\big{)}\), the observable,
\[F\big{(}k,m\big{)}\equiv\frac{1}{Z}\left\langle\alpha\right|T_{3}^{M-k}T_{2} \big{(}m\big{)}T_{1}^{k}\left|\beta\right\rangle\ \,\]
defined in terms of the transfer matrix \(T\), \(m\in\big{\{}1,2,\cdots,M\big{\}}\), and \(\alpha,\beta>0\), given a parameter,
\[\nu^{\text{loop}}\equiv\bar{\lambda}^{2}\frac{\alpha^{\prime}+i}{\alpha^{ \prime}-i}\equiv\left(\exp\big{(}-i\frac{\pi}{4}\big{)}\right)^{2}\frac{n+i}{ n-i}\ \,\]
for \(0\leq n<2\), satisfies,
\[F\big{(}z_{1}\big{)}+\big{(}\nu^{\text{loop}}\big{)}^{-1}\bar{e }_{1}{}^{2s}F\big{(}\bar{z}_{1}\big{)} =\big{(}\nu^{\text{loop}}\big{)}^{-1}F\big{(}z_{2}\big{)}+\bar{e }_{1}{}^{2s}F\big{(}\bar{z}_{2}\big{)}\ \,\] \[F\big{(}z_{2}\big{)}+\big{(}\nu^{\text{loop}}\big{)}^{-1}\bar{e }_{2}{}^{2s}F\big{(}\bar{z}_{2}\big{)} =\big{(}\nu^{\text{loop}}\big{)}^{-1}F\big{(}z_{3}\big{)}+\bar{e }_{2}{}^{2s}F\big{(}\bar{z}_{3}\big{)}\ \,\] \[F\big{(}z_{3}\big{)}+\big{(}\nu^{\text{loop}}\big{)}^{-1}\bar{e }_{3}{}^{2s}F\big{(}\bar{z}_{3}\big{)} =\big{(}\nu^{\text{loop}}\big{)}^{-1}F\big{(}z_{4}\big{)}+\bar{e }_{3}{}^{2s}F\big{(}\bar{z}_{4}\big{)}\ \,\] \[F\big{(}z_{4}\big{)}+\big{(}\nu^{\text{loop}}\big{)}^{-1}\bar{e }_{4}{}^{2s}F\big{(}\bar{z}_{4}\big{)} =\big{(}\nu^{\text{loop}}\big{)}^{-1}F\big{(}z_{1}\big{)}+\bar{e }_{4}{}^{2s}F\big{(}\bar{z}_{1}\big{)}\ \,\]
for any face of \(\mathbf{H}\) with edges \(e_{1},e_{2},e_{3},e_{4}\).
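A small numerical aside on the parameter \(\nu^{\text{loop}}\) of **Definition 8** (illustrative only, with \(\bar{\lambda}=\exp(-i\frac{\pi}{4})\) as stated): since \(\bigl|(n+i)/(n-i)\bigr|=1\) for real \(n\), the parameter lies on the unit circle.

```python
# Illustrative check that |nu_loop| = 1 for 0 <= n < 2.
import numpy as np

lam_bar = np.exp(-1j * np.pi / 4)
for n in (0.0, 0.5, 1.0, 1.5):
    nu_loop = lam_bar**2 * (n + 1j) / (n - 1j)
    print(n, np.isclose(abs(nu_loop), 1.0))  # True for each n
```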
In the setting of the Ising model, the two generators of the Clifford algebra,
\[p_{k}\ \,\] \[q_{k}\ \,\]
act on the basis elements \(e_{\sigma}\), through the action,
\[p_{k}\big{(}e_{\sigma}\big{)}=\sigma_{k+\frac{1}{2}}e_{\tau}\ \,\] \[q_{k}\big{(}e_{\sigma}\big{)}=i\sigma_{k-\frac{1}{2}}e_{\tau}\ \,\] (*)
for a configuration indexed with \(k\) over the dual interval \(\mathbf{I}^{*}\), acting on the sample space \(\bigl\{\pm 1\bigr\}^{\mathbf{I}}\) over \(\mathbf{I}\), where \(\tau\) is given by,
\[\tau_{x}\equiv\ \left\{\begin{array}{ll}\sigma_{x}&,\,x>k\ \,\\ -\sigma_{x}&,\,x<k\ \,\end{array}\right.\]
in which, from the definition of \(\tau_{x}\) above, for \(x<k\) the spin is flipped (for the Potts models coupled in the Ashkin-Teller model, to any of the \(q-1\) remaining colors). From the definitions of s-holomorphicity for the Ashkin-Teller and \(\mathrm{O}\bigl(n\bigr)\) models, one must also encode boundary conditions for the discrete boundary value problems, which is achieved with the following.
**Definition 9**: (_Riemann boundary conditions for the Ashkin-Teller model_). For the Ashkin-Teller model, a function \(f:\Omega\longrightarrow\mathbf{C}\) is said to satisfy Riemann boundary conditions, if for an edge \(z\) incident to the boundary,
\[f\big{(}z\big{)}\big{|}\big{|}\frac{1}{\sqrt{\tau_{\text{cw}}\big{(}z\big{)}}}\ \,\]
under the convention provided in (**).
**Definition 10**: (_Riemann boundary conditions for the loop model_). For the loop model, a function \(f:\Omega\longrightarrow\mathbf{C}\) is said to satisfy Riemann boundary conditions, if for an edge \(z\) incident to the boundary of some finite volume of the hexagonal lattice,
\[f(z)\big{|}\big{|}\frac{1}{\sqrt{\tau_{\rm cw}\big{(}z\big{)}}}\ \,\]
under the convention provided in (**).
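Returning to the action (*) of the generators, a minimal sketch (with the indexing conventions \(\sigma_{k+\frac{1}{2}}\) and \(\sigma_{k-\frac{1}{2}}\) read off at sites \(k\) and \(k-1\) respectively, an assumption made here purely for concreteness) realizes \(p_{k}\) and \(q_{k}\) as matrices on \(\{\pm 1\}^{\mathbf{I}}\) for a short interval and verifies \(p_{k}^{2}=q_{k}^{2}=1\):

```python
# Sketch of the generator action (*): p_k scales e_sigma by sigma_{k+1/2} and
# flips all spins strictly left of k; q_k scales by i*sigma_{k-1/2} and flips
# the same spins.  Site conventions are illustrative assumptions.
import itertools
import numpy as np

n = 4
confs = [tuple(s) for s in itertools.product([-1, 1], repeat=n)]
index = {s: j for j, s in enumerate(confs)}

def flip_left(s, k):
    return tuple(-x if j < k else x for j, x in enumerate(s))

def generator(k, use_q):
    M = np.zeros((2**n, 2**n), dtype=complex)
    for s in confs:
        weight = 1j * s[k - 1] if use_q else s[k]
        M[index[flip_left(s, k)], index[s]] = weight
    return M

k = 2
p_k, q_k = generator(k, use_q=False), generator(k, use_q=True)
I = np.eye(2**n)
print(np.allclose(p_k @ p_k, I), np.allclose(q_k @ q_k, I))  # True True
```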
Equipped with massive, and massless, s-holomorphicity, as well as boundary conditions for discrete Riemann boundary value problems, below we also provide properties of the propagation matrices for the Ashkin-Teller and loop models.
**Lemma 3** (_propagation mechanism for the Ashkin-Teller model below the critical \(\frac{1}{4}{\rm log}\bigl(3\bigr)\) threshold_). For \(J,U<\frac{1}{4}{\rm log}\bigl(3\bigr)\) and some function \(f\equiv f^{\rm AT}\), the propagation mechanism \(P^{\rm AT}:\bigl({\bf R}^{2}\bigr)^{\bf I^{\prime}}\longrightarrow\bigl({\bf R}^{2}\bigr)^{\bf I^{\prime}}\) satisfies, for \(k\in{\bf I^{\prime}}\backslash\{k_{L},k_{R}\}\),
\[\bigl(P^{\rm AT}f\bigr)\bigl(k\bigr)\equiv\frac{\lambda^{-3}}{\sqrt{2}}f\bigl(k-1\bigr)+2f\bigl(k\bigr)+\frac{\lambda^{3}}{\sqrt{2}}f\bigl(k+1\bigr)+\frac{1}{\sqrt{2}}f\bigl(\overline{k-1}\bigr)-\sqrt{2}f\bigl(\bar{k}\bigr)+\frac{1}{\sqrt{2}}f\bigl(\overline{k+1}\bigr)\ ,\]
\[\bigl(P^{\rm AT}f\bigr)\bigl(k_{L}\bigr)\equiv\bigl(1+\frac{1}{\sqrt{2}}\bigr)f\bigl(k_{L}\bigr)+\frac{\lambda^{3}}{\sqrt{2}}f\bigl(k_{L}+1\bigr)+\bigl(\lambda^{3}+\frac{\lambda^{-3}}{\sqrt{2}}\bigr)f\bigl(\bar{k}_{L}\bigr)+\frac{1}{\sqrt{2}}f\bigl(\overline{k_{L}+1}\bigr)\ ,\]
\[\bigl(P^{\rm AT}f\bigr)\bigl(k_{R}\bigr)\equiv\frac{\lambda^{-3}}{\sqrt{2}}f\bigl(k_{R}-1\bigr)+\bigl(1+\frac{1}{\sqrt{2}}\bigr)f\bigl(k_{R}\bigr)+\frac{1}{\sqrt{2}}f\bigl(\overline{k_{R}-1}\bigr)+\bigl(\lambda^{-3}+\frac{\lambda^{3}}{\sqrt{2}}\bigr)f\bigl(\bar{k}_{R}\bigr)\ .\]
**Lemma 4** (_propagation mechanism for the Ashkin-Teller model at the critical \(\frac{1}{4}{\rm log}\bigl(3\bigr)\) threshold_). For \(J\equiv U\equiv\frac{1}{4}{\rm log}\bigl(3\bigr)\) and some function \(f\equiv f^{\rm AT}\), the propagation mechanism \(P^{\rm AT}_{\frac{1}{4}{\rm log}(3)}:\bigl({\bf R}^{2}\bigr)^{\bf I^{\prime}}\longrightarrow\bigl({\bf R}^{2}\bigr)^{\bf I^{\prime}}\) satisfies, for \(k\in{\bf I^{\prime}}\backslash\{k_{L},k_{R}\}\),
\[\bigl(P^{\rm AT}_{\frac{1}{4}{\rm log}(3)}f\bigr)\bigl(k\bigr)\equiv\frac{-S-i}{2S}f\bigl(k-1\bigr)+\frac{C^{2}}{S}f\bigl(k\bigr)+\frac{i-S}{2S}f\bigl(k+1\bigr)+\frac{C}{2S}f\bigl(\overline{k-1}\bigr)-Cf\bigl(\bar{k}\bigr)+\frac{C}{2S}f\bigl(\overline{k+1}\bigr)\ ,\]
\[\bigl(P^{\rm AT}_{\frac{1}{4}{\rm log}(3)}f\bigr)\bigl(k_{L}\bigr)\equiv\frac{(S+C)C}{2S}f\bigl(k_{L}\bigr)+\frac{i-S}{2S}f\bigl(k_{L}+1\bigr)+f\bigl(\bar{k}_{L}\bigr)+\frac{C}{2S}f\bigl(\overline{k_{L}+1}\bigr)\ ,\]
\[\bigl(P^{\rm AT}_{\frac{1}{4}{\rm log}(3)}f\bigr)\bigl(k_{R}\bigr)\equiv\frac{-S-i}{2S}f\bigl(k_{R}-1\bigr)+\frac{(S+C)C}{2S}f\bigl(k_{R}\bigr)+\frac{C}{2S}f\bigl(\overline{k_{R}-1}\bigr)+\frac{-(S+C)S+i(C-S)}{2S}f\bigl(\bar{k}_{R}\bigr)\ ,\]
with \(S\equiv\sinh\big{(}2\beta\big{)}\) and \(C\equiv\cosh\big{(}2\beta\big{)}\).
For the propagation mechanisms for the loop model, we make use of a nonempty interval of the hexagonal lattice, which we denote with \(\mathbf{I}^{**}\) (see the figure below for a visual representation of \(\mathbf{I}^{**}\)).
Figure 1: A depiction of the interval \({\bf I^{**}}\), with endpoints at the middle of edges, \(k_{L}^{*}\) and \(k_{R}^{*}\).
**Lemma 5** (_propagation mechanism for the loop model at Nienhuis' critical point_).: For \(x\equiv x_{c}(n)\) and some function \(f\equiv f^{\text{loop}}\), the propagation mechanism \(P^{\text{loop}}_{1/\sqrt{2+\sqrt{2-n}}}\equiv P^{\text{loop}}_{x_{c}(n)}\equiv P^{\text{loop}}:\left(\mathbf{H}\right)^{\mathbf{I}^{**}}\longrightarrow\left(\mathbf{H}\right)^{\mathbf{I}^{**}}\) satisfies, for \(k\in\mathbf{I}^{**}\backslash\left\{k_{L}^{*},k_{R}^{*}\right\}\equiv\mathbf{I}^{**}\backslash\left\{k_{L},k_{R}\right\}\),
\[\bigl(P^{\text{loop}}f\bigr)\bigl(k\bigr)\equiv\frac{\lambda^{-2}}{\sqrt{3}}f\bigl(k-1\bigr)+2f\bigl(k\bigr)+\frac{\lambda^{2}}{\sqrt{3}}f\bigl(k+1\bigr)+\frac{1}{\sqrt{3}}f\bigl(\overline{k-1}\bigr)-\sqrt{3}f\bigl(\bar{k}\bigr)+\frac{1}{\sqrt{3}}f\bigl(\overline{k+1}\bigr)\ ,\]
\[\bigl(P^{\text{loop}}f\bigr)\bigl(k_{L}\bigr)\equiv\bigl(1+\frac{1}{\sqrt{3}}\bigr)f\bigl(k_{L}\bigr)+\frac{\lambda^{2}}{\sqrt{3}}f\bigl(k_{L}+1\bigr)+\bigl(\lambda^{2}+\frac{\lambda^{-2}}{\sqrt{3}}\bigr)f\bigl(\bar{k}_{L}\bigr)+\frac{1}{\sqrt{3}}f\bigl(\overline{k_{L}+1}\bigr)\ ,\]
\[\bigl(P^{\text{loop}}f\bigr)\bigl(k_{R}\bigr)\equiv\frac{\lambda^{-2}}{\sqrt{3}}f\bigl(k_{R}-1\bigr)+\bigl(1+\frac{1}{\sqrt{3}}\bigr)f\bigl(k_{R}\bigr)+\frac{1}{\sqrt{3}}f\bigl(\overline{k_{R}-1}\bigr)+\bigl(\lambda^{-2}+\frac{\lambda^{2}}{\sqrt{3}}\bigr)f\bigl(\bar{k}_{R}\bigr)\ .\]
For the remaining definition of the other loop propagator below Nienhuis' critical point, we make use of the following fact. Independently of the number of loops, recall that the loop probability measure possesses a high-temperature expansion, [6],
\[\mathbf{P}^{\text{loop},\xi}_{\Lambda_{\mathbf{H}}}\bigl[\sigma\bigr]\overset{\text{HTE}}{\sim}\frac{n^{k(\sigma)}x^{e(\sigma)}\exp\bigl(hr\bigl(\sigma\bigr)+h^{\prime}r^{\prime}\bigl(\sigma\bigr)\bigr)}{Z^{\text{loop},\xi}_{\Lambda_{\mathbf{H}}}\bigl(\sigma\bigr)}\xrightarrow[n=1,\ h^{\prime}=0]{\ \beta=\frac{1}{2}|\log x|\ }\mathbf{P}^{\text{Ising},\chi}_{\Lambda_{\mathbf{Z}^{2}}}\bigl[\sigma^{\text{Ising}}\bigr]\equiv\text{O}\bigl(1\bigr)\ \text{measure}\ ,\]
for an Ising model configuration \(\sigma^{\text{Ising}}\), under the corresponding probability measure supported over \(\mathbf{Z}^{2}\) with boundary conditions \(\chi\), for the number of connected components \(k\big{(}\sigma\big{)}\) of the loop configuration, and also for the factors,
\[r\bigl(\sigma\bigr)\equiv\sum_{u\in G}\sigma_{u}\ ,\qquad r^{\prime}\bigl(\sigma\bigr)\equiv\sum_{t\equiv\{u,v,w\}}\mathbf{1}_{\{\sigma_{u}\equiv\sigma_{v}\equiv\sigma_{w}\}}\ ,\]
corresponding to the two external fields, which respectively represent the summation over all spins in finite volume, and the number of monochromatically colored triangles. Hence we can further apply results for the propagator matrix for the loop \(\text{O}\big{(}1\big{)}\) model in the absence of one external field. To distinguish the inverse temperature of the Ising model that is in correspondence with the \(\text{O}\big{(}1\big{)}\) model, denote the following inverse temperature,
\[\beta^{\text{loop}}\equiv\frac{1}{2}\big{|}\text{log}x\big{|}\ \.\]
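A numerical tie-in (illustrative, again taking \(x_{c}(n)=1/\sqrt{2+\sqrt{2-n}}\) as given): at \(n=1\), evaluating \(\beta^{\text{loop}}\) at \(x_{c}(1)=1/\sqrt{3}\) recovers the \(\frac{1}{4}\mathrm{log}\bigl(3\bigr)\) threshold appearing throughout this section.

```python
# beta_loop = (1/2)|log x| at x = x_c(1) = 1/sqrt(3) equals (1/4) log 3.
import numpy as np

x = 1.0 / np.sqrt(3.0)
beta_loop = 0.5 * abs(np.log(x))
print(np.isclose(beta_loop, 0.25 * np.log(3.0)))   # True
S_loop, C_loop = np.sinh(2 * beta_loop), np.cosh(2 * beta_loop)
print(S_loop, C_loop)   # 1/sqrt(3) and 2/sqrt(3), respectively
```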
**Lemma 6** (_propagation mechanism for the loop model below Nienhuis' critical point_).: For \(x<x_{c}\bigl(n\bigr)\) and some function \(f\equiv f^{\text{loop}}\), the propagation mechanism \(P^{\text{loop}}_{x}:\bigl(\mathbf{H}\bigr)^{\mathbf{I}^{**}}\longrightarrow\bigl(\mathbf{H}\bigr)^{\mathbf{I}^{**}}\) satisfies, for \(k\in\mathbf{I}^{**}\backslash\left\{k_{L}^{*},k_{R}^{*}\right\}\equiv\mathbf{I}^{**}\backslash\left\{k_{L},k_{R}\right\}\),
\[\bigl(P^{\text{loop}}_{x}f\bigr)\bigl(k\bigr)\equiv\frac{-S^{\text{loop}}-i}{2S^{\text{loop}}}f\bigl(k-1\bigr)+\frac{\bigl(C^{\text{loop}}\bigr)^{2}}{S^{\text{loop}}}f\bigl(k\bigr)+\frac{-S^{\text{loop}}+i}{2S^{\text{loop}}}f\bigl(k+1\bigr)+\frac{C^{\text{loop}}}{2S^{\text{loop}}}f\bigl(\overline{k-1}\bigr)-C^{\text{loop}}f\bigl(\bar{k}\bigr)+\frac{C^{\text{loop}}}{2S^{\text{loop}}}f\bigl(\overline{k+1}\bigr)\ ,\]
\[\bigl(P^{\text{loop}}_{x}f\bigr)\bigl(k_{L}\bigr)\equiv\frac{\bigl(S^{\text{loop}}+C^{\text{loop}}\bigr)C^{\text{loop}}}{2S^{\text{loop}}}f\bigl(k_{L}\bigr)+\frac{i-S^{\text{loop}}}{2S^{\text{loop}}}f\bigl(k_{L}+1\bigr)+f\bigl(\bar{k}_{L}\bigr)+\frac{C^{\text{loop}}}{2S^{\text{loop}}}f\bigl(\overline{k_{L}+1}\bigr)\ ,\]
\[\bigl(P^{\text{loop}}_{x}f\bigr)\bigl(k_{R}\bigr)\equiv\frac{-S^{\text{loop}}-i}{2S^{\text{loop}}}f\bigl(k_{R}-1\bigr)+\frac{\bigl(S^{\text{loop}}+C^{\text{loop}}\bigr)C^{\text{loop}}}{2S^{\text{loop}}}f\bigl(k_{R}\bigr)+\frac{C^{\text{loop}}}{2S^{\text{loop}}}f\bigl(\overline{k_{R}-1}\bigr)+\frac{-\bigl(S^{\text{loop}}+C^{\text{loop}}\bigr)S^{\text{loop}}+i\bigl(C^{\text{loop}}-S^{\text{loop}}\bigr)}{2S^{\text{loop}}}f\bigl(\bar{k}_{R}\bigr)\ ,\]
given constants satisfying,
\[S^{\text{loop}}\equiv\sinh\bigl{(}2\beta^{\text{loop}}\bigr{)}\ \,\] \[C^{\text{loop}}\equiv\cosh\bigl{(}2\beta^{\text{loop}}\bigr{)}\ \.\]
Below, we state results regarding the propagation mechanisms \(P^{\text{AT}}\), \(P^{\text{AT}}_{\frac{1}{2}\log(3)}\), \(P^{\text{loop}}\) and \(P^{\text{loop}}_{x}\).
**Proposition 1** (_propagation mechanism of the Ashkin-Teller model at criticality_, **Proposition 7**, [13]). The matrix \(P^{\text{AT}}_{\frac{1}{4}\log(3)}\) is symmetric, with eigenvalues \(\lambda^{\text{AT},\pm}_{\alpha}\), where the \(\bigl\{\lambda_{\alpha}\bigr\}_{1\leq\alpha<|\Gamma^{*}|}\) are distinct, each with magnitude strictly larger than \(1\).
_Proof of Proposition 1._ This is an application of the argument for **Proposition 7** contained in [13].
**Proposition 2** (_propagation mechanism of the loop model at Nienhuis' critical point_, **Proposition 7**, [13]). The matrix \(P^{\text{loop}}_{x_{c}(n)}\) is symmetric, with eigenvalues \(\lambda^{\text{loop},\pm}_{\alpha}\), where the \(\bigl\{\lambda^{\text{loop}}_{\alpha}\bigr\}_{1\leq\alpha<|\Gamma^{*}|}\) are distinct, each with magnitude strictly larger than \(1\).
_Proof of Proposition 2._ We present more substantial modifications of the argument than those provided for **Proposition 7** in [13]. To begin, observe,
\[\biggl{(}\bigl{(}P^{\text{loop}}_{x}f\bigr{)}\bigl{(}k\bigr{)} \biggr{)}^{-1}\equiv\biggl{(}\frac{-S^{\text{loop}}-i}{2S^{\text{loop}}}f \bigl{(}k-1\bigr{)}+\frac{\bigl{(}C^{\text{loop}}\bigr{)}^{2}}{S^{\text{loop }}}f\bigl{(}k\bigr{)}+\frac{-S^{\text{loop}}+i}{2S^{\text{loop}}}f\bigl{(}k+1 \bigr{)}+\frac{C^{\text{loop}}}{2S^{\text{loop}}}f\bigl{(}k-1\bigr{)}+\cdots\] \[C^{\text{loop}}f\bigl{(}k\bigr{)}+\frac{C^{\text{loop}}}{2S^{ \text{loop}}}f\bigl{(}k+1\bigr{)}\biggr{)}^{-1}\] \[\equiv\frac{-S^{\text{loop}}-i}{2S^{\text{loop}}}f\bigl{(}k-1 \bigr{)}^{-1}+\frac{\bigl{(}C^{\text{loop}}\bigr{)}^{2}}{S^{\text{loop}}}f \bigl{(}k\bigr{)}^{-1}+\frac{-S^{\text{loop}}+i}{2S^{\text{loop}}}f\bigl{(}k+ 1\bigr{)}^{-1}+\frac{C^{\text{loop}}}{2S^{\text{loop}}}f\bigl{(}k-1\bigr{)}^{ -1}+\cdots\] \[\frac{C^{\text{loop}}}{2S^{\text{loop}}}f\bigl{(}k\bigr{)}\ \,\]
corresponding to backpropagation from \(k\),
\[\biggl{(}\bigl{(}P^{\text{loop}}_{x}f\bigr{)}\bigl{(}k_{L}\bigr{)} \biggr{)}^{-1}\equiv\biggl{(}\frac{\bigl{(}S^{\text{loop}}+C^{\text{loop}} \bigr{)}C^{\text{loop}}}{2S^{\text{loop}}}f\bigl{(}k_{L}\bigr{)}+\frac{i-S^{ \text{loop}}}{2S^{\text{loop}}}f\bigl{(}k_{L}+1\bigr{)}+\cdots\] \[\qquad\qquad\qquad\qquad\equiv\frac{\bigl{(}S^{\text{loop}}+C^{ \text{loop}}\bigr{)}C^{\text{loop}}}{2S^{\text{loop}}}f\bigl{(}k_{L}\bigr{)}^{ -1}+\frac{i-S^{\text{loop}}}{C^{\text{loop}}}f\bigl{(}k_{L}+1\bigr{)}^{-1}+\cdots\] \[\qquad\qquad\qquad\qquad\equiv\frac{\bigl{(}S^{\text{loop}}+C^{ \text{loop}}\bigr{)}C^{\text{loop}}}{2S^{\text{loop}}}f\bigl{(}k_{L}-1\bigr{)} +\frac{i-S^{\text{loop}}}{C^{\text{loop}}}f\bigl{(}k_{L}\bigr{)}+\cdots\] \[\bigl{(}\frac{-\bigl{(}S^{\text{loop}}+C^{\text{loop}}\bigr{)} S^{\text{loop}}+i\bigl{(}C^{\text{loop}}-S^{\text{loop}}\bigr{)}}{2S^{\text{loop}}} \bigr{)}f\bigl{(}k_{L}-1\bigr{)}+\frac{C^{\text{loop}}}{2S^{\text{loop}}}f \bigl{(}k_{L}\bigr{)}\ \,\]
corresponding to backpropagation from \(k_{L}\),
\[\Big{(}\big{(}P_{x}^{\text{loop}}f\big{)}\big{(}k_{R}\big{)}\Big{)}^{ -1}\equiv\Big{(}-\frac{i+S^{\text{loop}}}{2S^{\text{loop}}}f\big{(}k_{R}-1 \big{)}+\big{(}\frac{\big{(}S^{\text{loop}}+C^{\text{loop}}\big{)}C^{\text{loop }}}{2S^{\text{loop}}}f\big{(}k_{R}\big{)}+\cdots\] \[\equiv-\frac{i+S^{\text{loop}}}{2S^{\text{loop}}}f\big{(}k_{R}-1 \big{)}^{-1}+\big{(}\frac{\big{(}S^{\text{loop}}+C^{\text{loop}}\big{)}C^{ \text{loop}}}{2S^{\text{loop}}}f\big{(}k_{R}\big{)}^{-1}+\frac{\big{(}S^{\text {loop}}+C^{\text{loop}}\big{)}C^{\text{loop}}}{2S^{\text{loop}}}f\big{(}k_{R} \big{)}^{-1}+\cdots\] \[\equiv-\frac{i+S^{\text{loop}}}{2S^{\text{loop}}}f\big{(}k_{R}- 2\big{)}+\big{(}\frac{\big{(}S^{\text{loop}}+C^{\text{loop}}\big{)}C^{\text {loop}}}{2S^{\text{loop}}}\big{)}f\big{(}k_{R}-1\big{)}+\frac{C^{\text{loop}} }{2S^{\text{loop}}}f\big{(}\bar{k}_{R}\big{)}+\cdots\] \[\big{(}\frac{-\big{(}S^{\text{loop}}+C^{\text{loop}}\big{)}S^{ \text{loop}}+i\big{(}C^{\text{loop}}-S^{\text{loop}}\big{)}}{2S^{\text{loop }}}\big{)}f\big{(}k_{R}-1\big{)}\ \,\]
corresponding to backpropagation from \(k_{R}\), in which the entries of the inverse matrix \(\bigl(P_{x}^{\text{loop}}\bigr)^{-1}\) are defined from the expressions provided above. Moreover, below Nienhuis' critical point, from **Definition 7**, the set of equations expressing that \(z_{1},z_{2},z_{3},z_{4}\) are discretely holomorphic implies that the relations transform to,
\[i\big{(}P_{x}^{\text{loop}}\bar{f}\big{)}\big{(}k\big{)}\equiv \frac{-S^{\text{loop}}-i}{2S^{\text{loop}}}f\big{(}\bar{k}-1\big{)}+\frac{ \big{(}C^{\text{loop}}\big{)}^{2}}{S^{\text{loop}}}f\big{(}\bar{k}\big{)}+ \frac{-S^{\text{loop}}+i}{2S^{\text{loop}}}f\big{(}\bar{k}+1\big{)}+\frac{C^{ \text{loop}}}{2S^{\text{loop}}}f\big{(}\bar{k}-1\big{)}-\cdots\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad
Also, the fact that \(P_{x}^{\text{loop}}\equiv\bigl(P_{x}^{\text{loop}}\bigr)^{\text{T}}\) follows from the observation, for the loop propagator at the critical loop parameter \(x_{c}\bigl(n\bigr)\), that,
\[\bigg{(}\big{(}P^{\text{loop}}f\big{)}\big{(}\eta\bar{k}\big{)} \bigg{)}^{\text{T}}\equiv\bigg{(}\frac{\lambda^{-2}}{\sqrt{3}}f\big{(}\eta_{ \bar{k}-1}\big{)}+2f\big{(}\eta_{\bar{k}}\big{)}+\frac{\lambda^{2}}{\sqrt{3}}f \big{(}\eta_{k+1}\big{)}+\frac{1}{\sqrt{3}}f\big{(}\eta_{\bar{k}^{\prime}}-1 \big{)}-\sqrt{3}f\big{(}\bar{\eta}_{k^{\prime}}\big{)}+\frac{1}{\sqrt{3}}f \big{(}\bar{\eta}_{k^{\prime}+1}\big{)}\bigg{)}^{\text{T}}\] \[\equiv\bigg{(}\big{(}P^{\text{loop}}f\big{)}\big{(}k\big{)}\bigg{)} ^{\text{T}}\enspace,\] \[\bigg{(}\big{(}P^{\text{loop}}f\big{)}\big{(}\eta\bar{k_{L}} \big{)}\bigg{)}^{\text{T}}\equiv\bigg{(}\frac{\lambda^{-2}}{\sqrt{3}}f\big{(} \eta_{\bar{k}_{R}-1}\big{)}+\big{(}1+\frac{1}{\sqrt{3}}\big{)}f\big{(}\eta_{ \bar{k}_{R}}\big{)}+\frac{1}{\sqrt{3}}f\big{(}\bar{\eta}_{(k_{R})^{\prime}-1} \big{)}+\big{(}\lambda^{-2}+\frac{\lambda^{3}}{\sqrt{3}}\big{)}f\big{(}\bar{ \eta}_{(k_{R})^{\prime}}\big{)}\bigg{)}^{\text{T}}\] \[\equiv\bigg{(}\big{(}P^{\text{loop}}f\big{)}\big{(}k_{L}\big{)} \bigg{)}^{\text{T}}\enspace,\] \[\equiv\bigg{(}\big{(}P^{\text{loop}}f\big{)}\big{(}k_{R}\big{)} \bigg{)}^{\text{T}}\enspace,\]
for parameters,
\[\eta_{\bar{k}-1}\equiv\eta\bigl(\bar{k}-1\bigr)\ ,\quad \eta_{\bar{k}}\equiv\eta\bar{k}\ ,\quad \eta_{\bar{k}^{\prime}-1}\equiv\eta\bigl(\bar{k}^{\prime}-1\bigr)\ ,\quad \eta_{\bar{k}^{\prime}}\equiv\eta\bar{k}^{\prime}\ ,\quad \eta_{\bar{k}^{\prime}+1}\equiv\eta\bigl(\bar{k}^{\prime}+1\bigr)\ ,\]
\[\eta_{\bar{k}_{R}-1}\equiv\eta\bigl(\bar{k}_{R}-1\bigr)\ ,\quad \eta_{\bar{k}_{R}}\equiv\eta\bar{k}_{R}\equiv\eta_{(k_{R})^{\prime}}\ ,\quad \eta_{(k_{R})^{\prime}-1}\equiv\eta\bigl(k_{R}-1\bigr)\ .\]
Also, observe that two maps, one from \(\mathbf{I}_{\frac{1}{2}}^{**}\xrightarrow{\varphi_{1}}\mathbf{C}\), and the other from \(\mathbf{C}\xrightarrow{\varphi_{2}}\mathbf{I}_{0}^{**}\equiv\mathbf{I}^{**}\), imply that the propagator applied to either \(\varphi_{1}\) or \(\varphi_{2}\) takes the form,
\[A\varphi_{1}\enspace,\] \[A\varphi_{2}\enspace,\]
for,
\[A:\bigl(\mathbf{R}^{2}\bigr)^{\mathbf{I}^{**}}\longrightarrow\bigl(\mathbf{R}^{2}\bigr)^{\mathbf{I}^{**}}\ .\]
From these objects, one has,
\[\begin{bmatrix}\big{(}P^{\text{loop}}\big{)}_{11}&\cdots&\big{(}P^{\text{loop }}\big{)}_{n1}\\ \vdots&\vdots&\vdots\\ \big{(}P^{\text{loop}}\big{)}_{1n}&\cdots&\big{(}P^{\text{loop}}\big{)}_{nn} \end{bmatrix}\begin{bmatrix}\big{(}P^{\text{loop}}\big{)}_{11}&\cdots&\big{(}P ^{\text{loop}}\big{)}_{n1}\\ \vdots&\vdots&\vdots\\ \big{(}P^{\text{loop}}\big{)}_{1n}&\cdots&\big{(}P^{\text{loop}}\big{)}_{nn} \end{bmatrix}^{\text{T}}\geq 0\enspace.\]
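The displayed inequality is an instance of the generic fact that a Gram matrix \(MM^{\mathrm{T}}\) is positive semidefinite; a short numerical illustration (with a random stand-in for the propagator, purely for demonstration):

```python
# Any real matrix M yields a positive semidefinite Gram matrix M M^T.
import numpy as np

M = np.random.default_rng(0).standard_normal((6, 6))
print(np.all(np.linalg.eigvalsh(M @ M.T) >= -1e-12))  # True
```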
Away from the critical point, similar arguments hold which demonstrate that the propagation matrix is continuous in \(x\) in a neighborhood of \(x_{c}\bigl(n\bigr)\), and hence does not have any zero eigenvalues in its spectrum.
To demonstrate that 1 cannot be an eigenvalue of \(P_{x}^{\rm loop}\), or of \(P^{\rm loop}\), argue by contradiction. That is, if 1 were an eigenvalue of either propagation matrix at, or below, the critical parameter \(x_{c}\bigl(n\bigr)\), then one would either have that \(P_{x}^{\rm loop}f=f\), or that \(P^{\rm loop}f=f\). For the first case, to show that one arrives at a contradiction and that \(f\equiv 0\), define the extension of the observable with the action,
\[i\big{[}h\big{(}x+\frac{1}{2}\big{)}+h\big{(}x-\frac{1}{2}\big{)}\big{]}=h\big{(} x+\frac{i}{2}\big{)}+h\big{(}x-\frac{i}{2}\big{)}\equiv 0\ \,\]
from the mapping,
\[h^{\rm loop}\equiv h\equiv\varphi_{2}\cdot\varphi_{1}:\mathbb{I}_{0,\frac{1} {2},1}^{**}\longrightarrow\mathbf{C}\ \,\]
for the domain,
\[\mathbb{I}_{0,\frac{1}{2},1}^{**}\equiv\mathbb{I}_{0}^{**}\cup\mathbb{I}_{ \frac{1}{2}}^{**}\cup\mathbb{I}_{1}^{**}\ \.\]
Therefore \(h\) vanishes identically, implying that 1 is not an eigenvalue of \(P_{x}^{\rm loop}\), from the fact that the loop observable \(F^{\rm loop}\equiv 0\). To show that the same observation holds for \(P^{\rm loop}\), for another contradiction, for \(x\neq x_{c}\bigl(n\bigr)\), observe that, for the interval \(\mathbf{I}_{\frac{1}{2}}^{**}\),
\[h\big{(}x+1\big{)}\equiv h\big{(}x\big{)}+\frac{1}{i}\big{[}h\big{(}x+\frac{i} {2}+\frac{1}{2}\big{)}-h\big{(}x-\frac{i}{2}+\frac{1}{2}\big{)}\big{]}\equiv h \big{(}x\big{)}\,\]
or, similarly, that,
\[h\big{(}x\big{)}\equiv h\big{(}x-\frac{1}{4}\big{)}+\frac{1}{i}\big{[}h\big{(} x+\frac{i}{2}-\frac{1}{2}\big{)}-h\big{(}x-\frac{i}{2}-\frac{1}{2}\big{)}\big{]} \equiv h\big{(}x+1\big{)}\,\]
each of which satisfy the relations,
\[F\big{(}z_{1}\big{)}+\big{(}\nu^{\rm loop}\big{)}^{-1}\bar{e}_{ 1}{}^{2s}F\big{(}\bar{z}_{1}\big{)} =\big{(}\nu^{\rm loop}\big{)}^{-1}F\big{(}z_{2}\big{)}+\bar{e}_{ 1}{}^{2s}F\big{(}\bar{z}_{2}\big{)}\ \,\] \[F\big{(}z_{2}\big{)}+\big{(}\nu^{\rm loop}\big{)}^{-1}\bar{e}_{ 2}{}^{2s}F\big{(}\bar{z}_{2}\big{)} =\big{(}\nu^{\rm loop}\big{)}^{-1}F\big{(}z_{3}\big{)}+\bar{e}_{ 2}{}^{2s}F\big{(}\bar{z}_{3}\big{)}\ \,\] \[F\big{(}z_{3}\big{)}+\big{(}\nu^{\rm loop}\big{)}^{-1}\bar{e}_{ 3}{}^{2s}F\big{(}\bar{z}_{3}\big{)} =\big{(}\nu^{\rm loop}\big{)}^{-1}F\big{(}z_{4}\big{)}+\bar{e}_{ 3}{}^{2s}F\big{(}\bar{z}_{4}\big{)}\ \,\] \[F\big{(}z_{4}\big{)}+\big{(}\nu^{\rm loop}\big{)}^{-1}\bar{e}_{ 4}{}^{2s}F\big{(}\bar{z}_{4}\big{)} =\big{(}\nu^{\rm loop}\big{)}^{-1}F\big{(}z_{1}\big{)}+\bar{e}_{ 4}{}^{2s}F\big{(}\bar{z}_{1}\big{)}\ \,\]
which implies, from the function \(h\), that,
\[h\bigl(x+1\bigr)+i\sqrt{\mathrm{Im}\bigl(\nu\bigr)}\,h\bigl(\overline{x+1}\bigr)=h\bigl(x\bigr)-i\sqrt{\mathrm{Im}\bigl(\nu\bigr)}\,h\bigl(\bar{x}\bigr)\ ,\]
from which we conclude that for the remaining case \(h\) is constant, so 1 is not an eigenvalue of \(P^{\rm loop}\) either, for parameters,
\[\lambda\ \,\] \[\nu\ \,\]
appearing in the definitions of the propagator matrices at, and away, from \(x_{c}\big{(}n\big{)}\).
With regard to Riemann boundary conditions, we impose boundary conditions at \(k_{L}\), and at \(k_{R}\), with the following. For the leftmost boundary \(k_{L}\), the fact that the function \(h\) attains a value of \(\mathcal{C}^{\rm loop}{\rm exp}\bigl(-i\frac{\pi}{3}\bigr)\), for real \(\mathcal{C}^{\rm loop}\), implies that it also satisfies the relation,
\[h\bigl(x_{L}\bigr)+i\sqrt{\mathrm{Im}\bigl(\nu\bigr)}\,h\bigl(\bar{x}_{L}\bigr)=h\bigl(x_{L}-1\bigr)-i\sqrt{\mathrm{Im}\bigl(\nu\bigr)}\,h\bigl(\overline{x_{L}-1}\bigr)\ ,\]
For the rightmost boundary \(k_{R}\), the fact that the function \(h\) attains a value of \(\bigl(\mathcal{C}^{\text{loop}}\bigr)^{\prime}{\rm exp}\bigl(-i\frac{\pi}{3}\bigr)\), for another real constant \(\bigl(\mathcal{C}^{\text{loop}}\bigr)^{\prime}\), implies that it also satisfies the relation,
\[h\bigl(x_{R}\bigr)+i\sqrt{\mathrm{Im}(\nu)}\,h\bigl(\bar{x}_{R}\bigr)=h\bigl(x_{R}-1\bigr)-i\sqrt{\mathrm{Im}(\nu)}\,h\bigl(\overline{x_{R}-1}\bigr)\ .\]
To conclude the argument, we show that the eigenvalues of the spectrum are distinct. To show that the eigenspace is one-dimensional, if \(f\) denotes an eigenvector over \(\bigl(\mathbf{H}\bigr)^{\mathbf{I}^{**}}\), then with a massive s-holomorphic extension \(h\) of \(f\) (as discussed above), over \(\mathbf{I}_{\frac{1}{2}}^{**}\cup\mathbf{I}_{0,1}^{**}\),
\[\bigl(P^{\text{loop}}h\bigr)\bigl(k\bigr)\equiv\frac{\lambda^{-2}}{\sqrt{3}}h\bigl(k-1\bigr)+2h\bigl(k\bigr)+\frac{\lambda^{2}}{\sqrt{3}}h\bigl(k+1\bigr)+\frac{1}{\sqrt{3}}h\bigl(\overline{k-1}\bigr)-\sqrt{3}h\bigl(\bar{k}\bigr)+\frac{1}{\sqrt{3}}h\bigl(\overline{k+1}\bigr)\ ,\]
\[\bigl(P^{\text{loop}}h\bigr)\bigl(k_{R}\bigr)\equiv\frac{\lambda^{-2}}{\sqrt{3}}h\bigl(k_{R}-1\bigr)+\bigl(1+\frac{1}{\sqrt{3}}\bigr)h\bigl(k_{R}\bigr)+\frac{1}{\sqrt{3}}h\bigl(\overline{k_{R}-1}\bigr)+\bigl(\lambda^{-2}+\frac{\lambda^{2}}{\sqrt{3}}\bigr)h\bigl(\bar{k}_{R}\bigr)\ ,\]
one can solve for \(h\bigl(x+1\bigr)\). From the expression for \(h\) obtained from the massive s-holomorphic equations that are propagated with \(P^{\text{loop}}\) above, we deduce that the eigenspace is one-dimensional, from which we conclude the argument.
### Transfer matrices
#### 2.2.1 Ashkin-Teller model
Generator relations. For the loop \(\text{O}(n)\) and Ashkin-Teller models, to make use of the operator formalism one needs to introduce transfer matrices over each model's state space. In order to formulate the transfer matrices for these other models, with respective state spaces \(\Omega^{\rm AT}\) and \(\Omega^{\text{O}(n)}\) for the Ashkin-Teller and loop \(\text{O}\bigl(n\bigr)\) models, it suffices to demonstrate the existence of several bases for the Ashkin-Teller model, the first of which takes the form,
\[\mathcal{S}_{1}\equiv\text{span}\big{\{}e_{\tau(i)}\ \big{|}\ \tau\big{(}i\big{)} \equiv+1\big{\}}\ \,\]
corresponding to the spins at site \(i\) for which \(\tau\bigl(i\bigr)\equiv+1\), the second of which takes the form,
\[\mathcal{S}_{2}\equiv\text{span}\big{\{}e_{\tau(j)}\ \big{|}\ \tau\big{(}j\big{)} \equiv+1\big{\}}\ \,\]
corresponding to the spins at site \(j\) for which \(\tau\big{(}j\big{)}\equiv+1\), the third of which takes the form,
\[\mathcal{S}_{3}\equiv\text{span}\big{\{}e_{\tau^{\prime}(i)}\ \big{|}\ \tau^{ \prime}\big{(}i\big{)}\equiv+1\big{\}}\ \,\]
corresponding to the spins at site \(i\) for which \(\tau^{\prime}\big{(}i\big{)}\equiv+1\), and finally, the fourth of which takes the form,
\[\mathcal{S}_{4}\equiv\text{span}\big{\{}e_{\tau^{\prime}(j)}\ \big{|}\ \tau^{ \prime}\big{(}j\big{)}\equiv+1\big{\}}\ \.\]
corresponding to the spins at site \(j\) for which \(\tau^{\prime}\big{(}j\big{)}\equiv+1\). From \(\mathcal{S}_{1},\mathcal{S}_{2},\mathcal{S}_{3},\mathcal{S}_{4}\) above, observe,
\[\mathcal{S}_{1}\cap\mathcal{S}_{2}\equiv\text{span}\bigl\{e_{\tau(i)},e_{\tau(j)}\ \bigl|\ \tau\bigl(i\bigr)\tau\bigl(j\bigr)\equiv+1\bigr\}\ ,\] \[\mathcal{S}_{3}\cap\mathcal{S}_{4}\equiv\text{span}\bigl\{e_{\tau^{\prime}(i)},e_{\tau^{\prime}(j)}\ \bigl|\ \tau^{\prime}\bigl(i\bigr)\tau^{\prime}\bigl(j\bigr)\equiv+1\bigr\}\ ,\] \[\mathcal{S}_{1}\cap\mathcal{S}_{2}\cap\mathcal{S}_{3}\cap\mathcal{S}_{4}\equiv\text{span}\bigl\{e_{\tau(i)},e_{\tau(j)},e_{\tau^{\prime}(i)},e_{\tau^{\prime}(j)}\ \bigl|\ \tau(i)\tau(j)\tau^{\prime}\bigl(i\bigr)\tau^{\prime}\bigl(j\bigr)\equiv+1\bigr\}\ .\]
Moreover,
\[\dim_{\Omega^{\rm AT}}\bigl(\mathcal{S}_{1}\bigr)=\dim_{\Omega^{\rm AT}}\bigl(\mathcal{S}_{2}\bigr)=\dim_{\Omega^{\rm AT}}\bigl(\mathcal{S}_{3}\bigr)=\dim_{\Omega^{\rm AT}}\bigl(\mathcal{S}_{4}\bigr)=2^{|\mathbf{I}|}=2^{b-a}\ ,\] \[\dim_{\Omega^{\rm AT}}\Bigl(\bigcap_{1\leq i\leq 4}\mathcal{S}_{i}\Bigr)=2^{b-a+1}\ .\]
We define similar bases \(\mathcal{S}_{1}^{\prime},\cdots,\mathcal{S}_{4}^{\prime}\), spanning the remaining color of the coupled Potts models, below.
As a result, in a similar way to how the transfer matrix \(V\) is formulated for the Ising model, for two coupled Potts models in the Ashkin-Teller model, the transfer matrix takes the form,
\[V^{\rm AT}\equiv\big{(}V^{\rm AT,h}\big{)}^{\frac{1}{2}}V^{\rm AT,V}\big{(}V^ {\rm AT,h}\big{)}^{\frac{1}{2}}\ \,\]
under the assignment, for \(\tau\big{(}i\big{)}\equiv\tau_{i}\), \(\tau\big{(}i+1\big{)}\equiv\tau_{i+1}\), \(\tau^{\prime}\big{(}i\big{)}\equiv\tau^{\prime}_{i}\) and \(\tau^{\prime}\big{(}j\big{)}\equiv\tau^{\prime}_{j}\),
\[V^{\rm AT,V}_{\tau,\tau^{\prime}}\equiv V^{\rm AT,V}\equiv\begin{cases}\exp\Bigl[\sum\limits_{a\equiv i_{0}\sim\cdots\sim i_{n-1}\sim i_{n}\equiv b}\bigl[J\bigl(\tau_{i}\tau_{i+1}+\tau^{\prime}_{i}\tau^{\prime}_{i+1}\bigr)+U\bigl(\tau_{i}\tau_{i+1}\tau^{\prime}_{i}\tau^{\prime}_{i+1}\bigr)\bigr]\Bigr] & \text{if }\tau_{i_{0}}\equiv a\ ,\ \tau_{i_{n}}\equiv b\ ,\\ 0 & \text{otherwise}\ .\end{cases}\]
For the remaining component of the Ashkin-Teller transfer matrix, \(\bigl(V^{\rm AT,h}\bigr)^{\frac{1}{2}}\), one has,
\[\big{(}V^{\rm AT,h}_{\tau,\tau^{\prime}}\big{)}^{\frac{1}{2}}\equiv\big{(}V^ {\rm AT,h}\big{)}^{\frac{1}{2}}\ \,\]
which can further be decomposed as,
\[\bigl(V^{\rm AT,h}\bigr)^{\frac{1}{2}}\equiv\bigl(V^{\rm AT,h}_{J,\tau,\tau^{\prime}}+V^{\rm AT,h}_{U,\tau,\tau^{\prime}}\bigr)^{\frac{1}{2}}\equiv\bigl(V^{\rm AT,h}_{J,\tau,\tau^{\prime}}\bigr)^{\frac{1}{2}}+\bigl(V^{\rm AT,h}_{U,\tau,\tau^{\prime}}\bigr)^{\frac{1}{2}}\equiv\bigl(V^{\rm AT,h}_{J}\bigr)^{\frac{1}{2}}+\bigl(V^{\rm AT,h}_{U}\bigr)^{\frac{1}{2}}\ ,\]
the assignment,
\[V^{\rm AT,h}\equiv\begin{cases}\exp\Bigl[\sum\limits_{a\equiv i_{0}\sim\cdots\sim i_{n-1}\sim i_{n}\equiv b-1}\frac{J}{2}\bigl(\tau_{i}\tau_{i+1}+\tau^{\prime}_{i}\tau^{\prime}_{i+1}\bigr)+\sum\limits_{a\equiv i_{0}\sim\cdots\sim i_{n-1}\sim i_{n}\equiv b-1}\frac{U}{2}\bigl(\tau_{i}\tau_{i+1}\tau^{\prime}_{i}\tau^{\prime}_{i+1}\bigr)\Bigr] & \text{if }\tau\equiv\tau^{\prime}\ ,\\ 0 & \text{otherwise}\ .\end{cases}\]
With \(\big{(}V^{\rm AT,h}_{J}\big{)}^{\frac{1}{2}}+\big{(}V^{\rm AT,h}_{U}\big{)} ^{\frac{1}{2}}\) and \(V^{\rm AT,V}\), we prove the following proposition below. The generators for the Ashkin-Teller model share many similarities with the generators of the Clifford algebra which have been previously studied in [13].
**Proposition 3**: (_generator relations for the Ashkin-Teller model, from generator relations for the Ising model_, **Proposition 8**, [13]). The components of the Ashkin-Teller transfer matrix satisfy the relations,
\[V^{\rm AT,h}\equiv\exp\big{[}\ J\big{(}\sum\limits_{k\in\mathbf{ I}}p_{k}^{\rm AT}q_{k}^{\rm AT}+\big{(}p_{k}^{\rm AT}\big{)}^{\prime}\big{(}q_{k}^{ \rm AT}\big{)}^{\prime}\big{)}\big{]}+\exp\big{[}\ U\big{(}\sum\limits_{k\in\mathbf{ I}}p_{k}^{\rm AT}\big{(}p_{k}^{\rm AT}\big{)}^{\prime}q_{k}^{\rm AT}\big{(}q_{k}^{ \rm AT}\big{)}^{\prime}\big{)}\big{]}\ \,\] \[V^{\rm AT,V}\equiv\mathscr{P}\bigg{(}\exp\big{[}\ J^{*}\big{(}\sum \limits_{k\in\mathbf{I}}p_{k-\frac{1}{2}}^{\rm AT}q_{k-\frac{1}{2}}^{\rm AT}+ \big{(}p_{k-\frac{1}{2}}^{\rm AT}\big{)}^{\prime}\big{(}q_{k-\frac{1}{2}}^{ \rm AT}\big{)}^{\prime}\big{)}\big{]}+\exp\big{[}\ U^{*}\big{(}\sum\limits_{k \in\mathbf{I}}p_{k-\frac{1}{2}}^{\rm AT}\big{(}p_{k-\frac{1}{2}}^{\rm AT} \big{)}^{\prime}q_{k-\frac{1}{2}}^{\rm AT}\big{(}q_{k-\frac{1}{2}}^{\rm AT} \big{)}^{\prime}\big{)}\big{]}\bigg{)}\ \,\]
for generators \(p_{k}^{\rm AT}\), \(\big{(}p_{k}^{\rm AT}\big{)}^{\prime}\), \(q_{k}^{\rm AT}\), and \(\big{(}q_{k}^{\rm AT}\big{)}^{\prime}\), dual couplings \(\big{(}J^{*},U^{*}\big{)}\), obtained from \(\big{(}J,U\big{)}\) under the relation,
\[\frac{\exp\big{(}-2J+2U\big{)}-1}{\exp\big{(}-2J^{*}+2U^{*}\big{)}-1}=\exp \big{(}2U\big{)}\text{sinh}\big{(}2J\big{)}=\frac{1}{\exp\big{(}2U^{*}\big{)} \text{sinh}\big{(}2J^{*}\big{)}}\ \,\]
and the prefactor,
\[\mathscr{P}\equiv\exp\Bigl[\bigl(\exp\bigl[U^{*},J,J^{*}\bigr]-U\bigr)\Bigl(\sum_{k\in\mathbf{I}}p_{k}^{\rm AT}\bigl(p_{k}^{\rm AT}\bigr)^{\prime}q_{k}^{\rm AT}\bigl(q_{k}^{\rm AT}\bigr)^{\prime}\Bigr)\Bigr]\ .\]
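Before turning to the proof, a quick numerical check of the duality relation above (an illustrative sketch, not part of the argument): at the self-dual point \(J\equiv U\equiv\frac{1}{4}\log(3)\) one has \(\exp(2U)\sinh(2J)=1\), so the choice \((J^{*},U^{*})\equiv(J,U)\) closes the chain of equalities.

```python
# Self-duality check: exp(2U) sinh(2J) = 1 at J = U = (1/4) log 3.
import numpy as np

J = U = 0.25 * np.log(3.0)
val = np.exp(2 * U) * np.sinh(2 * J)
print(np.isclose(val, 1.0))        # True
print(np.isclose(val, 1.0 / val))  # True: with (J*, U*) = (J, U) both sides agree
```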
_Proof of Proposition 3._ Under the bases \(e_{\tau(i)}\), \(e_{\tau^{\prime}(i)}\), \(e_{\tau^{\prime}(j)}\) and \(e_{\tau(j)}\), observe,
\[i\biggl{(}\bigl{(}p_{k}^{\rm AT}\bigl{(}p_{k}^{\rm AT}\bigr{)}^{\prime} \bigr{)}^{\prime}\frac{e_{\tau(i)}e_{\tau(j)}}{i}+\bigl{(}q_{k}^{\rm AT}\bigl{(} q_{k}^{\rm AT}\bigr{)}^{\prime}\bigr{)}^{\prime}\frac{e_{\tau^{\prime}(i)}e_{ \tau^{\prime}(j)}}{i}+\bigl{(}p_{k}^{\rm AT}\bigl{(}p_{k}^{\rm AT}\bigr{)}^{ \prime}q_{k}^{\rm AT}\bigl{(}q_{k}^{\rm AT}\bigr{)}^{\prime}\bigr{)}\frac{e_{ \tau(i)}e_{\tau(j)}e_{\tau^{\prime}(i)}e_{\tau^{\prime}(j)}}{i}\biggr{)}\ \,\]
can be expressed as,
\[\bigl{(}p_{k+\frac{1}{2}}^{\rm AT}\bigl{(}p_{k+\frac{1}{2}}^{\rm AT}\bigr{)} ^{\prime}\bigr{)}e_{\tau(i)}e_{\tau(j)}+\bigl{(}q_{k-\frac{1}{2}}^{\rm AT} \bigl{(}q_{k-\frac{1}{2}}^{\rm AT}\bigr{)}^{\prime}\bigr{)}e_{\tau^{\prime}(i) }e_{\tau^{\prime}(j)}+\bigl{(}p_{k+\frac{1}{2}}^{\rm AT}\bigl{(}p_{k+\frac{1} {2}}^{\rm AT}\bigr{)}^{\prime}q_{k-\frac{1}{2}}^{\rm AT}\bigl{(}q_{k-\frac{1 }{2}}^{\rm AT}\bigr{)}^{\prime}\bigr{)}e_{\tau(i)}e_{\tau^{\prime}(j)}\ \,\]
which we write, in shorthand, as,
\[\bigl{(}p_{k+\frac{1}{2}}^{\rm AT}\bigl{(}p_{k+\frac{1}{2}}^{\rm AT}\bigr{)} ^{\prime}\bigr{)}e_{\tau}+\bigl{(}q_{k-\frac{1}{2}}^{\rm AT}\bigl{(}q_{k- \frac{1}{2}}^{\rm AT}\bigr{)}^{\prime}\bigr{)}e_{\tau^{\prime}}+\bigl{(}p_{k +\frac{1}{2}}^{\rm AT}\bigl{(}p_{k+\frac{1}{2}}^{\rm AT}\bigr{)}^{\prime}q_{ k-\frac{1}{2}}^{\rm AT}\bigl{(}q_{k-\frac{1}{2}}^{\rm AT}\bigr{)}^{\prime} \bigr{)}e_{\tau}e_{\tau^{\prime}}\ \,\]
for,
\[e_{\tau}\equiv e_{\tau(i)}e_{\tau(j)}\equiv\prod_{k\in\{i,j\}}e _{\tau_{k}}\ \,\] \[e_{\tau^{\prime}}\equiv e_{\tau^{\prime}(i)}e_{\tau^{\prime}(j)} \equiv\prod_{k\in\{i,j\}}e_{\tau_{k}^{\prime}}\ \,\] \[e_{\tau}e_{\tau^{\prime}}\equiv\bigl{(}e_{\tau(i)}e_{\tau(j)} \bigr{)}\ \bigl{(}e_{\tau^{\prime}(i)}e_{\tau^{\prime}(j)}\bigr{)}\equiv\bigl{(}\prod_{k \in\{i,j\}}e_{\tau_{k}}\bigr{)}\ \bigl{(}\prod_{z\in\{i,j\}}e_{\tau_{z}^{\prime}}\bigr{)}\equiv \prod_{\begin{subarray}{c}k\in\{i,j\}\\ z\in\{i,j\}\end{subarray}}e_{\tau_{k}}e_{\tau_{z}^{\prime}}\ \.\]
Similarly, under the action of other Ashkin-Teller generators, from the imaginary product of generators,
\[i\biggl{(}\bigl{(}p_{k-\frac{1}{2}}^{\rm AT}\bigl{(}p_{k-\frac{1}{2}}^{\rm AT }\bigr{)}^{\prime}\bigr{)}\frac{e_{\tau(i)}e_{\tau(j)}}{i}+\bigl{(}q_{k+\frac {1}{2}}^{\rm AT}\bigl{(}q_{k+\frac{1}{2}}^{\rm AT}\bigr{)}^{\prime}\bigr{)} \frac{e_{\tau^{\prime}(i)}e_{\tau^{\prime}(j)}}{i}+\bigl{(}p_{k-\frac{1}{2}}^{ \rm AT}\bigl{(}p_{k-\frac{1}{2}}^{\rm AT}\bigr{)}^{\prime}q_{k+\frac{1}{2}}^{ \rm AT}\bigl{(}q_{k+\frac{1}{2}}^{\rm AT}\bigr{)}^{\prime}\bigr{)}\frac{e_{ \tau(i)}e_{\tau(j)}e_{\tau^{\prime}(i)}e_{\tau^{\prime}(j)}}{i}\biggr{)}\ \,\]
observe,
\[i\biggl(\bigl(p_{k}^{\rm AT}\bigl(p_{k}^{\rm AT}\bigr)^{\prime}\bigr)\frac{e_{(\tau(i))^{\prime}}e_{(\tau(j))^{\prime}}}{i}+\bigl(q_{k-1}^{\rm AT}\bigl(q_{k-1}^{\rm AT}\bigr)^{\prime}\bigr)\frac{e_{(\tau^{\prime}(i))^{\prime}}e_{(\tau^{\prime}(j))^{\prime}}}{i}+\bigl(p_{k}^{\rm AT}\bigl(p_{k}^{\rm AT}\bigr)^{\prime}q_{k-1}^{\rm AT}\bigl(q_{k-1}^{\rm AT}\bigr)^{\prime}\bigr)\frac{e_{(\tau(i))^{\prime}}e_{(\tau(j))^{\prime}}e_{(\tau^{\prime}(i))^{\prime}}e_{(\tau^{\prime}(j))^{\prime}}}{i}\biggr)\ ,\]
in terms of the basis for each color of the Ashkin-Teller model. Besides the basis spanned by \(\mathcal{S}_{1}\), the remaining dual basis takes the form,
\[\mathcal{S}_{1}^{\prime}\equiv\mathrm{span}\bigl\{\ e_{(\tau(i))^{\prime}}\ \bigl|\ \tau\bigl(i\bigr)=-1\bigr\}\ ,\]
corresponding to the dual subspace from that which is spanned by the basis introduced in the previous section for \(\mathcal{S}_{1}\), while the remaining dual subspaces take the form,
\[\mathcal{S}_{2}^{\prime}\equiv\mathrm{span}\bigl\{\ e_{(\tau^{\prime}(i))^{\prime}}\ \bigl|\ \tau^{\prime}\bigl(i\bigr)=-1\bigr\}\ ,\qquad \mathcal{S}_{3}^{\prime}\equiv\mathrm{span}\bigl\{\ e_{(\tau(j))^{\prime}}\ \bigl|\ \tau\bigl(j\bigr)=-1\bigr\}\ ,\qquad \mathcal{S}_{4}^{\prime}\equiv\mathrm{span}\bigl\{\ e_{(\tau^{\prime}(j))^{\prime}}\ \bigl|\ \tau^{\prime}\bigl(j\bigr)=-1\bigr\}\ ,\]
\[\bigl(\tau\bigr)^{\prime}\equiv\begin{cases}\tau\bigl(\sigma_{x}\bigr) & \text{if }x>k\ ,\\ -\tau\bigl(x\bigr) & \text{if }x<k\ ,\end{cases}\]
corresponding to flipping spins \(\tau\) at site \(j\), and,
\[\big{(}\tau^{\prime}\big{)}^{\prime}\equiv\ \ \left\{\begin{array}{ll}\tau^{ \prime}\big{(}\sigma_{x}\big{)}&\mbox{ if }x>k\,\\ -\tau^{\prime}\big{(}x\big{)}&\mbox{ if }x<k\ \ \.\end{array}\right.\]
corresponding to flipping spins \(\tau^{\prime}\) at site \(j\).
On the other hand, to demonstrate that the remaining identity holds for \(V^{\mathrm{AT},h}\), under a choice of suitably defined parameters \(J^{*}\) and \(U^{*}\) for which,
\[\frac{\exp\bigl{(}-2J+2U\bigr{)}-1}{\exp\bigl{(}-2J^{*}+2U^{*}\bigr{)}-1}=\exp \bigl{(}2U\bigr{)}\mathrm{sinh}\bigl{(}2J\bigr{)}=\frac{1}{\exp\bigl{(}2U^{*} \bigr{)}\mathrm{sinh}\bigl{(}2J^{*}\bigr{)}}\ \,\]
write,
under the duality relation for the Ashkin-Teller model, captured by the correspondence \(\big{(}J,U\big{)}\longleftrightarrow\big{(}J^{*},U^{*}\big{)}\),
\[\exp\biggl{[}J\big{(}\sum_{k\in\mathbf{I}}\!\!p_{k}^{\mathrm{AT} }q_{k}^{\mathrm{AT}}+\big{(}p_{k}^{\mathrm{AT}}\big{)}^{\prime}\big{(}q_{k}^{ \mathrm{AT}}\big{)}^{\prime}\bigr{)}+\biggl{(}\exp\biggl{[}\biggl{|}\mathrm{ log}\bigl{|}\exp\bigl{(}2U^{*}\bigr{)}\mathrm{sinh}\bigl{(}2J^{*}\bigr{)}\bigr{|}^{-1}- \mathrm{log}\bigl{|}\mathrm{sinh}\bigl{(}2J\bigr{)}\bigr{|}\biggr{|}\biggr{]} \biggr{)}\times\cdots\\ \big{(}\sum_{k\in\mathbf{I}}\!\!p_{k}^{\mathrm{AT}}\big{(}p_{k}^{ \mathrm{AT}}\big{)}^{\prime}q_{k}^{\mathrm{AT}}\big{(}q_{k}^{\mathrm{AT}} \big{)}^{\prime}\bigr{)}\biggr{]}e_{\tau(i)}\ \,\]
from the fact that the duality relation implies,
\[U^{*}\longleftrightarrow U\equiv\exp\biggl{[}\biggl{|}\mathrm{log}\bigl{(} \bigl{|}\exp\bigl{(}2U^{*}\bigr{)}\mathrm{sinh}\bigl{(}2J^{*}\bigr{)}\bigr{|} ^{-1}\bigl{|}\mathrm{sinh}\bigl{(}2J\bigr{)}\bigr{|}^{-1}\bigr{)}\biggr{|} \biggr{]}\ \.\]
Rearranging, under the basis \(e_{\tau(i)}\), multiplying the expression by,
\[\mathcal{I}\equiv\frac{\exp\bigl{[}U\big{(}\sum_{k\in\mathbf{I}}\!\!p_{k}^{ \mathrm{AT}}\big{(}p_{k}^{\mathrm{AT}}\big{)}^{\prime}q_{k}^{\mathrm{AT}} \big{(}q_{k}^{\mathrm{AT}}\big{)}^{\prime}\bigr{)}\bigr{]}}{\exp\bigl{[}U \big{(}\sum_{k\in\mathbf{I}}\!\!p_{k}^{\mathrm{AT}}\big{(}p_{k}^{\mathrm{AT}} \big{)}^{\prime}q_{k}^{\mathrm{AT}}\big{(}q_{k}^{\mathrm{AT}}\big{)}^{\prime }\big{)}\bigr{]}}\ \,\]
yields,
\[\mathcal{I}\ \exp\bigl{[}J\sum_{k\in\mathbf{I}}\!\!p_{k}^{\mathrm{AT} }q_{k}^{\mathrm{AT}}\big{(}p_{k}^{\mathrm{AT}}\big{)}^{\prime}\big{(}q_{k}^{ \mathrm{AT}}\big{)}^{\prime}\bigr{]}\ \ \exp\biggl{[}\biggl{(}\exp\biggl{[}\biggl{|}\mathrm{ log}\bigl{|}\exp\bigl{(}2U^{*}\bigr{)}\mathrm{sinh}\bigl{(}2J^{*}\bigr{)}\bigr{|}^{-1}- \mathrm{log}\bigl{|}\mathrm{sinh}\bigl{(}2J\bigr{)}\bigr{|}\biggr{|}\biggr{]} \biggr{)}\times\cdots\\ \big{(}\sum_{k\in\mathbf{I}}\!\!p_{k}^{\mathrm{AT}}\big{(}p_{k}^{ \mathrm{AT}}\big{)}^{\prime}q_{k}^{\mathrm{AT}}\big{(}q_{k}^{\mathrm{AT}} \big{)}^{\prime}\big{)}\biggr{]}\ \.\]
Concluding, for \(V^{\mathrm{AT},V}\), the desired expression preceding the exponential after the duality transformation is,
\[\exp\biggl{[}\ \biggl{[}\biggl{(}\exp\biggl{[}\biggl{|}\mathrm{log}\bigl{|} \exp\bigl{(}2U^{*}\bigr{)}\mathrm{sinh}\bigl{(}2J^{*}\bigr{)}\bigr{|}^{-1}- \mathrm{log}\bigl{|}\mathrm{sinh}\bigl{(}2J\bigr{)}\bigr{|}\biggr{|}\biggr{]} -U\biggr{]}\biggr{)}\big{(}\sum_{k\in\mathbf{I}}\!\!p_{k}^{\mathrm{AT}}\big{(}p_ {k}^{\mathrm{AT}}\big{)}^{\prime}q_{k}^{\mathrm{AT}}\big{(}q_{k}^{\mathrm{AT}} \big{)}^{\prime}\bigr{)}\biggr{]}\ \,\]
which is equivalent to,
\[\exp\bigl{[}\bigl{(}\exp\bigl{[}U^{*},J,J^{*}\bigr{]}-U\bigr{)}\big{(}\sum_{k \in\mathbf{I}}\!\!p_{k}^{\mathrm{AT}}\big{(}p_{k}^{\mathrm{AT}}\big{)}^{\prime }q_{k}^{\mathrm{AT}}\big{(}q_{k}^{\mathrm{AT}}\big{)}^{\prime}\bigr{)}\bigr{]}\ \,\]
from which we conclude the argument. \(\qed\)
Induced rotations. Under the correspondence \(\bigl(J,U\bigr)\longleftrightarrow\bigl(J^{*},U^{*}\bigr)\), below we state a result for conjugation by \(V^{\rm AT,h}\).
**Lemma 7** (_Ashkin-Teller transfer matrix conjugation_, **Lemma** in _3.2_, [13]). The action,
\[\bigl(V^{\rm AT,h}\bigr)^{-\frac{1}{2}}\cdot p_{k}^{\rm AT}\cdot\bigl(V^{\rm AT,h}\bigr)^{\frac{1}{2}}=cp_{k}^{\rm AT}-isq_{k}^{\rm AT}\ ,\] \[\bigl(V^{\rm AT,h}\bigr)^{-\frac{1}{2}}\cdot q_{k}^{\rm AT}\cdot\bigl(V^{\rm AT,h}\bigr)^{\frac{1}{2}}=isp_{k}^{\rm AT}+cq_{k}^{\rm AT}\ ,\]
under the composition operator, \(\cdot\), \(V^{\rm AT,h}\) takes the form above, while the action,
\[\bigl(V^{\rm AT,V}\bigr)^{-1}\cdot p_{k}^{\rm AT}\cdot\bigl(V^{\rm AT,V}\bigr)=\frac{C}{S}p_{k}^{\rm AT}+\frac{i}{S}q_{k+1}^{\rm AT}\ ,\tag{*}\] \[\bigl(V^{\rm AT,V}\bigr)^{-1}\cdot q_{k}^{\rm AT}\cdot\bigl(V^{\rm AT,V}\bigr)=-\frac{i}{S}p_{k-1}^{\rm AT}+\frac{C}{S}q_{k}^{\rm AT}\ ,\tag{**}\]
under the composition operator \(\cdot\), \(V^{\rm AT,V}\) takes the form above, where _(*)_ holds for \(k\neq k_{R}\), and where _(**)_ holds for \(k\neq k_{L}\). Additionally,
\[\bigl(V^{\rm AT,V}\bigr)^{-1}\cdot p_{k}^{\rm AT}\cdot V^{\rm AT,V}=p_{k}^{\rm AT}\ ,\tag{***}\] \[\bigl(V^{\rm AT,V}\bigr)^{-1}\cdot q_{k}^{\rm AT}\cdot V^{\rm AT,V}=q_{k}^{\rm AT}\ ,\tag{****}\]
where _(***)_ holds for \(k\equiv k_{R}\), and where _(****)_ holds for \(k\equiv k_{L}\).
For the arguments of the item above, we must introduce additional structure beyond that of the Ashkin-Teller generators, which is given by the following. In particular, for a subspace, \(\mathcal{W}\), from the set of all endomorphisms of \(\Omega^{\rm AT}\), conjugation by an induced rotation of the subspace, which we denote with a transfer matrix \(V\), is given by,
\[T_{V}:\mathcal{W}\longrightarrow\mathcal{W}\ \.\]
From \(T_{V}\) above, in the item below, we state another result with respect to \(\cdot\).
**Lemma 8** (_commutation rule_, **Lemma 9**, [13]). For two other maps, the identities,
\[T_{V}\cdot R=R\cdot T_{V}\ \,\] \[T_{V}\cdot J=J\cdot T_{V}\ \,\]
hold for the rotation, where the maps \(R\) and \(J\) satisfy,
\[R\bigl(p_{k}^{\rm AT}\bigr)=iq_{a+b-k}^{\rm AT}\ ,\] \[R\bigl(q_{k}^{\rm AT}\bigr)=-ip_{a+b-k}^{\rm AT}\ ,\] \[J\bigl(p_{k}^{\rm AT}\bigr)=ip_{k}^{\rm AT}\ ,\] \[J\bigl(q_{k}^{\rm AT}\bigr)=-iq_{k}^{\rm AT}\ ,\]
for the bases \(\psi_{x}\) and \(\bar{\psi_{x}}\) spanning,
\[\mathcal{V}_{\psi_{x}}=\big{\{}x\ \big{|}\ \psi_{x}\in\mathcal{V} \big{\}}\ \,\] \[\mathcal{V}_{\bar{\psi_{x}}}=\big{\{}x\ \big{|}\ \bar{\psi_{x}}\in\mathcal{V} \big{\}}\ \,\]
which have the following images under \(R\) and \(J\),
\[R\bigl(\psi_{x}\bigr)=\bar{\psi}_{a+b-x}\ ,\] \[R\bigl(\bar{\psi}_{x}\bigr)=\psi_{a+b-x}\ ,\] \[J\bigl(\psi_{x}\bigr)=-\bar{\psi}_{x}\ ,\] \[J\bigl(\bar{\psi}_{x}\bigr)=\psi_{x}\ ,\]
for the elements,
\[\psi_{x}\equiv\frac{i}{\sqrt{2}}\bigl(p_{x}^{\rm AT}+q_{x}^{\rm AT}\bigr)\ ,\qquad \bar{\psi}_{x}\equiv\frac{1}{\sqrt{2}}\bigl(p_{x}^{\rm AT}-q_{x}^{\rm AT}\bigr)\ .\]
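A short symbolic verification of the images of \(\psi_{x}\) and \(\bar{\psi}_{x}\) under \(J\), using only \(J(p)=ip\) and \(J(q)=-iq\) from the statement (an illustrative sympy sketch; note the sign \(J(\psi_{x})=-\bar{\psi}_{x}\) that the computation produces):

```python
# Verify J(psi) = -psibar and J(psibar) = psi from J(p) = i p, J(q) = -i q.
import sympy as sp

p, q = sp.symbols('p q', commutative=False)
psi = sp.I / sp.sqrt(2) * (p + q)
psibar = (p - q) / sp.sqrt(2)

def J(expr):
    # linear extension of p -> i p, q -> -i q
    return sp.expand(expr.subs({p: sp.I * p, q: -sp.I * q}, simultaneous=True))

print(sp.expand(J(psi) + psibar) == 0)   # True: J(psi) == -psibar
print(sp.expand(J(psibar) - psi) == 0)   # True: J(psibar) == psi
```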
_Proof of Lemma 8._ To demonstrate that the commutation rules hold, write,
\[\big{(}V^{\rm AT,h}\big{)}^{\frac{1}{2}}\cdot R\enspace,\] \[\big{(}V^{\rm AT,V}\big{)}^{\frac{1}{2}}\cdot R\enspace,\]
corresponding to conjugation by the first component of the Ashkin-Teller transfer matrix, and,
\[\big{(}V^{\rm AT,h}\big{)}^{-\frac{1}{2}}\cdot J\enspace,\] \[\big{(}V^{\rm AT,V}\big{)}^{-\frac{1}{2}}\cdot J\enspace,\]
corresponding to conjugation by the second component of the Ashkin-Teller transfer matrix.
Next, observe, from each expression,
\[\big{(}V^{\rm AT,h}\big{)}^{\frac{1}{2}}\cdot R\equiv\left(\exp \big{[}\ J\big{(}\sum_{k\in\mathbf{\Gamma}}p_{k}^{\rm AT}q_{k}^{\rm AT}+\big{(} p_{k}^{\rm AT}\big{)}^{\prime}\big{(}q_{k}^{\rm AT}\big{)}^{\prime}\big{)} \big{]}+\exp\big{[}\ U\big{(}\sum_{k\in\mathbf{\Gamma}}p_{k}^{\rm AT}\big{(}p_ {k}^{\rm AT}\big{)}^{\prime}q_{k}^{\rm AT}\big{(}q_{k}^{\rm AT}\big{)}^{ \prime}\big{)}\big{]}\right)^{\frac{1}{2}}\cdot R\] \[\equiv R\cdot\left(\exp\big{[}\ J\big{(}\sum_{k\in\mathbf{\Gamma} }p_{k}^{\rm AT}q_{k}^{\rm AT}+\big{(}p_{k}^{\rm AT}\big{)}^{\prime}\big{(}q_ {k}^{\rm AT}\big{)}^{\prime}\big{)}\big{]}+\exp\big{[}\ U\big{(}\sum_{k\in \mathbf{\Gamma}}p_{k}^{\rm AT}\big{(}p_{k}^{\rm AT}\big{)}^{\prime}q_{k}^{ \rm AT}\big{(}q_{k}^{\rm AT}\big{)}^{\prime}\big{)}\big{]}\right)^{\frac{1}{ 2}}\enspace,\] \[\big{(}V^{\rm AT,V}\big{)}^{\frac{1}{2}}\cdot R\equiv\left[ \mathscr{P}\bigg{(}\exp[\ J^{*}\big{(}\sum_{k\in\mathbf{\Gamma}}p_{k-\frac{1} {2}}^{\rm AT}q_{k-\frac{1}{2}}^{\rm AT}+\big{(}p_{k-\frac{1}{2}}^{\rm AT} \big{)}^{\prime}\big{(}q_{k-\frac{1}{2}}^{\rm AT}\big{)}^{\prime}\big{)} \big{]}+\exp[\ U^{*}\big{(}\sum_{k\in\mathbf{\Gamma}}p_{k-\frac{1}{2}}^{\rm AT }\big{(}p_{k-\frac{1}{2}}^{\rm AT}\big{)}^{\prime}q_{k-\frac{1}{2}}^{\rm AT }\big{(}q_{k-\frac{1}{2}}^{\rm AT}\big{)}^{\prime}\big{)}\big{]}\right]^{ \frac{1}{2}}\cdot R\] \[\equiv R\cdot\left[\mathscr{P}\bigg{(}\exp[\ J^{*}\big{(}\sum_{k \in\mathbf{\Gamma}}p_{k-\frac{1}{2}}^{\rm AT}q_{k-\frac{1}{2}}^{\rm AT}+ \big{(}p_{k-\frac{1}{2}}^{\rm AT}\big{)}^{\prime}\big{(}q_{k-\frac{1}{2}}^{ \rm AT}\big{)}^{\prime}\big{)}\big{]}+\exp[\ U^{*}\big{(}\sum_{k\in\mathbf{ \Gamma}}p_{k-\frac{1}{2}}^{\rm AT}\big{(}p_{k-\frac{1}{2}}^{\rm AT}\big{)}^{ \prime}q_{k-\frac{1}{2}}^{\rm AT}\big{(}q_{k-\frac{1}{2}}^{\rm AT}\big{)}^{ \prime}\big{)}\big{]}\right]^{\frac{1}{2}}\enspace,\]
and,
\[\big{(}V^{\rm AT,h}\big{)}^{-\frac{1}{2}}\cdot J\equiv\left(\exp \big{[}\ J\big{(}\sum_{k\in\mathbf{\Gamma}}p_{k}^{\rm AT}q_{k}^{\rm AT}+ \big{(}p_{k}^{\rm AT}\big{)}^{\prime}\big{(}q_{k}^{\rm AT}\big{)}^{\prime} \big{)}\big{]}+\exp[\ U\big{(}\sum_{k\in\mathbf{\Gamma}}p_{k}^{\rm AT}\big{(}p_ {k}^{\rm AT}\big{)}^{\prime}q_{k}^{\rm AT}\big{(}q_{k}^{\rm AT}\big{)}^{ \prime}\big{)}\big{]}\right)^{-\frac{1}{2}}\cdot J\] \[\equiv J\cdot\left(\exp[\ J\big{(}\sum_{k\in\mathbf{\Gamma}}p_{k}^{ \rm AT}q_{k}^{\rm AT}+\big{(}p_{k}^{\rm AT}\big{)}^{\prime}\big{(}q_{k}^{\rm AT }\big{)}^{\prime}\big{)}\big{]}+\exp[\ U\big{(}\sum_{k\in\mathbf{\Gamma}}p_{k} ^{\rm AT}\big{(}p_{k}^{\rm AT}\big{)}^{\prime}q_{k}^{\rm AT}\big{(}q_{k}^{ \rm AT}\big{)}^{\prime}\big{)}\big{]}\right)^{-\frac{1}{2}}\enspace,\] \[\equiv\bigg{[}\mathscr{P}\bigg{(}\exp[\ J^{*}\big{(}\sum_{k\in \mathbf{\Gamma}}p_{k-\frac{1}{2}}^{\rm AT}q_{k-\frac{1}{2}}^{\rm AT}+\big{(}p_{k- \frac{1}{2}}^{\rm AT}\big{)}^{\prime}\big{(}q_{k-\frac{1}{2}}^{\rm AT}\big{)}^{ \prime}\big{)}\big{]}+\exp[\ U^{*}\big{(}\sum_{k\in\mathbf{\Gamma}}p_{k- \frac{1}{2}}^{\rm AT}\big{(}p_{k-\frac{1}{2}}^{\rm AT}\big{)}^{\prime}q_{k- \frac{1}{2}}^{\rm AT}\big{(}q_{k-\frac{1}{2}}^{\rm AT}\big{)}^{\prime}\big{)} \big{]}\bigg{)}\bigg{]}^{-\frac{1}{2}}\cdot J\] \[\equiv J\cdot\left[\mathscr{P}\bigg{(}\exp[\ J^{*}\big{(}\sum_{k\in \mathbf{\Gamma}}p_{k-\frac{1}{2}}^{\rm AT}q_{k-\frac{1}{2}}^{\rm AT}+\big{(}p_{k- \frac{1}{2}}^{\rm AT}\big{)}^{\prime}\big{(}q_{k-\frac{1}{2}}^{\rm AT}\big{)}^{ \prime}\big{)}\big{]}+\exp[\ U^{*}\big{(}\sum_{k\in\mathbf{\Gamma}}p_{k- \frac{1}{2}}^{\rm AT}\big{(}p_{k-\frac{1}{2}}^{\rm AT}\big{)}^{\prime}q_{k- \frac{1}{2}}^{\rm AT}\big{(}q_{k-\frac{1}{2}}^{\rm AT}\big{)}^{\prime}\big{)} \big{]}\right)\bigg{]}^{-\frac{1}{2}}\enspace.\]
Hence, as desired,
\[\left(V^{\text{AT},h}\right)^{\frac{1}{2}}\cdot R=R\cdot\left(V^{ \text{AT},h}\right)^{\frac{1}{2}}\ \,\] \[\left(V^{\text{AT},V}\right)^{-\frac{1}{2}}\cdot R=R\cdot\left(V^{ \text{AT},V}\right)^{-\frac{1}{2}}\ \,\]
and,
\[\left(V^{\text{AT},h}\right)^{\frac{1}{2}}\cdot J=J\cdot\left(V^{ \text{AT},h}\right)^{\frac{1}{2}}\ \,\] \[\left(V^{\text{AT},V}\right)^{-\frac{1}{2}}\cdot J=J\cdot\left(V^{ \text{AT},V}\right)^{-\frac{1}{2}}\ \,\]
from which we conclude the argument.
_Proof of Lemma 7_. By direct computation, write,
\[\left(\left(V^{\text{AT},h}\right)^{-\frac{1}{2}}\cdot p_{k}^{\text{AT}} \right)\cdot\left(V^{\text{AT},h}\right)^{\frac{1}{2}}\]
is equivalent to,
\[\left(\big{[}\ J\big{(}\sum_{k\in\mathbf{\Gamma}^{\prime}}p_{k- \frac{1}{2}}^{\text{AT}}q_{k-\frac{1}{2}}^{\text{AT}}+\big{(}p_{k-\frac{1}{2} }^{\text{AT}}\big{)}^{\prime}\big{(}q_{k-\frac{1}{2}}^{\text{AT}}\big{)}^{ \prime}\big{)}\big{]}+\ \big{[}\ U\big{(}\sum_{k\in\mathbf{\Gamma}^{\prime}}p_{k- \frac{1}{2}}^{\text{AT}}\big{(}p_{k-\frac{1}{2}}^{\text{AT}}\big{)}^{\prime} q_{k-\frac{1}{2}}^{\text{AT}}\big{(}q_{k-\frac{1}{2}}^{\text{AT}}\big{)}^{ \prime}\big{)}\big{]}\right)\cdot p_{k}^{\text{AT}}\cdot\left(V^{\text{AT},h} \right)^{\frac{1}{2}}\] \[\equiv\left(\big{[}\ \big{[}\ J\big{(}\sum_{k\in\mathbf{\Gamma}^{ \prime}}p_{k-\frac{1}{2}}^{\text{AT}}q_{k-\frac{1}{2}}^{\text{AT}}+\big{(}p_{ k-\frac{1}{2}}^{\text{AT}}\big{)}^{\prime}\big{(}q_{k-\frac{1}{2}}^{\text{AT}} \big{)}^{\prime}\big{)}\big{]}p_{k}^{\text{AT}}\big{]}-i\ \big{[}\ U\big{(}\sum_{k\in\mathbf{\Gamma}^{ \prime}}p_{k-\frac{1}{2}}^{\text{AT}}\big{(}p_{k-\frac{1}{2}}^{\text{AT}} \big{)}^{\prime}q_{k-\frac{1}{2}}^{\text{AT}}\big{(}q_{k-\frac{1}{2}}^{\text{AT }}\big{)}^{\prime}\big{)}\big{]}p_{k}^{\text{AT}}\big{]}\right)\cdot\left(V^{ \text{AT},h}\right)^{\frac{1}{2}}\] \[\equiv cp_{k}^{\text{AT}}-isq_{k}^{\text{AT}}\ \,\]
and,
\[\left(\big{(}V^{\text{AT},h}\big{)}^{-\frac{1}{2}}\cdot q_{k}^{\text{AT}} \right)\cdot\left(V^{\text{AT},h}\right)^{\frac{1}{2}}\]
is equivalent to,
\[\left(\big{[}\ J\big{(}\sum_{k\in\mathbf{\Gamma}}p_{k-\frac{1}{2 }}^{\text{AT}}q_{k-\frac{1}{2}}^{\text{AT}}+\big{(}p_{k-\frac{1}{2}}^{\text{AT }}\big{)}^{\prime}\big{(}q_{k-\frac{1}{2}}^{\text{AT}}\big{)}^{\prime}\big{)} \big{]}+\ \big{[}\ U\big{(}\sum_{k\in\mathbf{\Gamma}^{\prime}}p_{k-\frac{1}{2 }}^{\text{AT}}\big{(}p_{k-\frac{1}{2}}^{\text{AT}}\big{)}^{\prime}q_{k-\frac{1}{ 2}}^{\text{AT}}\big{(}q_{k-\frac{1}{2}}^{\text{AT}}\big{)}^{\prime}\big{)} \big{]}\right)\cdot q_{k}^{\text{AT}}\cdot\left(V^{\text{AT},h}\right)^{\frac{1} {2}}\] \[\equiv\left(\big{[}\ \big{[}\ J\big{(}\sum_{k\in\mathbf{\Gamma}^{ \prime}}p_{k-\frac{1}{2}}^{\text{AT}}q_{k-\frac{1}{2}}^{\text{AT}}+\big{(}p_{ k-\frac{1}{2}}^{\text{AT}}\big{)}^{\prime}\big{(}q_{k-\frac{1}{2}}^{\text{AT}} \big{)}^{\prime}\big{)}\big{]}q_{k}^{\text{AT}}\big{]}+\ \big{[}\ U\big{(}\sum_{k\in\mathbf{\Gamma}^{ \prime}}p_{k-\frac{1}{2}}^{\text{AT}}\big{(}p_{k-\frac{1}{2}}^{\text{AT}} \big{)}^{\prime}q_{k-\frac{1}{2}}^{\text{AT}}\big{(}q_{k-\frac{1}{2}}^{\text{AT }}\big{)}^{\prime}\big{)}\big{]}q_{k}^{\text{AT}}\big{]}\right)\cdot\left(V^{ \text{AT},h}\right)^{\frac{1}{2}}\] \[\equiv isp_{k}^{\text{AT}}+cq_{k}^{\text{AT}}\ \.\]
Proceeding, to show that the remaining identities hold, write,
\[\left(\big{(}V^{\text{AT},V}\big{)}^{-\frac{1}{2}}\cdot p_{k}^{\text{AT}}\right) \cdot\left(V^{\text{AT},V}\right)^{\frac{1}{2}}\ \,\]
which is equivalent to,
\[\left(\mathscr{P}\bigg{(}\big{[}\ J^{*}\big{(}\sum_{k\in\mathbf{ \Gamma}}p_{k-1}^{\text{AT}}q_{k-1}^{\text{AT}}+\big{(}p_{k-1}^{\text{AT}}\big{)}^ {\prime}\big{(}q_{k-1}^{\text{AT}}\big{)}^{\prime}\big{)}\big{]}+\big{[}\ U^{*}\big{(}\sum_{k\in\mathbf{\Gamma}}p_{k-1}^{\text{AT}} \big{(}p_{k-1}^{\text{AT}}\big{)}^{\prime}q_{k-1}^{\text{AT}}\big{(}q_{k-1}^{ \text{AT}}\big{)}^{\prime}\big{)}\big{]}\right)\cdot p_{k}^{\text{AT}}\cdot \left(V^{\text{AT},V}\right)^{\frac{1}{2}}\] \[\equiv\left(\mathscr{P}\bigg{(}\big{[}\ \big{[}J^{*}\big{(}\sum_{k\in\mathbf{ \Gamma}}p_{k-1}^{\text{AT}}q_{k-1}^{\text{AT}}+\big{(}p_{k-1}^{\text{AT}} \big{)}^{\prime}\big{(}q_{k-1}^{\text{AT}}\big{)}^{\prime}\big{)}\big{]}p_{k}^{ \text{AT}}\big{]}+\big{[}\ U^{*}\big{(}\sum_{k\in\mathbf{\Gamma}}p_{k-1}^{ \text{AT}}\big{(}p_{k-1}^{\text{AT}}\big{)}^{\prime}q_{k-1}^{\text{AT}} \big{(}q_{k-1}^{\text{AT}}\big{)}^{\prime}\big{)}\big{]}p_{k}^{\text{AT}} \big{]}\right)\cdot\left(V^{\text{AT},V}\right)^{\frac{1}{2}}\] \[\equiv\frac{C}{S}p_{k}^{\text{AT}}+\frac{i}{S}q_{k+1}^{\text{AT}}\ \,\]
and,
\[\left(\left(V^{\mathrm{AT},V}\right)^{-\frac{1}{2}}\cdot q_{k}^{\mathrm{AT}} \right)\cdot\left(V^{\mathrm{AT},V}\right)^{\frac{1}{2}}\enspace,\]
which is equivalent to,
\[\left(\mathscr{P}\bigg{(}\left[\right.J^{*}(\sum\limits_{k\in \mathbf{I}}p_{k-1}^{\mathrm{AT}}q_{k-1}^{\mathrm{AT}}+\left(p_{k-1}^{ \mathrm{AT}}\right)^{\prime}(q_{k-1}^{\mathrm{AT}})^{\prime})\right]+\left[ \right.U^{*}(\sum\limits_{k\in\mathbf{I}}p_{k-1}^{\mathrm{AT}}\left(p_{k-1}^{ \mathrm{AT}}\right)^{\prime}q_{k-1}^{\mathrm{AT}}\left(q_{k-1}^{\mathrm{AT}} \right)^{\prime})\right]\bigg{)}\cdot q_{k}^{\mathrm{AT}}\cdot\left(V^{ \mathrm{AT},V}\right)^{\frac{1}{2}}\] \[\equiv -\frac{i}{S}p_{k-1}^{\mathrm{AT}}+\frac{C}{S}q_{k}^{\mathrm{AT}}\enspace,\]
for,
\[\mathscr{P}\big{(}U^{*},J,J^{*}\big{)}\equiv\mathscr{P}\equiv\exp\bigl{[} \left(\exp\bigl{[}U^{*},J,J^{*}\bigr{]}-U\right)\bigl{(}\sum\limits_{k\in \mathbf{I}}p_{k}^{\mathrm{AT}}\bigl{(}p_{k}^{\mathrm{AT}}\bigr{)}^{\prime}q_{k }^{\mathrm{AT}}\bigl{(}q_{k}^{\mathrm{AT}}\bigr{)}^{\prime}\bigr{)}\bigr{]}\enspace,\]
from which we conclude the argument.
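Before proceeding, we record an informal numerical sketch of the first pair of relations in **Lemma 7**. In the simplest two-dimensional Clifford representation, with \(p\) and \(q\) realized as Pauli matrices and \(V^{\frac{1}{2}}\) as the exponential of a diagonal generator, the conjugation produces exactly the hyperbolic rotation \(cp-isq\), \(isp+cq\); the representation and the parameter below are illustrative assumptions, not the Ashkin-Teller data.

```python
import numpy as np

# Informal check of the Lemma 7 relations in a 2 x 2 toy representation:
# p = sigma_x, q = sigma_y, V^{1/2} = exp(a sigma_z) = diag(e^a, e^{-a}).
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)

a = 0.3                                   # illustrative parameter, not the AT data
V_half = np.diag([np.exp(a), np.exp(-a)]).astype(complex)
V_half_inv = np.diag([np.exp(-a), np.exp(a)]).astype(complex)
c, s = np.cosh(2 * a), np.sinh(2 * a)

# V^{-1/2} p V^{1/2} = c p - i s q  and  V^{-1/2} q V^{1/2} = i s p + c q.
assert np.allclose(V_half_inv @ sx @ V_half, c * sx - 1j * s * sy)
assert np.allclose(V_half_inv @ sy @ V_half, 1j * s * sx + c * sy)
```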
With the final item of the subsection, we obtain a conjugation representation for \(T_{V}\).
**Theorem** (_conjugation action of_ \(T_{V}\), **Theorem**_10_, [13]). For the map \(T_{V}\), there exists a change of basis on the Ashkin-Teller propagation matrix, away from the critical \(\frac{1}{4}\mathrm{log}\bigl{(}3\bigr{)}\) threshold, for which,
\[T_{V}\equiv\rho\cdot\bigl{(}P^{\mathrm{AT}}\bigr{)}^{\mathbf{C}}\cdot\rho^{- 1}\enspace,\]
for the maps,
\[\rho:\bigl{(}\mathbf{C}^{2}\bigr{)}^{\mathbf{I}^{\star}}\longrightarrow \mathcal{W}\enspace,\]
and the complexification,
\[\bigl{(}P^{\mathrm{AT}}\bigr{)}^{\mathbf{C}}:\bigl{(}\mathbf{R}^{2}\bigr{)}^ {\mathbf{I}^{\prime}}\longrightarrow\bigl{(}\mathbf{C}^{2}\bigr{)}^{\mathbf{ I}^{*}}\enspace.\]
_Proof of Theorem_. Under the basis \(\bigl{(}\psi_{x},\bar{\psi}_{x}\bigr{)}\) introduced earlier, for \(k\in\mathbf{I}^{*}\backslash\partial\mathbf{I}^{*}\),
\[T_{V}^{-1}\bigl{(}\psi_{k}\bigr{)}=\frac{C^{2}}{S}\psi_{k}-\bigl{(}\frac{1}{2 }+\frac{i}{2S}\bigr{)}\psi_{k-1}+\bigl{(}\frac{i}{2S}-\frac{1}{2}\bigr{)}\psi _{k+1}-C\bar{\psi}_{k}+\frac{C}{2S}\psi_{k-1}^{-}+\frac{C}{2S}\psi_{k+1}^{-}\enspace,\]
from which we observe,
\[T_{V}^{-1}\bigl{(}\bar{\psi}_{k}\bigr{)}=\frac{C^{2}}{S}\bar{\psi}_{k}-\bigl{(} \frac{1}{2}+\frac{i}{2S}\bigr{)}\psi_{k-1}^{-}+\bigl{(}\frac{i}{2S}-\frac{1}{2 }\bigr{)}\psi_{k+1}^{-}-C\psi_{k}+\frac{C}{2S}\psi_{k-1}+\frac{C}{2S}\psi_{k+1} \enspace,\]
yields a map for \(T_{V}^{-1}\bigl{(}\bar{\psi}_{k}\bigr{)}\), which satisfies,
\[T_{V}^{-1}\bigl{(}\psi_{k}\bigr{)}\cdot J\equiv J\cdot T_{V}^{-1}\bigl{(}\psi_ {k}\bigr{)}\enspace,\]
and,
\[T_{V}^{-1}\bigl{(}\bar{\psi}_{k}\bigr{)}\cdot J\equiv J\cdot T_{V}^{-1}\bigl{(} \bar{\psi}_{k}\bigr{)}\enspace,\]
by the commutation rules for \(J\).
For \(k\) on the left and right boundaries of the interval, namely \(k\in\partial\mathbf{I}^{\star}\) rather than \(k\in\mathbf{I}^{\star}\backslash\partial\mathbf{I}^{\star}\), to obtain the desired formula for the inverse mapping \(T_{V}^{-1}\), write,
\[T_{V}^{-1}\big{(}\psi_{k_{L}}\big{)}=\frac{C\big{(}S+C\big{)}}{2S}\psi_{k_{L}}+ i\frac{1+iS}{2S}\psi_{k_{L+1}}+\frac{-S\big{(}C+S\big{)}+i\big{(}C-S\big{)}}{2S} \psi_{k_{L}}^{-}+\frac{C}{2S}\psi_{k_{L+1}}^{-}\ \.\]
Similarly, for the rightmost endpoint, write,
\[T_{V}^{-1}\big{(}\psi_{k_{R}}\big{)}=\frac{C\big{(}S+C\big{)}}{2S}\psi_{k_{R}}+ i\frac{1+iS}{2S}\psi_{k_{R+1}}+\frac{-S\big{(}C+S\big{)}+i\big{(}C-S\big{)}}{2S} \psi_{k_{R}}^{-}+\frac{C}{2S}\psi_{k_{R+1}}^{-}\ \.\]
From each of the two expressions above, observe,
\[T_{V}^{-1}\big{(}\psi_{k_{L}}\big{)}\cdot J\equiv J\cdot T_{V}^{-1}\big{(}\psi _{k_{L}}\big{)}\ \,\]
\[T_{V}^{-1}\big{(}\psi_{k_{R}}\big{)}\cdot J\equiv J\cdot T_{V}^{-1}\big{(}\psi _{k_{R}}\big{)}\ \,\]
by the commutation rules for \(J\).
Altogether, by inspection, the action of \(T_{V}^{-1}\) on the basis elements analyzed above coincides with the coefficients of the complexification of the Ashkin-Teller propagation matrix away from \(\frac{1}{4}\mathrm{log}\big{(}3\big{)}\). Hence \(T_{V}\) and \(\big{(}P^{\mathrm{AT}}\big{)}^{\mathbf{C}}\) are conjugate, from which we conclude the argument.
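Informally, the content of the **Theorem** is that \(T_{V}\) and the complexified propagation matrix are similar, and hence share a spectrum. A minimal numerical sketch of this invariance, with random placeholder matrices standing in for \(\rho\) and \(\big{(}P^{\rm AT}\big{)}^{\bf C}\), is the following.

```python
import numpy as np

# Conjugate matrices T_V = rho P rho^{-1} share a spectrum; P and rho below are
# random placeholders for the complexified propagator and the change of basis.
rng = np.random.default_rng(1)
P = rng.random((4, 4))
rho = rng.random((4, 4)) + 4.0 * np.eye(4)     # generically invertible
T_V = rho @ P @ np.linalg.inv(rho)

# Equal characteristic polynomials, hence equal spectra.
assert np.allclose(np.poly(T_V), np.poly(P))
```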
From the results of this subsection, we now transition back to the loop model.
#### 2.2.2 Loop model
For the loop \(\mathrm{O}\big{(}n\big{)}\) model, the desired basis for the transfer matrix, from a construction due to Blote and Nienhuis [3], takes the form,
\[Z\equiv\sum_{\mathrm{configurations}}K^{\epsilon}n^{L}\ \,\]
corresponding to a sum over loop configurations weighted by \(K^{\epsilon}\), for real \(K\) raised to some arbitrary, strictly positive exponent \(\epsilon\) counting edges, and by \(n^{L}\), for the strictly positive, real number of loops \(L\), following an expansion of the partition function as given above. In terms of the number of occupied vertices in a loop configuration, the partition function above is equivalent to,
\[Z\equiv\sum_{\mathrm{configurations}}K^{N_{V}}n^{L}\ \,\]
corresponding to the number of vertices, \(N_{V}\), which are occupied, under the assumption,
\[\epsilon\approx N_{V}\ \.\]
To consolidate vertices which could belong to the same loop of the configuration, we say that two lines, \(k\) and \(m\), are connected if both intersect the finite volume. The same observation applies to more than two lines, in particular to an arbitrary number of lines of the loop configuration. Iteratively, one can define the partition function of the finite volume over \(\mathbf{H}\), with side length \(N+1\), in terms of the partition function of the finite volume over \(\mathbf{H}\), with side length \(N\), with,
\[Z_{\alpha}^{N+1}\equiv\sum_{\beta\in{\bf N}}\!\!T_{\alpha\beta}Z_{\beta}^{N}\ \,\]
for a connectivity parameter \(\alpha\). For the parameters,
\[\begin{array}{c}N_{v}^{\prime}\equiv N_{v}+n_{v}\ \,\\ N_{l}^{\prime}\equiv N_{l}+n_{l}\ \,\end{array}\]
from the expansion of the partition function over \(\beta\),
\[Z^{(N)}\equiv\sum_{\beta\in{\bf N}}Z_{\beta}^{N}\ \,\]
one can write,
\[Z_{\alpha}^{(N+1)}\equiv\sum_{S(N+1)\in G}\delta_{\alpha,\phi(S(N+1))}K^{N_{v}^{\prime}}n^{N_{l}^{\prime}}\ \,\]
and also,
\[Z_{\alpha}^{(N)}\equiv\sum_{S(N)\in G}K^{n_{v}}n^{N_{l}}\!\left(\sum_{g_{N}|S( N)}\delta_{\alpha,\phi(S(N))}K^{n_{v}}n^{n_{l}}\right)\ \,\]
corresponding to the delta function \(\delta\) for the parameters \(\alpha\) and the connectivity \(\phi\big{(}S\big{(}N+1\big{)}\big{)}\) of line \(N\), where the index of summation \(g_{N}|S(N)\) denotes the sum of graphs over line \(N\).
Hence, the partition function of the loop model can be expressed in terms of the transfer matrix, with,
\[Z_{\alpha}^{(N+1)}\equiv\sum_{\beta\in{\bf N}}\!\left(\bigg{(}\sum_{g_{N+1}| \beta}\!\!\delta_{\alpha,\psi(S(N))}K^{n_{v}}n^{n_{l}}\bigg{)}\!\sum_{S(N)\in G }\!\!K^{n_{v}}n^{N_{l}}\!\bigg{(}\sum_{g_{N}|S(N)}\delta_{\beta,\phi(S(N))}K^{ n_{v}}n^{n_{l}}\bigg{)}\right)\ \,\]
from the expansion,
\[Z_{\alpha}^{(N+1)}\equiv\sum_{\beta\in{\bf N}}\!\bigg{(}\sum_{g_{N+1}|\beta} \!\!\delta_{\alpha,\psi(S(N))}K^{n_{v}}n^{n_{l}}\bigg{)}\!Z_{\beta}^{(N)}\ \.\]
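Computationally, the recursion above is a matrix-vector iteration. A minimal sketch, in which the entries of \(T_{\alpha\beta}\) are random placeholders rather than the actual Blote-Nienhuis weights, is the following.

```python
import numpy as np

# The recursion Z^{(N+1)}_alpha = sum_beta T_{alpha beta} Z^{(N)}_beta as a
# matrix-vector iteration; T's entries are placeholders, not the actual weights.
rng = np.random.default_rng(0)
num_states = 4                            # hypothetical number of connectivity states
T = rng.random((num_states, num_states))  # stand-in for T_{alpha beta}
Z = np.ones(num_states)                   # Z^{(0)}, one entry per connectivity state

for _ in range(10):                       # grow the strip ten rows
    Z = T @ Z                             # Z^{(N+1)} = T Z^{(N)}

print(Z.sum())                            # Z^{(N)} = sum_beta Z^{(N)}_beta
```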
Generator relations. Equipped with the loop transfer matrix construction, we establish the following results, akin to **Proposition 3**, **Lemma 7**, **Lemma 8**, and the **Theorem** of _2.2.1_. Again, we return to the high-temperature expansion (HTE) for the loop O(1) model, in which,
\[{\bf P}_{\Lambda_{\bf H}}^{\rm loop,\xi}\big{[}\sigma\big{]}\stackrel{{\rm HTE}}{{\sim}}\frac{n^{k(\sigma)}x^{e(\sigma)}\exp\big{(}hr\big{(}\sigma\big{)}+h^{\prime}r^{\prime}\big{(}\sigma\big{)}\big{)}}{Z_{\Lambda_{\bf H}}^{\rm loop,\xi}\big{(}\sigma\big{)}}\stackrel{{\beta^{\rm loop}=\frac{1}{2}|\log x|}}{{\underset{n\equiv 1,h^{\prime}\equiv 0}{\longleftrightarrow}}}{\bf P}_{\Lambda_{{\bf Z}^{2}}}^{\rm Ising,\chi}\big{[}\sigma^{\rm Ising}\big{]}\equiv{\rm O}\big{(}1\big{)}\ {\rm measure}\ \,\]
introduce loop generators,
\[\frac{p_{u}^{\rm loop}}{\beta}\equiv hr\big{(}\sigma\big{)}\ \,\] \[\frac{q_{u}^{\rm loop}}{\beta}\equiv{\rm exp}\big{(}{\rm log} \big{(}x^{e(\sigma)}\big{)}\big{)}\ \,\]
because,
\[x^{e(\sigma)}\equiv\exp\bigl{(}\log\bigl{(}x^{e(\sigma)}\bigr{)}\bigr{)}\Rightarrow \exp\bigl{[}\log\bigl{(}x^{e(\sigma)}\bigr{)}+h^{\prime}r^{\prime}\bigl{(} \sigma\bigr{)}\bigr{]}\ \.\]
With the Blote-Nienhuis construction of the transfer matrix, state the following result.
**Proposition 3\({}^{*}\)** (_analogue of the Ashkin-Teller generator relation for the loop model_). For \(p_{u}^{\rm loop}\) and \(q_{u}^{\rm loop}\), one has the relations,
\[V^{{\rm loop},h}\equiv\exp\bigl{[}\beta^{\rm loop}p_{u}^{\rm loop}+\beta^{\rm loop }q_{u}^{\rm loop}\bigr{]}\equiv\exp\bigl{[}\beta^{\rm loop}\bigl{(}p_{u}^{\rm loop }+q_{u}^{\rm loop}\bigr{)}\bigr{]}\ \,\]
and,
\[V^{{\rm loop},V}\equiv\biggl{(}\exp\bigl{[}\bigl{(}\beta^{\rm loop}-\bigl{(}\beta^{\rm loop}\bigr{)}^{*}\bigr{)}\bigl{(}p_{u}^{\rm loop}+q_{u}^{\rm loop}\bigr{)}\bigr{]}\biggr{)}\exp\bigl{[}\bigl{(}\beta^{\rm loop}\bigr{)}^{*}p_{u+\frac{1}{2}}^{\rm loop}+\bigl{(}\beta^{\rm loop}\bigr{)}^{*}q_{u+\frac{1}{2}}^{\rm loop}\bigr{]}\ \,\]
where the dual loop temperature \(\bigl{(}\beta^{\rm loop}\bigr{)}^{*}\) is obtained from the correspondence,
\[\bigl{(}\beta^{\rm loop}\bigr{)}^{*}\longleftrightarrow\beta^{\rm loop}\ \,\]
with the relations,
\[\tanh\bigl{(}\bigl{(}\beta^{\rm loop}\bigr{)}^{*}\bigr{)}=\exp \bigl{(}-2\beta^{\rm loop}\bigr{)}\ \,\] \[S^{\rm loop}\equiv\sinh\bigl{(}2\bigl{(}\beta^{\rm loop} \bigr{)}^{*}\bigr{)}\ \.\]
To prove the **Proposition** above, one needs to introduce several subspaces, similar to those defined in _2.2.1_. We denote these subspaces with similar notation, namely \({\cal S}_{1}^{\rm loop}\equiv{\cal S}_{1},\cdots,{\cal S}_{4}^{\rm loop}\equiv {\cal S}_{4}\). The dual subspaces to each of the four loop subspaces are given by, respectively, \({\cal S}_{1}^{\prime},\cdots,{\cal S}_{4}^{\prime}\).
_Proof of Proposition 3\({}^{*}\)_. By direct computation, from the loop generators,
\[i\biggl{(}p_{u}^{\rm loop}\frac{e_{\tau(u)}}{i}+q_{u}^{\rm loop}\frac{e_{\tau^{\prime}(u)}}{i}\biggr{)}\equiv p_{u+\frac{1}{2}}^{\rm loop}e_{\tau^{\prime}(u)}+q_{u+\frac{1}{2}}^{\rm loop}e_{(\tau^{\prime}(u))^{\prime}}\ \,\]
corresponding to the generator relation for \(V^{{\rm loop},h}\), and,
\[i\biggl{(}p_{u-\frac{1}{2}}^{\rm loop}\frac{e_{\tau^{\prime}(u)}}{i}+q_{u+ \frac{1}{2}}^{\rm loop}\frac{(e_{\tau^{\prime}(u)})^{\prime}}{i}\biggr{)}\equiv p _{u}^{\rm loop}e_{\tau(u)}+q_{u-1}^{\rm loop}e_{\tau^{\prime}(u)}\ \,\]
corresponding to the generator relation for \(V^{{\rm loop},V}\). The bases for each loop subspace and its dual are obtained from the correspondence,
\[{\cal S}_{1}\stackrel{(\bullet)}{\longleftrightarrow}{\cal S}_{1}^{\prime}\ \.\]
Furthermore, along the lines of the argument of the previous section for **Proposition 3**, we work towards concluding the argument by observing,
\[\exp\bigl{[}\bigl{(}\beta^{\rm loop}\bigr{)}^{*}\bigl{(}p_{u+\frac{1}{2}}^{\rm loop}+q_{u+\frac{1}{2}}^{\rm loop}\bigr{)}\bigr{]}e_{\tau}=\cosh\bigl{(}\bigl{(}\beta^{\rm loop}\bigr{)}^{*}\bigr{)}e_{\tau}+\sinh\bigl{(}\bigl{(}\beta^{\rm loop}\bigr{)}^{*}\bigr{)}e_{\tau^{\prime}}\ \,\]
for the first identity for \(V^{{\rm loop},h}\), and also that the prefactor for the second identity takes the form,
\[\exp\bigl{[}\bigl{(}\beta^{\rm loop}-\bigl{(}\beta^{\rm loop}\bigr{)}^{*} \bigr{)}\bigl{(}p_{k}^{\rm loop}+q_{k}^{\rm loop}\bigr{)}\bigr{]}\ \,\]
corresponding to the second identity for \(V^{\text{loop},V}\) after performing similar computations as given in _2.2.1_, from which we conclude the argument.
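Numerically, the dual loop temperature in the **Proposition** above can be obtained by inverting the first relation, \(\big{(}\beta^{\rm loop}\big{)}^{*}=\mathrm{arctanh}\big{(}\exp\big{(}-2\beta^{\rm loop}\big{)}\big{)}\). A minimal sketch, with an arbitrary illustrative value of the inverse temperature:

```python
import numpy as np

# The duality tanh(beta*) = exp(-2 beta) inverted: beta* = arctanh(exp(-2 beta)).
beta = 0.7                                     # arbitrary illustrative value
beta_star = np.arctanh(np.exp(-2.0 * beta))    # dual loop temperature
S_loop = np.sinh(2.0 * beta_star)              # the quantity S^loop above

assert np.isclose(np.tanh(beta_star), np.exp(-2.0 * beta))   # round-trip check
print(beta_star, S_loop)
```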
Induced rotations. From the correspondence between \(\beta^{\text{loop}}\) and the dual inverse temperature \(\left(\beta^{\text{loop}}\right)^{*}\), below we state a conjugation result.
**Lemma 7\({}^{*}\)** (_loop transfer matrix conjugation, from Ashkin-Teller transfer matrix conjugation_). The action,
\[\left(V^{\text{loop},h}\right)^{-\frac{1}{2}}\cdot p_{u}^{\text{loop}}\cdot\left(V^{\text{loop},h}\right)^{\frac{1}{2}}=cp_{u}^{\text{loop}}-isq_{u}^{\text{loop}}\ \,\] \[\left(V^{\text{loop},h}\right)^{-\frac{1}{2}}\cdot q_{u}^{\text{loop}}\cdot\left(V^{\text{loop},h}\right)^{\frac{1}{2}}=isp_{u}^{\text{loop}}+cq_{u}^{\text{loop}}\ \,\]
takes the form above, under the composition \(\cdot\), for the Blote-Nienhuis construction of the loop transfer matrix,
\[T_{\alpha\beta}\equiv\underset{g_{N+1}\mid\beta}{\sum}\delta_{\alpha,\psi(S(N ))}K^{n_{v}}n^{n_{l}}\ \.\]
_Proof of Lemma 7\({}^{*}\)_. Perform similar computations to those used for arguing that **Lemma 7** holds, from _2.2.1_.
**Lemma 8\({}^{*}\)** (_loop commutation rule_). For the same maps \(R,J\) as defined in **Lemma 8**, the same identities hold, for the induced rotations.
_Proof of Lemma 8\({}^{*}\)_. Perform similar computations to those used for arguing that **Lemma 8** holds, from _2.2.1_.
**Theorem\({}^{*}\)** (_loop conjugation action of \(T_{V}\)_). For the loop propagator matrix, away from \(x_{c}\big{(}n\big{)}\), the map \(T_{V}\) admits the representation,
\[T_{V}\equiv\rho\cdot\left(P^{\text{loop}}\right)^{\mathbf{C}}\cdot\rho^{-1}\ \,\]
for the maps,
\[\rho:\left(\mathbf{C}^{2}\right)^{\mathbf{I}^{*}}\longrightarrow\mathcal{W}\ \,\]
and the complexification,
\[\left(P^{\text{loop}}\right)^{\mathbf{C}}:\left(\mathbf{R}^{2}\right)^{ \mathbf{I}^{*}}\longrightarrow\left(\mathbf{C}^{2}\right)^{\mathbf{I}^{*}}\ \.\]
_Proof of Theorem\({}^{*}\)_. Perform similar computations to those used for arguing that the **Theorem** holds, from _2.2.1_.
In the next section, we discuss Fock representations.
### Fock representation
For both models, we introduce a Fock representation of the algebra, obtained from a decomposition of its generators, with,
\[\mathcal{W}\equiv\mathcal{W}_{\text{cr}}\oplus\mathcal{W}_{\text{ann}}\ \,\]
and with,
\[\mathcal{W}^{\prime}\equiv\mathcal{W}^{\prime}_{\text{cr}}\oplus\mathcal{W}^{ \prime}_{\text{ann}}\ \,\]
where, in the direct sum decompositions above, the summands are the creation and annihilation subspaces for the bilinear form \(\big{(}\cdot,\cdot\big{)}\). These subspaces are isotropic, in that,
\[\big{(}w_{\mathrm{cr}},w^{\prime}_{\mathrm{cr}}\big{)} =0\ \,\] \[\big{(}w_{\mathrm{ann}},w^{\prime}_{\mathrm{ann}}\big{)} =0\ \,\]
for \(w_{\mathrm{cr}},w^{\prime}_{\mathrm{cr}}\in\mathcal{W}_{\mathrm{cr}}\), and \(w_{\mathrm{ann}},w^{\prime}_{\mathrm{ann}}\in\mathcal{W}_{\mathrm{ann}}\). Analogous properties hold for the subspaces \(\mathcal{W}^{\prime}_{\mathrm{cr}}\), and \(\mathcal{W}^{\prime}_{\mathrm{ann}}\). As a result of the bilinear form over \(\mathcal{W}\) being nondegenerate, the subspaces of \(\mathcal{W}\) spanned by the creation and annihilation generators have dimension precisely half that of \(\mathcal{W}\). From the subspaces \(\mathcal{W}_{\mathrm{cr}}\) and \(\mathcal{W}_{\mathrm{ann}}\), one can introduce the decomposition of the exterior algebra of \(\mathcal{W}_{\mathrm{cr}}\), with,
\[\bigwedge\mathcal{W}_{\mathrm{cr}}=\bigoplus_{0\leq n\leq|\mathbf{I}^{\star}|}\big{(}\wedge^{n}\mathcal{W}_{\mathrm{cr}}\big{)}\ \,\]
into components for the \(n\) th wedge product of \(\mathcal{W}_{\mathrm{cr}}\) for the Ashkin-Teller model, or similarly,
\[\bigwedge\mathcal{W}^{\prime}_{\mathrm{cr}}=\bigoplus_{0\leq n\leq|\mathbf{I}^{**}|}\big{(}\wedge^{n}\mathcal{W}^{\prime}_{\mathrm{cr}}\big{)}\ \,\]
for the loop model. From these wedge product decompositions over a direct sum, for bases \(\big{\{}a^{\dagger}_{\alpha}\big{\}}_{1\leq\alpha\leq|\mathbf{I}^{\star}|}\) and \(\big{\{}a^{\prime,\dagger}_{\alpha}\big{\}}_{1\leq\alpha\leq|\mathbf{I}^{**}|}\) of \(\mathcal{W}_{\mathrm{cr}}\) and \(\mathcal{W}^{\prime}_{\mathrm{cr}}\), the action on the Fock spaces, \(\bigwedge\mathcal{W}_{\mathrm{cr}}\) and \(\bigwedge\mathcal{W}^{\prime}_{\mathrm{cr}}\), is given by,
\[a_{\alpha}\cdot\big{(}\underset{1\leq i\leq n}{\wedge}a^{\dagger}_{\beta_{i}}\big{)}\equiv\sum_{1\leq j\leq n}\big{(}-1\big{)}^{j-1}\delta_{\alpha,\beta_{j}}\big{(}a^{\dagger}_{\beta_{1}}\wedge\cdots\wedge a^{\dagger}_{\beta_{j-1}}\wedge a^{\dagger}_{\beta_{j+1}}\wedge\cdots\wedge a^{\dagger}_{\beta_{n}}\big{)}\ \,\]
corresponding to the contraction of the Ashkin-Teller basis element \(a_{\alpha}\) against the wedge product over \(n\) other basis elements, while, for the loop basis elements \(a^{\prime,\dagger}_{\alpha}\),
\[a^{\prime}_{\alpha}\cdot\big{(}\underset{1\leq i\leq n}{\wedge}a^{\dagger}_{\beta^{\prime}_{i}}\big{)}\equiv\sum_{1\leq j\leq n}\big{(}-1\big{)}^{j-1}\delta_{\alpha,\beta^{\prime}_{j}}\big{(}a^{\dagger}_{\beta^{\prime}_{1}}\wedge\cdots\wedge a^{\dagger}_{\beta^{\prime}_{j-1}}\wedge a^{\dagger}_{\beta^{\prime}_{j+1}}\wedge\cdots\wedge a^{\dagger}_{\beta^{\prime}_{n}}\big{)}\ \,\]
corresponding to the analogous contraction identity for the loop basis elements. Finally, define,
\[\mathrm{Pf}\big{[}A\big{]}\equiv\frac{1}{2^{m}m!}\sum_{\pi\in\mathbb{S}_{2m}} \mathrm{sgn}\big{(}\pi\big{)}\prod_{1\leq s\leq m}A_{\pi(2s-1),\pi(2s)}\ \,\]
corresponding to the Pfaffian of an antisymmetric matrix, for permutations \(\pi\) belonging to the \(2m\) letter symmetric group.
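For small matrices, the Pfaffian can be evaluated directly from the permutation-sum definition above; the sketch below does so and checks the classical identity \(\mathrm{Pf}\big{[}A\big{]}^{2}=\mathrm{det}\big{(}A\big{)}\) on a random antisymmetric matrix. The helper is illustrative only, as the sum over \(\mathbb{S}_{2m}\) is feasible only for very small \(m\).

```python
import math
import numpy as np
from itertools import permutations

def pfaffian(A):
    """Pfaffian via the permutation-sum definition; feasible only for small 2m x 2m."""
    n = A.shape[0]
    m = n // 2
    total = 0.0
    for perm in permutations(range(n)):
        # sgn(pi), computed by counting inversions
        sign = (-1) ** sum(perm[i] > perm[j] for i in range(n) for j in range(i + 1, n))
        total += sign * math.prod(A[perm[2 * s], perm[2 * s + 1]] for s in range(m))
    return total / (2 ** m * math.factorial(m))

rng = np.random.default_rng(2)
B = rng.random((4, 4))
A = B - B.T                                  # a random antisymmetric matrix
assert np.isclose(pfaffian(A) ** 2, np.linalg.det(A))   # Pf(A)^2 = det(A)
```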
#### 2.3.1 Ashkin-Teller model
To formalize the Fock representation, introduce the following.
**Lemma 10** (_Ashkin-Teller polarization_, **Lemma 11**, [13]). For a polarization of \(\mathcal{W}\), one has the following correspondence between irreducible representations of the algebra, and the Fock space representation of the creation subspace of \(\mathcal{W}\), in which,
\[\mathrm{Irr}\big{(}\mathcal{W}\big{)}\longleftrightarrow\bigwedge\mathcal{W}_ {\mathrm{cr}}\ \.\]
The correspondence above implies the existence of an embedding of the Fock representation into the irreducible representation space of \(\mathcal{W}\), for which,
\[\bigwedge_{1\leq j\leq n}\alpha_{\beta_{j}}^{\dagger}\mapsto\bigg{(}\prod_{1\leq j \leq n}\alpha_{\beta_{j}}^{\dagger}\bigg{)}v_{\text{vac}}\ \,\]
where \(v_{\text{vac}}\) denotes an element of \(\mathcal{W}\subsetneq\mathcal{V}\) for which \(\mathcal{W}_{\text{ann}}v_{\text{vac}}\equiv 0\).
_Proof of Lemma 10_. Begin by considering some representation of the algebra. Irrespective of irreducibility, fix some nonzero \(v^{(0)}\in\mathcal{V}\). Recursively introduce the following action of the basis elements on this vector, with,
\[v^{(\alpha)}\equiv a_{\alpha}v^{(\alpha-1)}\ \,\]
whenever,
\[a_{\alpha}v^{(\alpha-1)}\neq 0\ \,\]
and, otherwise,
\[v^{(\alpha)}\equiv v^{(\alpha-1)}\ \,\]
for \(1\leq\alpha\leq\big{|}\mathbf{I}^{*}\big{|}\). From the construction of the vector above, the vacuum vector stated in the **Lemma** is nonzero, and equals \(v^{(|\mathbf{I}^{*}|)}\), from which one has that \(\mathcal{W}_{\text{ann}}v_{\text{vac}}\equiv 0\). One can extend this argument to other subrepresentations of the Fock representation \(\bigwedge\mathcal{W}_{\text{cr}}\), in which any subrepresentation of the Fock representation, satisfying,
\[\bigwedge\big{(}\mathcal{W}_{\text{cr}}\big{)}^{\prime}\neq\emptyset\ \,\]
for \(\big{(}\mathcal{W}_{\text{cr}}\big{)}^{\prime}\subsetneq\mathcal{W}_{\text{cr}}\), contains a vacuum vector which we denote with \(1\in\bigwedge\mathcal{W}_{\text{cr}}\). Hence, the existence of the 1 vector in the Fock representation implies irreducibility of the representation. Besides demonstrating irreducibility of the Fock representation, the correspondence given in the statement of the **Lemma** above defines an intertwining operation which is an embedding by the irreducibility of the Fock representation. Furthermore, the intertwining operation between the irreducible representations of the algebra and the Fock representation implies that the operation, which is an embedding, must be surjective, and hence an isomorphism, from which we conclude the argument.
For the next result, we make use of the Pfaffian.
**Lemma 11** (_Ashkin-Teller Wick's Formula_, **Lemma 12**, [13]). For a polarization of \(\mathcal{W}\) and Fock representation of the creation subspace \(\mathcal{W}_{\text{cr}}\), fix a vacuum vector \(v_{\text{vac}}\in\bigwedge\big{(}\mathcal{W}_{\text{cr}}\big{)}^{\prime}\), and a dual vacuum vector \(v_{\text{vac}}^{*}\in\bigwedge\big{(}\mathcal{W}_{\text{cr}}^{*}\big{)}^{\prime}\). Under the choice of these basis elements such that \(<v_{\text{vac}},v_{\text{vac}}^{*}>\equiv 1\), for elements \(\phi_{1},\cdots,\phi_{n}\in\mathcal{W}\),
\[<v_{\text{vac}}^{*}\big{(}\prod_{1\leq i\leq n}\phi_{i}\big{)}v_{\text{vac}}> \ \ \equiv\bigcup_{1\leq i,j\leq n}\text{Pf}\bigg{[}<v_{\text{vac}}^{*}\big{(} \prod_{k\in\{i,j\}}\phi_{k}\big{)}v_{\text{vac}}>\bigg{]}\ \.\]
_Proof of Lemma 11_. To demonstrate that the desired expression for the inner product between the dual vacuum vector, product of \(\phi_{i}\), and the vacuum vector can be expressed as a union over Pfaffians, express \(\phi_{i}\) as,
\[\phi_{i}\equiv\alpha_{i}w_{\text{cr}}+\beta_{i}w_{\text{ann}}\ \,\]
for real \(\alpha_{i},\beta_{i}\), with \(w_{\text{cr}}\in\mathcal{W}_{\text{cr}}\) and \(w_{\text{ann}}\in\mathcal{W}_{\text{ann}}\). One can similarly define linear combinations for the remaining fields \(\phi_{2},\cdots,\phi_{n}\). From the linear combination above, in the creation and annihilation bases, write,
\[<v_{\text{vac}}^{*}\big{(}\prod_{1\leq i\leq n}\big{(}\alpha_{i}w_{\text{cr}} +\beta_{i}w_{\text{ann}}\big{)}\big{)}v_{\text{vac}}>\ \,\]
which is equivalent to, after distributing terms to the product over \(1\leq i\leq n\),
\[<\prod_{1\leq i\leq n}\big{(}v_{\text{vac}}^{*}\alpha_{i}w_{\text{cr}}v_{\text{ vac}}+v_{\text{vac}}^{*}\beta_{i}w_{\text{ann}}v_{\text{vac}}\big{)}>\ \,\]
from which we perform the following anticommutation calculation, (AC), which yields,
\[<\prod_{1\leq i\leq n}\big{(}\big{(}\alpha_{i}w_{\text{cr}}v_{\text{vac}} \big{)}^{\dagger}v_{\text{vac}}^{*}+\big{(}\beta_{i}w_{\text{ann}}v_{\text{ vac}}\big{)}^{\dagger}v_{\text{vac}}^{*}\big{)}>\ \,\]
from which we obtain the desired expression, from the union of Pfaffians,
\[\bigcup_{1\leq i,j\leq n}\text{Pf}\bigg{[}\prod_{1\leq i,j\leq n}\bigg{(}<v_{ \text{vac}}^{*}\alpha_{i}w_{\text{cr}}v_{\text{vac}}+v_{\text{vac}}^{*}\beta_ {i}w_{\text{ann}}v_{\text{vac}}>\bigg{)}\bigg{]}\ \,\]
which we rewrite as,
\[\bigcup_{1\leq i,j\leq n}\text{Pf}\bigg{[}<\prod_{1\leq i\leq n}\big{(}\big{(}\alpha_{i}w_{\text{cr}}v_{\text{vac}}\big{)}^{\dagger}v_{\text{vac}}^{*}+\big{(}\beta_{i}w_{\text{ann}}v_{\text{vac}}\big{)}^{\dagger}v_{\text{vac}}^{*}\big{)}>\bigg{]}\] \[\equiv\bigcup_{1\leq i,j\leq n}\text{Pf}\bigg{[}<\prod_{1\leq i\leq n}\Big{(}\big{(}\alpha_{i}w_{\text{cr}}v_{\text{vac}}\big{)}^{\dagger}v_{\text{vac}}^{*}\Big{)}>+<\prod_{1\leq i\leq n}\Big{(}\big{(}\beta_{i}w_{\text{ann}}v_{\text{vac}}\big{)}^{\dagger}v_{\text{vac}}^{*}\Big{)}>\bigg{]}\] \[\equiv\bigcup_{1\leq i,j\leq n}\bigg{[}\text{Pf}\bigg{[}<\prod_{1\leq i\leq n}\Big{(}\alpha_{i}^{\dagger}w_{\text{cr}}^{\dagger}v_{\text{vac}}^{\dagger}v_{\text{vac}}^{*}\Big{)}>\bigg{]}+\text{Pf}\bigg{[}<\prod_{1\leq i\leq n}\Big{(}\beta_{i}^{\dagger}w_{\text{cr}}^{\dagger}w_{\text{ann}}^{\dagger}v_{\text{vac}}^{*}\Big{)}>\bigg{]}\ \bigg{]}\]
which, for the final steps, implies,
\[\bigcup_{1\leq i,j\leq n}\text{Pf}\bigg{[}<v_{\text{vac}}^{*} \bigg{(}\big{(}\prod_{1\leq i\leq n}\alpha_{i}\big{)}w_{\text{cr}}\bigg{)}> \bigg{]}+\bigcup_{1\leq i,j\leq n}\text{Pf}\bigg{[}<\Big{(}\big{(}\prod_{1\leq i \leq n}\beta_{i}\big{)}w_{\text{ann}}\bigg{)}v_{\text{vac}}>\bigg{]}\] \[\equiv\bigcup_{1\leq i,j\leq n}\text{Pf}\bigg{[}<v_{\text{vac}}^{* }\bigg{(}\big{(}\alpha_{1}\times\cdots\times\alpha_{n}\big{)}w_{\text{cr}} \bigg{)}+\bigg{(}(\beta_{1}\times\cdots\times\beta_{n}\big{)}w_{\text{ann}} \bigg{)}v_{\text{vac}}>\bigg{]}\] \[\equiv\bigcup_{1\leq i,j\leq n}\text{Pf}\bigg{[}<v_{\text{vac}}^{* }\big{(}\prod_{k\in\{i,j\}}\phi_{k}\big{)}v_{\text{vac}}>\bigg{]}\ \,\]
from which we conclude the argument.
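In the simplest nontrivial case, \(n\equiv 4\), the Pfaffian in **Lemma 11** reduces to the signed sum over the three pair partitions \(\big{\{}1,2\big{\}}\big{\{}3,4\big{\}}\), \(\big{\{}1,3\big{\}}\big{\{}2,4\big{\}}\), \(\big{\{}1,4\big{\}}\big{\{}2,3\big{\}}\). A hedged numerical sketch, with random antisymmetric placeholder data standing in for the two-point functions:

```python
import numpy as np

# n = 4 instance: the Pfaffian is the signed sum over the three pair partitions,
# with A[i, j] a placeholder for the two-point function <v* phi_i phi_j v>.
rng = np.random.default_rng(4)
B = rng.random((4, 4))
A = B - B.T
pf = A[0, 1] * A[2, 3] - A[0, 2] * A[1, 3] + A[0, 3] * A[1, 2]
assert np.isclose(pf ** 2, np.linalg.det(A))   # consistent with Pf(A)^2 = det(A)
```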
We introduce the final item of the subsection below, for polarization at low temperatures.
**Lemma 12** (_Ashkin-Teller polarization at low temperature_, **Lemma 13**, [13]). One has, for the Ashkin-Teller polarization, that,
\[\mathcal{W}_{\text{cr}}^{(+)}\equiv\text{span}\big{\{}p_{k}^{\text{ AT}}-iq_{k}^{\text{AT}}\bigm{|}k\in\boldsymbol{\Gamma}^{\star}\big{\}}\ \,\] \[\mathcal{W}_{\text{ann}}^{(+)}\equiv\text{span}\big{\{}p_{k}^{\text{ AT}}+iq_{k}^{\text{AT}}\bigm{|}k\in\boldsymbol{\Gamma}^{\star}\big{\}}\ \.\]
_Proof of Lemma 12_. It is straightforward to check, by direct computation, that the creation and annihilation subspaces are spanned by the sets of vectors given above.
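As an informal illustration of the spans in **Lemma 12**, in the two-dimensional toy representation \(p\equiv\sigma_{x}\), \(q\equiv\sigma_{y}\) (an assumption for illustration, not the Ashkin-Teller generators), the combinations \(p\mp iq\) are, up to a factor of two, the two nilpotent ladder matrices:

```python
import numpy as np

# Toy representation p = sigma_x, q = sigma_y: p -/+ i q are (twice) the two
# nilpotent ladder matrices, mirroring the creation/annihilation spans above.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)

assert np.allclose((sx - 1j * sy) / 2, [[0, 0], [1, 0]])   # p - i q ~ "creation"
assert np.allclose((sx + 1j * sy) / 2, [[0, 1], [0, 0]])   # p + i q ~ "annihilation"
```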
**Lemma 13** (_Ashkin-Teller polarization in the vanishing temperature limit_, **Lemma 15**, [13]). For \(N\in\mathbf{N}\), a polarization is defined from the two subspaces,
\[\mathcal{W}_{\mathrm{cr}}^{(+),N}\equiv\mathrm{span}\big{\{}V^{-N} \big{(}p_{k}^{\mathrm{AT}}-iq_{k}^{\mathrm{AT}}\big{)}V^{N}\bigm{|}k\in\mathbf{ I}^{*}\big{\}}\enspace,\] \[\mathcal{W}_{\mathrm{ann}}^{(+),N}\equiv\mathrm{span}\big{\{}p_{k }^{\mathrm{AT}}+iq_{k}^{\mathrm{AT}}\bigm{|}k\in\mathbf{I}^{*}\big{\}}\enspace,\]
for countably many inverse temperatures \(\beta\), given a wedge product,
\[\bigwedge\mathcal{W}_{\mathrm{cr}}^{(+),N}\enspace,\]
and vacuum vectors,
\[v_{\mathrm{vac}}^{(+),N}\equiv e_{(+)}\enspace,\] \[\big{(}v_{\mathrm{vac}}^{(+),N}\big{)}^{*}\equiv\big{(}e_{(+)} \big{)}^{*}\equiv\frac{e_{(+)}^{\mathrm{T}}V^{N}}{e_{(+)}^{\mathrm{T}}V^{N}e_ {(+)}}\enspace.\]
_Proof of Lemma 13_. It is straightforward to check, by direct computation, that the creation and annihilation subspaces are spanned by the sets of vectors given above, in which \(\mathcal{S}_{(+)}\stackrel{\sim}{=}\wedge\mathcal{W}_{\mathrm{cr}}^{(+),N}\), and \(\mathcal{S}_{(-)}\stackrel{\sim}{=}\wedge\mathcal{W}_{\mathrm{ann}}^{(+),N}\) (refer to the arguments for **Lemma 15** in [13]).
In comparison to the polarization above which establishes an isomorphism between the direct sum of the annihilator and creation subspaces and the Fock representation, we introduce the physical polarization below which is determined by the eigenvectors of \(T_{V}\).
**Lemma 14** (_physical polarization for the Ashkin-Teller model_, **Lemma 16**, [13]). For the subspace \(\mathcal{W}_{\mathrm{cr}}^{\mathrm{phys}}\subsetneq\mathcal{W}\) spanned by eigenvectors of \(T_{V}\) with eigenvalues \(<1\), and the subspace \(\mathcal{W}_{\mathrm{ann}}^{\mathrm{phys}}\subsetneq\mathcal{W}\) spanned by eigenvectors of \(T_{V}\) with eigenvalues \(>1\), we say that the direct sum \(\mathcal{W}_{\mathrm{cr}}^{\mathrm{phys}}\oplus\mathcal{W}_{\mathrm{ann}}^{\mathrm{phys}}\) is a physical polarization of the Ashkin-Teller model. Hence, \(\mathcal{S}_{+}\stackrel{\sim}{=}\wedge\mathcal{W}_{\mathrm{cr}}^{\mathrm{phys}}\).
_Proof of Lemma 14_. To exhibit that the form of the physical polarization holds, recall from previous arguments that, if \(1\) is not an eigenvalue, then for eigenvectors \(u,v\in\mathcal{W}\) of \(T_{V}\), the invariance \(\big{(}T_{V}u,T_{V}v\big{)}\equiv\big{(}u,v\big{)}\) forces \(\big{(}u,v\big{)}\equiv 0\) unless the corresponding eigenvalues are reciprocal to one another. Hence the bilinear form \(\big{(}\cdot,\cdot\big{)}\) vanishes identically when restricted to \(\mathcal{W}_{\mathrm{cr}}^{\mathrm{phys}}\), or to \(\mathcal{W}_{\mathrm{ann}}^{\mathrm{phys}}\). As a result, \(T_{V}\) is diagonalizable, with eigenvalues that are real, none of which equal \(1\), from which we conclude the argument.
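Informally, a physical polarization is obtained by sorting eigenvectors by the magnitude of their eigenvalues. A minimal numerical sketch with a random symmetric placeholder matrix, under the assumption that no eigenvalue has magnitude exactly \(1\):

```python
import numpy as np

# Sorting eigenvectors by eigenvalue magnitude into "creation" (|lambda| < 1) and
# "annihilation" (|lambda| > 1) spans; M is a random symmetric placeholder.
rng = np.random.default_rng(3)
M = rng.random((6, 6))
M = M + M.T                               # symmetric, hence real spectrum
eigvals, eigvecs = np.linalg.eig(M)

W_cr = eigvecs[:, np.abs(eigvals) < 1]
W_ann = eigvecs[:, np.abs(eigvals) > 1]
assert W_cr.shape[1] + W_ann.shape[1] == 6    # holds when no |lambda| equals 1
```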
Besides the polarization and its physical counterpart, we list the properties below.
**Proposition 4** (_properties of the Ashkin-Teller polarization_, **Proposition 17**, [13]). Given a physical polarization, introduce the basis,
\[\mathrm{span}\big{\{}\alpha_{\alpha}^{|\mathbf{I}^{\star}|}\bigm{|}1\leq\alpha\leq\big{|}\mathbf{I}^{\star}\big{|}\big{\}}\enspace,\]
for \(\mathcal{W}_{\mathrm{ann}}^{\mathrm{phys}}\), which under the induced rotation \(T_{V}\) yields a basis of the form,
\[T_{V}\bigg{(}\mathrm{span}\big{\{}\alpha_{\alpha}^{|\mathbf{\Gamma}^{\mathbf{ \Gamma}}|}\bigm{|}1\leq\alpha\leq\bigm{|}\mathbf{I}^{*}\big{|}\big{\}}\bigg{)}= \mathrm{span}\big{\{}\lambda_{\alpha}a_{\alpha}\bigm{|}1\leq\alpha\leq\bigm{|} \mathbf{I}^{*}\big{|}\big{\}}\enspace,\]
while the basis for the creation subspace takes the form,
\[\bigg{(}\mathrm{span}\big{\{}\alpha_{\alpha}^{|\mathbf{I}^{\star}|}\bigm{|}1\leq\alpha\leq\big{|}\mathbf{I}^{\star}\big{|}\big{\}}\bigg{)}^{\dagger}\equiv\mathrm{span}\big{\{}\big{(}\alpha_{\alpha}^{|\mathbf{I}^{\star}|}\big{)}^{\dagger}\bigm{|}1\leq\alpha\leq\big{|}\mathbf{I}^{\star}\big{|}\big{\}}\enspace,\]
which under the induced rotation \(T_{V}\) yields a basis of the form,
\[T_{V}\bigg{(}\mathrm{span}\big{\{}\big{(}\alpha_{\alpha}^{|\mathbf{I}^{\star}|}\big{)}^{\dagger}\bigm{|}1\leq\alpha\leq\big{|}\mathbf{I}^{\star}\big{|}\big{\}}\bigg{)}=\mathrm{span}\big{\{}\lambda_{\alpha}^{-1}a_{\alpha}^{\dagger}\bigm{|}1\leq\alpha\leq\big{|}\mathbf{I}^{\star}\big{|}\big{\}}\enspace.\]
With the four bases above, it is possible, for some \(v\in\mathcal{S}\), that:
* Case one: The product \(a_{\alpha}^{\dagger}v\equiv 0\in\mathcal{S}\).
* Case two: The product \(a_{\alpha}^{\dagger}v\neq 0\in\mathcal{S}\).
For the remaining possibility, we also have:
* Case one: The product \(a_{\alpha}v\equiv 0\in\mathcal{S}\).
* Case two: The product \(a_{\alpha}v\neq 0\in\mathcal{S}\).
From the first two cases above, it is possible that the eigenvectors living in \(\mathcal{S}\) take the form:
* Case one: From the product \(a_{\alpha}^{\dagger}v\) given in Case one above, the eigenvalue \(\lambda_{\alpha}^{\dagger}\equiv 0\).
* Case two: From the product \(a_{\alpha}^{\dagger}v\) given in Case two above, the eigenvalue \(\lambda_{\alpha}^{\dagger}\neq 0\).
From the second two cases above, it is possible that the eigenvectors living in \(\mathcal{S}\) take the form:
* Case one: From the product \(a_{\alpha}v\) given in Case one above, the eigenvalue \(\lambda_{\alpha}\equiv 0\).
* Case two: From the product \(a_{\alpha}v\) given in Case two above, the eigenvalue \(\lambda_{\alpha}\neq 0\).
From the four possible cases described above, if we denote \(\Lambda_{0}\) as the largest eigenvalue in the spectrum of \(\mathcal{V}\), with corresponding eigenvector \(v_{\rm vac}^{\rm phys}\in\mathcal{S}_{+}\), then \(\mbox{span}\big{\{}\prod_{1\leq i\leq n}\alpha_{\alpha_{i}}^{\dagger}v_{\rm vac }\bigm{|}v_{\rm vac}\in\mathcal{V}\big{\}}\) forms a basis of \(\mathcal{S}_{+}\), with spectrum,
\[\mbox{spec}\big{(}\mathcal{S}_{+}\big{)}=\Lambda_{0}\prod_{1\leq s\leq n} \lambda_{\alpha_{s}}^{-1}\enspace.\]
_Proof of Proposition 4_. For an eigenvector \(v\in\mathcal{S}\), which satisfies the eigenvalue-eigenvector equation \(Vv=\Lambda v\), for the basis elements, \(\big{\{}a_{\alpha}\big{\}}\), \(Va_{\alpha}v\equiv\big{(}Va_{\alpha}V^{-1}\big{)}Vv\equiv T_{V}\big{(}a_{\alpha}\big{)}Vv\equiv\lambda_{\alpha}\Lambda a_{\alpha}v\), and similarly, for the dual basis elements \(\big{\{}a_{\alpha}^{\dagger}\big{\}}\), one has, \(Va_{\alpha}^{\dagger}v\equiv\big{(}Va_{\alpha}^{\dagger}V^{-1}\big{)}Vv\equiv T_{V}\big{(}a_{\alpha}^{\dagger}\big{)}Vv\equiv\lambda_{\alpha}^{\dagger}\Lambda a_{\alpha}^{\dagger}v\). To demonstrate that the remaining item holds, observe that \(v_{\rm vac}^{\rm phys}\) is annihilated by any vector of the annihilator subspace \(\mathcal{W}_{\rm ann}\), because,
\[\lambda_{\alpha}\Lambda_{0}>\sup_{v}\big{\{}v:v\in\mbox{spec}\big{(}V\big{)} \big{\}}\enspace,\]
from which we conclude the argument.
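The spectrum statement at the end of the **Proposition** can be sketched numerically: the Fock levels are \(\Lambda_{0}\) multiplied by products of inverse one-particle eigenvalues over subsets. The numerical values below are arbitrary placeholders.

```python
import numpy as np
from itertools import combinations

# Fock levels as Lambda_0 times products of inverse one-particle eigenvalues over
# subsets; the numerical values are arbitrary placeholders.
Lambda_0 = 2.5
lambdas = [1.7, 2.2, 3.1]

fock_spectrum = sorted(
    Lambda_0 * np.prod([1.0 / lambdas[i] for i in S])
    for n in range(len(lambdas) + 1)
    for S in combinations(range(len(lambdas)), n)
)
print(fock_spectrum)        # 2^3 = 8 levels, with maximum Lambda_0 itself
```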
We conclude the subsection with the following result.
**Theorem 2** (_isomorphism between the complexified propagation operator and a direct sum of wedge products_, **Theorem 18**, [13]). From the complexified Ashkin-Teller propagator, \(\big{(}P^{\rm AT}\big{)}^{\bf C}\), denote by \(\mathcal{W}_{<}\) the subspace spanned by eigenvectors of the complexified propagator with magnitude \(<1\). From the exterior algebra,
\[\bigwedge\mathcal{W}_{<}=\bigoplus_{0\leq n\leq|\mathbf{I}^{\star}|}\wedge^{n}\mathcal{W}_{<}\enspace,\]
introduce,
\[\rho_{+}:\mathcal{S}_{+}\longrightarrow\bigwedge\mathcal{W}_{<}\enspace,\]
so that,
\[\rho_{+}\cdot V\cdot\rho_{+}^{-1}=\mathcal{C}\big{(}\rho_{+},V\big{)}\Gamma\bigg{(}\big{(}P^{\rm AT}\big{)}^{\bf C}\bigg{)}\enspace,\]
for some constant \(\mathcal{C}\big{(}\rho_{+},V\big{)}\equiv\mathcal{C}\), and,
\[\Gamma\big{(}\big{(}P^{\mathrm{AT}}\big{)}^{\mathbf{C}}\big{)}=\bigoplus_{0\leq n\leq|\mathbf{I}^{\star}|}\big{(}\big{(}P^{\mathrm{AT}}\big{)}^{\mathbf{C}}|_{\mathcal{W}_{<}}\big{)}^{\otimes n}\enspace,\]
for parameters in the complexified propagator which are below the \(\frac{1}{4}\!\log\!\big{(}3\big{)}\) threshold.
_Proof of Theorem 2_. To demonstrate that the result above holds, observe, from a previous result, that the isomorphism,
\[\mathcal{S}_{+}\stackrel{{\simeq}}{{=}}\bigwedge\left(\mathcal{ W}_{\mathrm{cr}}^{\mathrm{phys}}\oplus\mathcal{W}_{\mathrm{ann}}^{\mathrm{phys}}\right)\enspace,\]
holds. From the previous result, the transfer matrix of the Ashkin-Teller model, in the basis,
\[\mathrm{span}\big{\{}\mathop{\bigwedge}\limits_{1\leq i\leq n}a_{\alpha_{i}}^{\dagger}\ \big{|}\ 1\leq\alpha_{i}\leq|\mathbf{I}^{\star}|\big{\}}\enspace,\]
has the same eigenvalues as the direct sum of transfer matrices raised to the \(n\) th power of the tensor product,
\[\bigoplus_{0\leq n\leq|\mathbf{I}^{\star}|}\big{(}T_{V}|_{W_{\mathrm{cr}}^{\mathrm{phys}}\oplus W_{\mathrm{ann}}^{\mathrm{phys}}}\big{)}^{\otimes n}\enspace.\]
Hence, the induced rotation,
\[T_{V}:\mathcal{W}\longrightarrow\mathcal{W}\enspace,\]
satisfies,
\[T_{V}\stackrel{\sim}{=}\big{(}P^{\mathrm{AT}}\big{)}^{\mathbf{C}}\enspace.\]
Similarly, two other isomorphisms hold, in which,
\[T|_{W_{\mathrm{cr}}^{\mathrm{phys}}}\stackrel{\simeq}{=}\big{(}P^{\mathrm{AT}}|_{(\mathcal{W}_{<})_{\mathrm{cr}}}\big{)}^{\mathbf{C}}\enspace,\] \[T|_{W_{\mathrm{ann}}^{\mathrm{phys}}}\stackrel{\simeq}{=}\big{(}P^{\mathrm{AT}}|_{(\mathcal{W}_{<})_{\mathrm{ann}}}\big{)}^{\mathbf{C}}\enspace.\]
Hence,
\[T|_{W_{\mathrm{cr}}^{\mathrm{phys}}}\bigoplus T|_{W_{\mathrm{ann}}^{\mathrm{phys}}}\stackrel{\simeq}{=}\big{(}P^{\mathrm{AT}}|_{(\mathcal{W}_{<})_{\mathrm{cr}}}\big{)}^{\mathbf{C}}\bigoplus\big{(}P^{\mathrm{AT}}|_{(\mathcal{W}_{<})_{\mathrm{ann}}}\big{)}^{\mathbf{C}}\stackrel{\simeq}{=}\bigoplus_{(\mathcal{W}^{\nu})_{<}\in\{(\mathcal{W}_{<})_{\mathrm{cr}},(\mathcal{W}_{<})_{\mathrm{ann}}\}}\big{(}P^{\mathrm{AT}}|_{(\mathcal{W}^{\nu})_{<}}\big{)}^{\mathbf{C}}\stackrel{\simeq}{=}\big{(}P^{\mathrm{AT}}|_{\mathcal{W}_{<}}\big{)}^{\mathbf{C}}\enspace,\]
from which we conclude the argument.
#### 2.3.2 Loop model
To formalize the Fock representation, introduce the following.
**Lemma \(10^{\star}\)** (_loop polarization_). For a polarization of \(\mathcal{W}^{\prime}\), one has the following correspondence between irreducible representations of the algebra, and the Fock space representation of the creation subspace of \(\mathcal{W}^{\prime}\), in which,
\[\mathrm{Irr}\big{(}\mathcal{W}^{\prime}\big{)}\longleftrightarrow\bigwedge \mathcal{W}^{\prime}_{\mathrm{cr}}\enspace.\]
The correspondence above implies the existence of an embedding of the Fock representation into the irreducible representation space of \(\mathcal{W}^{\prime}\), for which,
\[\bigwedge_{1\leq j\leq n}\alpha^{\dagger}_{\beta^{\prime}_{j}}\mapsto\Big{(} \prod_{1\leq j\leq n}\alpha^{\dagger}_{\beta^{\prime}_{j}}\Big{)}v_{\text{vac}} \ \,\]
where \(v_{\text{vac}}\) denotes an element of \(\mathcal{W}^{\prime}\subsetneq\mathcal{V}\) for which \(\mathcal{W}^{\prime}_{\text{ann}}v_{\text{vac}}\equiv 0\).
_Proof of Lemma \(10^{*}\)_. Begin by considering some representation of the algebra. Irrespective of irreducibility, fix some nonzero \(v^{(0)}\in\mathcal{V}^{\prime}\). Recursively introduce the following action of the basis elements on this vector, with,
\[v^{(\alpha)}\equiv a_{\alpha}v^{(\alpha-1)}\ \,\]
whenever,
\[a_{\alpha}v^{(\alpha-1)}\neq 0\ \,\]
and, otherwise,
\[v^{(\alpha)}\equiv v^{(\alpha-1)}\ \,\]
for \(1\leq\alpha\leq\big{|}\mathbf{I}^{**}\big{|}\). With this recursive definition of the vector above, the same conclusions provided in **Lemma 10** follow, from which we conclude the argument.
For the next result, we make use of the Pfaffian.
**Lemma \(11^{*}\)** (_Loop Wick's Formula_). For a polarization of \(\mathcal{W}^{\prime}\) and Fock representation of the creation subspace \(\mathcal{W}^{\prime}_{\text{cr}}\), fix a vacuum vector \(v_{\text{vac}}\in\bigwedge\big{(}\mathcal{W}^{\prime}_{\text{cr}}\big{)}^{\prime}\), and a dual vacuum vector \(v^{*}_{\text{vac}}\in\bigwedge\big{(}\mathcal{W}^{\prime,*}_{\text{cr}}\big{)}^ {\prime}\). Under the choice of these basis elements such that \(<v_{\text{vac}},v^{*}_{\text{vac}}>\equiv 1\), for elements \(\phi_{1},\cdots,\phi_{n}\in\mathcal{W}^{\prime}\),
\[<v^{*}_{\text{vac}}\big{(}\prod_{1\leq i\leq n}\phi_{i}\big{)}v_{\text{vac}}> \ \ \equiv\bigcup_{1\leq i,j\leq n}\text{Pf}\bigg{[}<v^{*}_{\text{vac}}\big{(} \prod_{k\in\{i,j\}}\phi_{k}\big{)}v_{\text{vac}}>\bigg{]}\ \.\]
_Proof of Lemma \(11^{*}\)_. Perform computations along the lines of those provided for **Lemma 11** in the previous subsection.
We introduce the final item of the subsection below, for polarization at low temperatures.
**Lemma \(12^{*}\)** (_loop polarization at low temperature_). One has, for the loop polarization, that,
\[\mathcal{W}^{\prime,(+)}_{\text{cr}} \equiv\text{span}\big{\{}p^{\text{loop}}_{u}-iq^{\text{loop}}_{ u}\ \big{|}\ u\in\mathbf{I}^{**}\big{\}}\ \,\] \[\mathcal{W}^{\prime,(+)}_{\text{ann}} \equiv\text{span}\big{\{}p^{\text{loop}}_{u}+iq^{\text{loop}}_{ u}\ \big{|}\ u\in\mathbf{I}^{**}\big{\}}\ \.\]
_Proof of Lemma \(12^{*}\)_. It is straightforward to check, by direct computation, that the creation and annihilation subspaces are spanned by the sets of vectors given above.
**Lemma \(13^{*}\)** (_loop polarization in the vanishing temperature limit_, **Lemma 15**, [13]). For \(N\in\mathbf{N}\), a polarization is defined from the two subspaces,
\[\mathcal{W}^{\prime,(+),N}_{\text{cr}} \equiv\text{span}\big{\{}V^{-N}\big{(}p^{\text{loop}}_{u}-iq^{\text{loop}}_{u}\big{)}V^{N}\ \big{|}\ u\in\mathbf{I}^{**}\big{\}}\ \,\] \[\mathcal{W}^{\prime,(+),N}_{\text{ann}} \equiv\text{span}\big{\{}p^{\text{loop}}_{u}+iq^{\text{loop}}_{u}\ \big{|}\ u\in\mathbf{I}^{**}\big{\}}\ \,\]
for countably many inverse temperatures \(\beta^{\text{loop}}\), given a wedge product,
\[\bigwedge\mathcal{W}^{\prime,(+),N}_{\text{cr}}\ \,\]
and vacuum vectors,
\[v^{\prime,(+),N}_{\rm vac}\equiv e^{\prime}_{(+)}\ \,\] \[\big{(}v^{\prime,(+),N}_{\rm vac}\big{)}^{*}\equiv\frac{e^{\prime, \rm T}_{(+)}V^{N}}{e^{\prime,\rm T}_{(+)}V^{N}e^{\prime}_{(+)}}\ \.\]
_Proof of Lemma 13\({}^{*}\)._ It is straightforward to check, by direct computation, that the creation and annihilation subspaces are spanned by the sets of vectors given above, in which \({\cal S}^{\prime}_{(+)}\stackrel{{\simeq}}{{=}}\wedge{\cal W}^{ \prime,(+),N}_{\rm cr}\), and \({\cal S}^{\prime}_{(-)}\stackrel{{\simeq}}{{=}}\wedge{\cal W}^{ \prime,(+),N}_{\rm ann}\) (refer to the arguments for **Lemma 15** in [13]).
In comparison to the polarization above which establishes an isomorphism between the direct sum of the annihilator and creation subspaces and the Fock representation, we introduce the physical polarization below which is determined by the eigenvectors of \(T_{V}\).
**Lemma 14\({}^{*}\)** (_physical polarization for the loop model_). For the subspace \(\mathcal{W}^{\prime,\rm phys}_{\rm cr}\subsetneq\mathcal{W}^{\prime}\) spanned by eigenvectors of \(T_{V}\) with eigenvalues \(<1\), and the subspace \(\mathcal{W}^{\prime,\rm phys}_{\rm ann}\subsetneq\mathcal{W}^{\prime}\) spanned by eigenvectors of \(T_{V}\) with eigenvalues \(>1\), we say that the direct sum \(\mathcal{W}^{\prime,\rm phys}_{\rm cr}\oplus\mathcal{W}^{\prime,\rm phys}_{\rm ann}\) is a physical polarization of the loop model. Hence, \(\mathcal{S}^{\prime}_{+}\stackrel{\sim}{=}\wedge\mathcal{W}^{\prime,\rm phys}_{\rm cr}\).
_Proof of Lemma 14\({}^{*}\)_. To exhibit that the form of the physical polarization holds, appeal to very similar arguments to those provided for **Lemma 14** of the previous subsection.
Besides the polarization and its physical counterpart, we list the properties below.
**Proposition**_Loop 1_ (_properties of the loop polarization_). Given a physical polarization, introduce the basis,
\[{\rm span}\big{\{}\alpha^{|{\bf I}^{**}|}_{\alpha}\bigm{|}1\leq\alpha\leq\big{|}{\bf I}^{**}\big{|}\big{\}}\ \,\]
for \({\cal W}^{\rm phys}_{\rm ann}\), which under the induced rotation \(T_{V}\) yields a basis of the form,
\[T_{V}\bigg{(}{\rm span}\big{\{}\alpha^{|{\bf I}^{**}|}_{\alpha}\bigm{|}1\leq\alpha\leq|{\bf I}^{**}|\big{\}}\bigg{)}={\rm span}\big{\{}\lambda_{\alpha}a_{\alpha}\bigm{|}1\leq\alpha\leq|{\bf I}^{**}|\big{\}}\ \,\]
while the basis for the creation subspace takes the form,
\[\bigg{(}{\rm span}\big{\{}\alpha^{|{\bf I}^{**}|}_{\alpha}\bigm{|}1\leq\alpha\leq|{\bf I}^{**}|\big{\}}\bigg{)}^{\dagger}\equiv{\rm span}\big{\{}\big{(}\alpha^{|{\bf I}^{**}|}_{\alpha}\big{)}^{\dagger}\bigm{|}1\leq\alpha\leq|{\bf I}^{**}|\big{\}}\ \,\]
which under the induced rotation \(T_{V}\) yields a basis of the form,
\[T_{V}\bigg{(}{\rm span}\big{\{}\big{(}\alpha^{|{\bf I}^{**}|}_{\alpha}\big{)}^{\dagger}\bigm{|}1\leq\alpha\leq|{\bf I}^{**}|\big{\}}\bigg{)}={\rm span}\big{\{}\lambda^{-1}_{\alpha}a^{\dagger}_{\alpha}\bigm{|}1\leq\alpha\leq\big{|}{\bf I}^{**}\big{|}\big{\}}\ \.\]
With the four bases above, it is possible, for some \(v\in{\cal S}^{\prime}\), that:
* _Case one_: The product \(a^{\prime,\dagger}_{\alpha}v\equiv 0\in{\cal S}^{\prime}\).
* _Case two_: The product \(a^{\prime,\dagger}_{\alpha}v\neq 0\in{\cal S}^{\prime}\).
For the remaining possibility, we also have:
* _Case one_: The product \(a^{\prime}_{\alpha}v\equiv 0\in{\cal S}^{\prime}\).
* _Case two_: The product \(a^{\prime}_{\alpha}v\neq 0\in{\cal S}^{\prime}\).
From the first two cases above, it is possible that the eigenvectors living in \(\mathcal{S}^{\prime}\) take the form:
* Case one: From the product \({a^{\prime}_{\alpha}}^{\dagger}v\) given in Case one above, the eigenvalue \(\lambda^{\prime,\dagger}_{\alpha}\equiv 0\).
* Case two: From the product \({a^{\prime}_{\alpha}}^{\dagger}v\) given in Case two above, the eigenvalue \(\lambda^{\prime,\dagger}_{\alpha}\neq 0\).
From the second two cases above, it is possible that the eigenvectors living in \(\mathcal{S}^{\prime}\) take the form:
* Case one: From the product \(a^{\prime}_{\alpha}v\) given in Case one above, the eigenvalue \(\lambda^{\prime}_{\alpha}\equiv 0\).
* Case two: From the product \(a^{\prime}_{\alpha}v\) given in Case two above, the eigenvalue \(\lambda^{\prime}_{\alpha}\neq 0\).
From the four possible cases described above, if we denote \(\Lambda_{0}\) as the largest eigenvalue in the spectrum of \(\mathcal{V}^{\prime}\), with corresponding eigenvector \(v_{\rm vac}^{\rm phys}\in\mathcal{S}^{\prime}_{+}\), then \(\mbox{span}\big{\{}\prod_{1\leq i\leq n}\alpha^{\dagger}_{\alpha_{i}}v^{\prime}_{\rm vac}\bigm{|}v^{\prime}_{\rm vac}\in\mathcal{V}^{\prime}\big{\}}\) forms a basis of \(\mathcal{S}^{\prime}_{+}\), with spectrum,
\[\mbox{spec}\big{(}\mathcal{S}^{\prime}_{+}\big{)}=\Lambda_{0}\prod_{1\leq s \leq n}\lambda^{\prime,-1}_{\alpha_{s}}\ \.\]
_Proof of Proposition Loop 1_. Identical results stated above for the loop model follow from arguments given in the previous subsection for the Ashkin-Teller model, from **Proposition 4**.
We conclude the subsection with the following result.
**Theorem \(2^{*}\)** (_isomorphism between the complexified propagation operator and a direct sum of wedge products_). From the complexified loop propagator, \(\big{(}P^{\rm loop}\big{)}^{\bf C}\), denote by \(\mathcal{W}_{<}\) the subspace spanned by eigenvectors of the complexified propagator with magnitude \(<1\). From the exterior algebra,
\[\bigwedge\mathcal{W}_{<}=\bigoplus_{0\leq n\leq|{\bf I}^{**}|}\wedge^{n}\mathcal{W}_{<}\ \,\]
introduce,
\[\rho^{\prime}_{+}:\mathcal{S}^{\prime}_{+}\longrightarrow\bigwedge\mathcal{W}_{<}\ \,\]
so that,
\[\rho^{\prime}_{+}\cdot V\cdot\big{(}\rho^{\prime}_{+}\big{)}^{-1}=\mathcal{C}\big{(}\rho^{\prime}_{+},V\big{)}\Gamma\bigg{(}\big{(}P^{\rm loop}\big{)}^{\bf C}\bigg{)}\ \,\]
for some constant \(\mathcal{C}\big{(}\rho^{\prime}_{+},V\big{)}\equiv\mathcal{C}\), and,
\[\Gamma\big{(}\big{(}P^{\rm loop}\big{)}^{\bf C}\big{)}=\bigoplus_{0\leq n\leq|{\bf I}^{**}|}\big{(}\big{(}P^{\rm loop}\big{)}^{\bf C}|_{\mathcal{W}_{<}}\big{)}^{\otimes n}\ \,\]
for parameters in the complexified propagator below Nienhuis' critical point.
_Proof of Theorem \(2^{*}\)_. Appeal to similar arguments as those for **Theorem 2** in the previous subsection.
### Operator correlations and observables
We analyze correlations and observables for each model in the subsections below.
#### 2.4.1 Quantum correspondence with the Ashkin-Teller model
We define the following. Over a finite volume of the square lattice which can be expressed with a Cartesian product of rows and columns, \(\mathscr{I}\times\mathscr{J}\), introduce, over the state space \(\big{\{}\pm 1\big{\}}^{\mathscr{I}\times\mathscr{J}}\),
\[\mathscr{P}^{\xi}_{\Lambda}\big{(}\sigma\big{)}\equiv\frac{\exp\!\left[\mathscr{ H}^{\mathrm{AT}}\big{(}\tau,\tau^{\prime}\big{)}\right]}{\mathscr{Z}^{\xi}_{ \Lambda}\big{(}\sigma\big{)}}\equiv\frac{\exp\!\left[\sum\limits_{i\sim j}\! \left[J\big{(}\tau\big{(}i\big{)}\tau\big{(}j\big{)}+\tau^{\prime}\big{(}i \big{)}\tau^{\prime}\big{(}j\big{)}\big{)}+U\big{(}\tau\big{(}i\big{)}\tau\big{(} j\big{)}\tau^{\prime}\big{(}i\big{)}\tau^{\prime}\big{(}j\big{)}\big{)}\right] \right]}{\mathscr{Z}^{\xi}_{\Lambda}\big{(}\sigma\big{)}}\enspace,\]
corresponding to the Ashkin-Teller measure, where the partition function can be placed into correspondence with the quantum state,
\[\mathscr{Z}^{\xi}_{\Lambda}\big{(}\sigma\big{)}\equiv\sum\limits_{\tau,\tau^{ \prime}\in\{\pm 1\}^{\mathscr{I}\times\mathscr{J}}}\exp\!\left[\mathscr{H}^{ \mathrm{AT}}\big{(}\tau,\tau^{\prime}\big{)}\right]\longleftrightarrow\langle f |\,V^{\mathrm{AT},N}\,|i\rangle\enspace,\]
where the states \(\langle f|\) and \(|i\rangle\) respectively denote the spins of the Ashkin-Teller configuration at the leftmost, and rightmost, boundaries, which satisfy,
\[f\equiv i\equiv\big{(}V^{\mathrm{AT},h}\big{)}^{\frac{1}{2}}e_{(+)}\equiv \exp\!\left[J\big{(}\frac{|\mathbf{\Gamma}^{\prime}|}{2}+\frac{|\mathbf{\Gamma}^{\prime}|}{2}\big{)}+U\big{(}\frac{|\mathbf{\Gamma}^{\prime}|}{2}\big{)}^{2}\right]\equiv\exp\!\left[\frac{|\mathbf{\Gamma}^{\prime}|}{2}\big{(}2J+\frac{U|\mathbf{\Gamma}^{\prime}|}{2}\big{)}\right]\enspace.\]
Under the stipulation that boundary spins are \(\equiv+1\), this means that for boundary conditions \(\xi\), we take,
\[\xi\equiv\big{\{}\sigma^{\mathrm{AT}}\equiv\big{(}\tau,\tau^{\prime}\big{)} \in\big{\{}\pm 1\big{\}}^{\mathscr{I}\times\mathscr{J}}\times\big{\{}\pm 1 \big{\}}^{\mathscr{I}\times\mathscr{J}}\big{\arrowvert}\ \langle f|\equiv+1,|i\rangle\equiv+1\big{\}}\enspace.\]
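As a hedged toy instance of the correspondence \(\mathscr{Z}^{\xi}_{\Lambda}\longleftrightarrow\langle f|\,V^{N}\,|i\rangle\), the sketch below carries out the same transfer-matrix bookkeeping for a one-dimensional Ising chain, where everything can be checked against brute-force enumeration; the chain, rather than the Ashkin-Teller model itself, is an illustrative assumption.

```python
import itertools
import numpy as np

# Toy correspondence Z <-> <f| V^N |i> for a 1D Ising chain: the same transfer-matrix
# bookkeeping as above, checkable against brute-force enumeration.
beta, N = 0.5, 8
V = np.array([[np.exp(beta), np.exp(-beta)],
              [np.exp(-beta), np.exp(beta)]])   # V[s, s'] = exp(beta * s * s')
boundary = np.ones(2)                           # free boundary: f = i = (1, 1)

Z_transfer = boundary @ np.linalg.matrix_power(V, N - 1) @ boundary
Z_brute = sum(
    np.exp(beta * sum(s[i] * s[i + 1] for i in range(N - 1)))
    for s in itertools.product([1, -1], repeat=N)
)
assert np.isclose(Z_transfer, Z_brute)
```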
Furthermore, there exists a spin operator, \(\hat{\tau}_{j}:\big{\{}\pm 1\big{\}}^{\mathscr{I}\times\mathscr{J}} \longrightarrow\big{\{}\pm 1\big{\}}^{\mathscr{I}\times\mathscr{J}}\) with the action,
\[\hat{\tau}_{j}\big{(}\tau\big{(}j\big{)}\big{)}\mapsto-\tau\big{(}j\big{)}\enspace,\] \[\hat{\tau}_{j}\big{(}\tau^{\prime}\big{(}j\big{)}\big{)}\mapsto-\tau^{\prime}\big{(}j\big{)}\enspace,\]
in which the spins of either Potts model at site \(j\) of the lattice are reversed. With respect to the measure \(\mathscr{P}^{\xi}_{\Lambda}\big{(}\cdot\big{)}\), the expectation for the Ashkin-Teller model can be written as,
\[\mathscr{E}^{\xi}_{\Lambda}\big{(}\sigma_{z}\big{)}=\sum\limits_{\tau_{z},\tau ^{\prime}_{z}\in\{\pm 1\}^{\mathscr{I}\times\mathscr{J}}}\sigma_{z}\;\mathrm{d} \mathscr{P}^{\xi}_{\Lambda}\big{(}\tau_{z},\tau^{\prime}_{z}\big{)}\equiv \frac{\sum\limits_{\tau,\tau^{\prime}\in\{\pm 1\}^{\mathscr{I} \times\mathscr{J}}}\sigma_{z}\mathrm{exp}\!\left[\mathscr{H}^{\mathrm{AT}} \big{(}\tau_{z},\tau^{\prime}_{z}\big{)}\right]}{\mathscr{Z}^{\xi}_{\Lambda} \big{(}\sigma_{z}\big{)}}\equiv\frac{\sum\limits_{\tau,\tau^{\prime}\in\{\pm 1\}^{ \mathscr{I}\times\mathscr{J}}}\sigma_{z}\mathrm{exp}\!\left[\mathscr{H}^{ \mathrm{AT}}\big{(}\tau_{z},\tau^{\prime}_{z}\big{)}\right]}{\sum\limits_{ \tau_{z},\tau^{\prime}_{z}\in\{\pm 1\}^{\mathscr{I}\times\mathscr{J}}}\exp\!\left[ \mathscr{H}^{\mathrm{AT}}\big{(}\tau_{z},\tau^{\prime}_{z}\big{)}\right]}\enspace,\]
for \(z=x+iy\). The final expression above is equivalent to the state,
\[\frac{\langle f|\,V^{\mathrm{AT},N-y}\big{(}\tau\big{(}i\big{)}\tau\big{(}j \big{)}\big{)}V^{\mathrm{AT},y}\,|i\rangle}{\langle f|\,V^{\mathrm{AT},N}\,|i \rangle}\enspace.\]
Generalizing the expectation above to a product over spins instead of a single spin at \(x\) yields the expression,
\[\mathscr{E}^{\xi}_{\Lambda}\!\bigg{(}\prod\limits_{x\in\mathbf{C}}\!\sigma_{x} \bigg{)}=\prod\limits_{x\in\mathbf{C}}\bigg{(}\;\frac{\sum\limits_{\tau,\tau^{ \prime}\in\{\pm 1\}^{\mathscr{I}\times\mathscr{J}}}\sigma_{x}\mathrm{exp}\!\left[ \mathscr{H}^{\mathrm{AT}}\big{(}\tau_{z},\tau^{\prime}_{z}\big{)}\right]}{ \sum\limits_{\tau_{z},\tau^{\prime}_{z}\in\{\pm 1\}^{\mathscr{I}\times\mathscr{J}}} \exp\!\left[\mathscr{H}^{\mathrm{AT}}\big{(}\tau,\tau^{\prime}\big{)}\right] }\bigg{)}\equiv\frac{\prod\limits_{x\in\mathbf{C}}\sum\limits_{\tau_{z},\tau^{ \prime}_{z}\in\{\pm 1\}^{\mathscr{I}\times\mathscr{J}}}\sigma_{x}\mathrm{exp}\!\left[ \mathscr{H}^{\mathrm{AT}}\big{(}\tau_{z},\tau^{\prime}_{z}\big{)}\right]}{ \sum\limits_{\tau,\tau^{\prime}\in\{\pm 1\}^{\mathscr{I}\times\mathscr{J}}}\exp\!\left[ \mathscr{H}^{\mathrm{AT}}\big{(}\tau_{z},\tau^{\prime}_{z}\big{)}\right]}\enspace,\]
which is equivalent to,
\[\frac{\langle e_{(+)}|\,V^{\mathrm{AT},N-y_{r}}\,\hat{\tau}_{x_{r}}\,V^{\mathrm{AT},y_{r}-y_{r-1}}\,\hat{\tau}_{x_{r-1}}\cdots\hat{\tau}_{x_{1}}\,V^{\mathrm{AT},y_{1}}\,|e_{(+)}\rangle}{\langle e_{(+)}|\,V^{\mathrm{AT},N}\,|e_{(+)}\rangle}\enspace,\]
for the points \(z_{i}=x_{i}+iy_{i}\in\mathbf{C}\), \(1\leq i\leq r\), ordered so that \(y_{1}\leq\cdots\leq y_{r}\).
Moreover, from the expressions above, observe,
\[\hat{\tau}\big{(}x+iy\big{)}\hat{\tau}\big{(}x+iy^{\prime}\big{)}=\bigg{(}V^{\mathrm{AT},-y}\big{(}\hat{\tau}_{x}\big{)}V^{\mathrm{AT},y}\bigg{)}\bigg{(}V^{\mathrm{AT},-y^{\prime}}\big{(}\hat{\tau}_{x}\big{)}V^{\mathrm{AT},y^{\prime}}\bigg{)}\equiv V^{\mathrm{AT},-y}\,\hat{\tau}_{x}\,V^{\mathrm{AT},y-y^{\prime}}\,\hat{\tau}_{x}\,V^{\mathrm{AT},y^{\prime}}\enspace.\]
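Continuing the toy chain from the previous sketch, inserting the spin operator at two heights mirrors the identity above and reproduces the brute-force two-point function; all parameters are again illustrative.

```python
import itertools
import numpy as np

# Two insertions of the spin operator, mirroring V^{-y} tau_x V^{y-y'} tau_x V^{y'},
# reproduce the brute-force two-point function on the toy chain.
beta, N, j, k = 0.5, 8, 2, 5
V = np.array([[np.exp(beta), np.exp(-beta)],
              [np.exp(-beta), np.exp(beta)]])
spin = np.diag([1.0, -1.0])                 # reads off the spin value at an insertion
b = np.ones(2)
mp = np.linalg.matrix_power

two_point_tm = (b @ mp(V, j) @ spin @ mp(V, k - j) @ spin @ mp(V, N - 1 - k) @ b) \
    / (b @ mp(V, N - 1) @ b)

weight = lambda s: np.exp(beta * sum(s[i] * s[i + 1] for i in range(N - 1)))
configs = list(itertools.product([1, -1], repeat=N))
two_point_brute = sum(s[j] * s[k] * weight(s) for s in configs) \
    / sum(weight(s) for s in configs)
assert np.isclose(two_point_tm, two_point_brute)
```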
From the generators \(p_{k}^{\mathrm{AT}}\) and \(q_{k}^{\mathrm{AT}}\) analyzed in _2.2.1_, over the complex plane,
\[\psi\big{(}k+iy\big{)} =V^{\mathrm{AT},-y}\psi_{k}V^{\mathrm{AT},y}\ \,\] \[\bar{\psi}\big{(}k+iy\big{)} =V^{\mathrm{AT},-y}\bar{\psi}_{k}V^{\mathrm{AT},y}\ \,\]
for \(k\in\mathbf{I}^{\star}\). Recall,
\[\psi\big{(}k+iy\big{)} =\frac{i}{\sqrt{2}}\big{(}p_{k}^{\mathrm{AT}}+q_{k}^{\mathrm{AT} }\big{)}\ \,\] \[\bar{\psi}\big{(}k+iy\big{)} =\frac{1}{\sqrt{2}}\big{(}p_{k}^{\mathrm{AT}}-q_{k}^{\mathrm{AT} }\big{)}\ \.\]
Over \(\mathscr{I}\times\mathscr{J}\), for,
\[\psi^{\mathrm{AT}}\equiv\psi\ \,\]
the multipoint correlation function for the fermion operator takes the form, for \(\big{\{}z_{i}\big{\}}_{i\in\mathbf{C}}\), by **Lemma 11**,
\[\big{\langle}\prod_{i}\psi\big{(}z_{i}\big{)}\big{\rangle}\equiv\mathrm{Pf}\Big{[}\big{\langle}\psi\big{(}z_{i}\big{)}\psi\big{(}z_{j}\big{)}\big{\rangle}\Big{]}\enspace.\]
Under the induced rotation introduced in _2.2.1_, the operators \(\psi\big{(}z\big{)}\) and \(\psi\big{(}\bar{z}\big{)}\) have the image,
\[R\big{(}\psi\big{(}z\big{)}\big{)} =\psi\big{(}\bar{r}\big{(}z\big{)}\big{)}\ \,\] \[R\big{(}\psi\big{(}\bar{z}\big{)}\big{)} =\psi\big{(}r\big{(}z\big{)}\big{)}\ \,\]
where in each image under the induced rotation above,
\[r\big{(}z\big{)}=r\big{(}x+iy\big{)}=a+b-\big{(}x-iy\big{)}\enspace,\] \[\bar{r}\big{(}z\big{)}=\overline{r\big{(}z\big{)}}=a+b-\big{(}x+iy\big{)}\enspace,\]
corresponding to two functions of the complex arguments \(x+iy\) and \(x-iy\).
With the Ashkin-Teller measure and the correspondence with quantum states described in this section for the fermion operator, the following notion of s-holomorphicity holds. To distinguish between previous notions of massive, and massless, s-holomorphicity which hold for the Ising, Ashkin-Teller and loop models, in the statement below we denote the Ashkin-Teller fermion operator with \(\psi\big{(}z\big{)}\equiv\psi^{\mathrm{AT}-\mathrm{F}}\), \(\psi\big{(}\bar{z}\big{)}\equiv\psi^{\mathrm{AT}-\bar{\mathrm{F}}}\). The complexification of the Ashkin-Teller fermion operator is obtained from entries of the tuple \(\big{(}\psi^{\mathrm{AT}-\mathrm{F}},\psi^{\mathrm{AT}-\bar{\mathrm{F}}}\big{)}\).
**Theorem 3** (_massive s-holomorphicity for Ashkin-Teller fermions, from massive s-holomorphicity for Ising fermions_, **Theorem 19**, [13]). Fix \(z\in\mathbf{I}^{\star}\times\mathbf{J}\), and the same parameters \(\nu\) and \(\lambda\) provided in **Definition 1**, and **Definition 6**. For the Ashkin-Teller fermion operators \(\psi^{\mathrm{AT}-\mathrm{F}}\big{(}z\big{)}\) and \(\psi^{\mathrm{AT}-\bar{\mathrm{F}}}\big{(}z\big{)}\), there exist extensions of \(\psi^{\mathrm{AT}-\mathrm{F}}\), and of \(\psi^{\mathrm{AT}-\bar{\mathrm{F}}}\), to \(\mathbf{I}^{\star}\times\mathbf{J}\), such that,
\[\psi^{\rm AT-F}\big{(}N\big{)}+\nu^{-1}\lambda\psi^{\rm AT-\bar{F}}\big{(}N\big{)}=\nu^{-1}\psi^{\rm AT-F}\big{(}E\big{)}+\lambda\psi^{\rm AT-\bar{F}}\big{(}E\big{)}\ \,\] \[\psi^{\rm AT-F}\big{(}N\big{)}+\nu\lambda^{-1}\psi^{\rm AT-\bar{F}}\big{(}N\big{)}=\nu\psi^{\rm AT-F}\big{(}W\big{)}+\lambda^{-1}\psi^{\rm AT-\bar{F}}\big{(}W\big{)}\ \,\] \[\psi^{\rm AT-F}\big{(}S\big{)}+\nu\lambda^{3}\psi^{\rm AT-\bar{F}}\big{(}S\big{)}=\nu\psi^{\rm AT-F}\big{(}E\big{)}+\lambda^{3}\psi^{\rm AT-\bar{F}}\big{(}E\big{)}\ \,\] \[\psi^{\rm AT-F}\big{(}S\big{)}+\nu^{-1}\lambda^{-3}\psi^{\rm AT-\bar{F}}\big{(}S\big{)}=\nu^{-1}\psi^{\rm AT-F}\big{(}W\big{)}+\lambda^{-3}\psi^{\rm AT-\bar{F}}\big{(}W\big{)}\ \,\]
for any face of the complex plane with edges E,N,W,S. At the left, and right, boundary points of the finite volume, the fermion operators also satisfy,
\[\psi^{\rm AT-F}\big{(}a+iy\big{)}+i\psi^{\rm AT-\bar{F}}\big{(}a+iy\big{)}=0\ \,\] \[\psi^{\rm AT-F}\big{(}b+iy\big{)}+i\psi^{\rm AT-\bar{F}}\big{(}b+iy\big{)}=0\ \,\]
for all \(y\in{\bf J}^{*}\).
Proof of Theorem 3.: To exhibit that s-holomorphicity holds for the Ashkin-Teller fermion, we make use of notions similar to those used to demonstrate that the extension \(h\) exists, and is unique. That is, if we fix some point in the vertical direction, \(y\in{\bf J}^{*}\), and another point in the interior of the horizontal direction, \(z\in\big{(}{\bf I}_{y}\backslash\partial{\bf I}_{y}\big{)}\equiv I\big{(}{\bf I}_{y}\big{)}\), then the equations for the Ashkin-Teller fermion about this point take the form,
\[\psi^{\mathrm{AT-F}}\big{(}z\big{)}+\nu^{-1}\lambda\psi^{\mathrm{AT-\bar{F}}}\big{(}z\big{)}=\nu^{-1}\psi^{\mathrm{AT-F}}\big{(}z^{\prime}\big{)}+\lambda\psi^{\mathrm{AT-\bar{F}}}\big{(}z^{\prime}\big{)}\ \,\] \[\psi^{\mathrm{AT-F}}\big{(}z\big{)}+\nu\lambda^{-1}\psi^{\mathrm{AT-\bar{F}}}\big{(}z\big{)}=\nu\psi^{\mathrm{AT-F}}\big{(}z^{\prime\prime}\big{)}+\lambda^{-1}\psi^{\mathrm{AT-\bar{F}}}\big{(}z^{\prime\prime}\big{)}\ \,\] \[\psi^{\mathrm{AT-F}}\big{(}z+1\big{)}+\nu\lambda^{3}\psi^{\mathrm{AT-\bar{F}}}\big{(}z+1\big{)}=\nu\psi^{\mathrm{AT-F}}\big{(}z^{\prime}\big{)}+\lambda^{3}\psi^{\mathrm{AT-\bar{F}}}\big{(}z^{\prime}\big{)}\ \,\] \[\psi^{\mathrm{AT-F}}\big{(}z+1\big{)}+\nu^{-1}\lambda^{-3}\psi^{\mathrm{AT-\bar{F}}}\big{(}z+1\big{)}=\nu^{-1}\psi^{\mathrm{AT-F}}\big{(}z^{\prime\prime}\big{)}+\lambda^{-3}\psi^{\mathrm{AT-\bar{F}}}\big{(}z^{\prime\prime}\big{)}\ \,\]
where the points in the above system of equations satisfy,
\[z^{\prime}=\big{\{}z\in{\bf C}\ \big{|}\ -45\ {\rm degree\ rotation\ from\ N}\big{\}}\ \,\] \[z^{\prime\prime}=\big{\{}z\in{\bf C}\ \big{|}\ 225\ {\rm degree\ rotation\ from\ N}\big{\}}\ \,\] \[z+1=\big{\{}z\in{\bf C}\ \big{|}\ +1\ {\rm translation\ of\ }z\ {\rm down}\big{\}}\ \,\]
Hence, the above relations from the system imply,
\[\nu^{-1}\psi^{\rm AT-F}\big{(}z^{\prime}\big{)}+\lambda\psi^{\rm AT-\bar{F}} \big{(}z^{\prime}\big{)}-\nu^{-1}\lambda\psi^{\rm AT-\bar{F}}\big{(}z\big{)} \equiv\nu\psi^{\rm AT-F}\big{(}z^{\prime\prime}\big{)}+\lambda^{-1}\psi^{\rm AT -\bar{F}}\big{(}z^{\prime\prime}\big{)}-\nu\lambda^{-1}\psi^{\rm AT-\bar{F}} \big{(}z\big{)}\]
\[\nu^{-1}\psi^{\rm AT-F}\big{(}z^{\prime}\big{)}+\lambda\psi^{\rm AT-\bar{F}} \big{(}z^{\prime}\big{)}-\nu\psi^{\rm AT-F}\big{(}z^{\prime\prime}\big{)}- \lambda^{-1}\psi^{\rm AT-\bar{F}}\big{(}z^{\prime\prime}\big{)}=\big{(}\nu^{-1} \lambda-\nu\lambda^{-1}\big{)}\psi^{\rm AT-\bar{F}}\big{(}z\big{)}\]
\[\frac{1}{\big{(}\nu^{-1}\lambda-\nu\lambda^{-1}\big{)}}\bigg{(}\nu^{-1}\psi^{\mathrm{AT-F}}\big{(}z^{\prime}\big{)}+\lambda\psi^{\mathrm{AT-\bar{F}}}\big{(}z^{\prime}\big{)}-\nu\psi^{\mathrm{AT-F}}\big{(}z^{\prime\prime}\big{)}-\lambda^{-1}\psi^{\mathrm{AT-\bar{F}}}\big{(}z^{\prime\prime}\big{)}\bigg{)}=\psi^{\mathrm{AT-\bar{F}}}\big{(}z\big{)}\ \,\]
from the first and second equations of the system, and,
\[\nu\psi^{\mathrm{AT-F}}\big{(}z^{\prime}\big{)}+\lambda^{3}\psi^{\mathrm{AT-\bar{F}}}\big{(}z^{\prime}\big{)}-\nu^{-1}\psi^{\mathrm{AT-F}}\big{(}z^{\prime\prime}\big{)}-\lambda^{-3}\psi^{\mathrm{AT-\bar{F}}}\big{(}z^{\prime\prime}\big{)}=\big{(}\nu\lambda^{3}-\nu^{-1}\lambda^{-3}\big{)}\psi^{\mathrm{AT-\bar{F}}}\big{(}z+1\big{)}\ \,\]
from the third and fourth equations of the system. Further rearranging the second expression above yields an expression for \(\psi^{\mathrm{AT-\bar{F}}}\big{(}z\big{)}\), which takes the form,

\[\frac{1}{\big{(}\nu\lambda^{3}-\nu^{-1}\lambda^{-3}\big{)}}\bigg{(}\nu\psi^{\mathrm{AT-F}}\big{(}z^{\prime}-1\big{)}+\lambda^{3}\psi^{\mathrm{AT-\bar{F}}}\big{(}z^{\prime}-1\big{)}-\nu^{-1}\psi^{\mathrm{AT-F}}\big{(}z^{\prime\prime}-1\big{)}-\lambda^{-3}\psi^{\mathrm{AT-\bar{F}}}\big{(}z^{\prime\prime}-1\big{)}\bigg{)}=\psi^{\mathrm{AT-\bar{F}}}\big{(}z\big{)}\ \.\]
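The elimination above is a two-by-two linear solve in the unknowns \(\psi^{\mathrm{AT-F}}\big{(}z\big{)}\) and \(\psi^{\mathrm{AT-\bar{F}}}\big{(}z\big{)}\). A minimal symbolic sketch of this solve (not from the source; the symbols below are generic stand-ins for the operators and for the right-hand sides built from \(z^{\prime}\), \(z^{\prime\prime}\)) confirms the displayed coefficient:

```python
# Minimal sketch (assumptions flagged in the lead-in): eliminate psi-bar(z)
# from the first two s-holomorphicity relations with sympy, and check that
# the coefficient 1/(nu^{-1}*lam - nu*lam^{-1}) from the text is reproduced.
import sympy as sp

nu, lam = sp.symbols('nu lam', nonzero=True)
psi_z, psibar_z = sp.symbols('psi_z psibar_z')   # stand-ins for psi(z), psi-bar(z)
A1, A2 = sp.symbols('A1 A2')                     # right-hand sides from z', z''

eq1 = sp.Eq(psi_z + nu**-1 * lam * psibar_z, A1)
eq2 = sp.Eq(psi_z + nu * lam**-1 * psibar_z, A2)

sol = sp.solve([eq1, eq2], [psi_z, psibar_z], dict=True)[0]
# psi-bar(z) = (A1 - A2)/(nu^{-1} lam - nu lam^{-1}), as displayed above:
print(sp.simplify(sol[psibar_z] - (A1 - A2)/(nu**-1*lam - nu*lam**-1)))  # 0
```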
Furthermore, each of the expressions above can be extended to points lying along the horizontal line \(\mathbf{I}_{\frac{1}{2}}^{*}\), or along the vertical line \(\mathbf{J}_{\frac{1}{2}}^{*}\), in which,
\[\frac{1}{\big{(}\nu\lambda^{3}-\nu^{-1}\lambda^{-3}\big{)}}\bigg{(}\nu\psi^{\mathrm{AT-F}}\big{(}z^{\prime}-\frac{1}{2}\big{)}+\lambda^{3}\psi^{\mathrm{AT-\bar{F}}}\big{(}z^{\prime}-\frac{1}{2}\big{)}-\nu^{-1}\psi^{\mathrm{AT-F}}\big{(}z^{\prime\prime}-\frac{1}{2}\big{)}-\lambda^{-3}\psi^{\mathrm{AT-\bar{F}}}\big{(}z^{\prime\prime}-\frac{1}{2}\big{)}\bigg{)}=\psi^{\mathrm{AT-\bar{F}}}\big{(}z-\frac{1}{2}\big{)}\ \,\]
and,
\[\frac{1}{\big{(}\nu\lambda^{3}-\nu^{-1}\lambda^{-3}\big{)}}\bigg{(}\nu\psi^{\mathrm{AT-F}}\big{(}z^{\prime}+\frac{1}{2}\big{)}+\lambda^{3}\psi^{\mathrm{AT-\bar{F}}}\big{(}z^{\prime}+\frac{1}{2}\big{)}-\nu^{-1}\psi^{\mathrm{AT-F}}\big{(}z^{\prime\prime}+\frac{1}{2}\big{)}-\lambda^{-3}\psi^{\mathrm{AT-\bar{F}}}\big{(}z^{\prime\prime}+\frac{1}{2}\big{)}\bigg{)}=\psi^{\mathrm{AT-\bar{F}}}\big{(}z+\frac{1}{2}\big{)}\ \,\]
corresponding to points which lie on the \(\frac{1}{2}\) interval in the horizontal direction,
\[\frac{1}{\big{(}\nu\lambda^{3}-\nu^{-1}\lambda^{-3}\big{)}}\bigg{(}\nu\psi^{\mathrm{AT-F}}\big{(}z^{\prime}-\frac{i}{2}\big{)}+\lambda^{3}\psi^{\mathrm{AT-\bar{F}}}\big{(}z^{\prime}-\frac{i}{2}\big{)}-\nu^{-1}\psi^{\mathrm{AT-F}}\big{(}z^{\prime\prime}-\frac{i}{2}\big{)}-\lambda^{-3}\psi^{\mathrm{AT-\bar{F}}}\big{(}z^{\prime\prime}-\frac{i}{2}\big{)}\bigg{)}=\psi^{\mathrm{AT-\bar{F}}}\big{(}z-\frac{i}{2}\big{)}\ \,\]
and,
\[\frac{1}{\big{(}\nu\lambda^{3}-\nu^{-1}\lambda^{-3}\big{)}}\bigg{(}\nu\psi^{\mathrm{AT-F}}\big{(}z^{\prime}+\frac{i}{2}\big{)}+\lambda^{3}\psi^{\mathrm{AT-\bar{F}}}\big{(}z^{\prime}+\frac{i}{2}\big{)}-\nu^{-1}\psi^{\mathrm{AT-F}}\big{(}z^{\prime\prime}+\frac{i}{2}\big{)}-\lambda^{-3}\psi^{\mathrm{AT-\bar{F}}}\big{(}z^{\prime\prime}+\frac{i}{2}\big{)}\bigg{)}=\psi^{\mathrm{AT-\bar{F}}}\big{(}z+\frac{i}{2}\big{)}\ \,\]
corresponding to points which lie on the \(\frac{1}{2}\) interval in the vertical direction. From each possibility listed above, it is straightforward to verify that an application of \(J\) to each equation yields,
\[\frac{J}{\big{(}\bar{\nu}^{-1}\bar{\lambda}-\bar{\nu}\bar{\lambda}^{-1}\big{)}}\bigg{(}\bar{\nu}^{-1}\psi^{\mathrm{AT-F}}\big{(}z^{\prime}\big{)}+\bar{\lambda}\psi^{\mathrm{AT-\bar{F}}}\big{(}z^{\prime}\big{)}-\bar{\nu}\psi^{\mathrm{AT-F}}\big{(}z^{\prime\prime}\big{)}-\bar{\lambda}^{-1}\psi^{\mathrm{AT-\bar{F}}}\big{(}z^{\prime\prime}\big{)}\bigg{)}=J\bigg{(}\psi^{\mathrm{AT-\bar{F}}}\big{(}z\big{)}\bigg{)}\]

\[\frac{1}{\big{(}\nu^{-1}\lambda-\nu\lambda^{-1}\big{)}}\bigg{(}\nu^{-1}\psi^{\mathrm{AT-\bar{F}}}\big{(}z^{\prime}\big{)}+\lambda\psi^{\mathrm{AT-F}}\big{(}z^{\prime}\big{)}-\nu\psi^{\mathrm{AT-\bar{F}}}\big{(}z^{\prime\prime}\big{)}-\lambda^{-1}\psi^{\mathrm{AT-F}}\big{(}z^{\prime\prime}\big{)}\bigg{)}=\psi^{\mathrm{AT-F}}\big{(}z\big{)}\ \,\]
corresponding to the expression obtained from the first and second s-holomorphicity equations for \(\psi^{\rm AT-F}\big{(}z\big{)}\),
\[\frac{J}{\big{(}\bar{\nu}\bar{\lambda}^{3}-\bar{\nu}^{-1}\bar{\lambda}^{-3}\big{)}}\bigg{(}\bar{\nu}\psi^{\mathrm{AT-F}}\big{(}z^{\prime}\big{)}+\bar{\lambda}^{3}\psi^{\mathrm{AT-\bar{F}}}\big{(}z^{\prime}\big{)}-\bar{\nu}^{-1}\psi^{\mathrm{AT-F}}\big{(}z^{\prime\prime}\big{)}-\bar{\lambda}^{-3}\psi^{\mathrm{AT-\bar{F}}}\big{(}z^{\prime\prime}\big{)}\bigg{)}=\psi^{\mathrm{AT-F}}\big{(}z+1\big{)}\ \,\]
corresponding to the expression obtained from the third and fourth s-holomorphicity equations for \(\psi^{\mathrm{AT-F}}\big{(}z+1\big{)}\),
\[\frac{J}{\big{(}\nu\lambda^{3}-\nu^{-1}\lambda^{-3}\big{)}}\bigg{(}\nu\psi^{\mathrm{AT-F}}\big{(}z^{\prime}-1\big{)}+\lambda^{3}\psi^{\mathrm{AT-\bar{F}}}\big{(}z^{\prime}-1\big{)}-\nu^{-1}\psi^{\mathrm{AT-F}}\big{(}z^{\prime\prime}-1\big{)}-\lambda^{-3}\psi^{\mathrm{AT-\bar{F}}}\big{(}z^{\prime\prime}-1\big{)}\bigg{)}=J\bigg{(}\psi^{\mathrm{AT-\bar{F}}}\big{(}z\big{)}\bigg{)}\]

\[\frac{1}{\big{(}\bar{\nu}\bar{\lambda}^{3}-\bar{\nu}^{-1}\bar{\lambda}^{-3}\big{)}}\bigg{(}\bar{\nu}\psi^{\mathrm{AT-\bar{F}}}\big{(}z^{\prime}-1\big{)}+\bar{\lambda}^{3}\psi^{\mathrm{AT-F}}\big{(}z^{\prime}-1\big{)}-\bar{\nu}^{-1}\psi^{\mathrm{AT-\bar{F}}}\big{(}z^{\prime\prime}-1\big{)}-\bar{\lambda}^{-3}\psi^{\mathrm{AT-F}}\big{(}z^{\prime\prime}-1\big{)}\bigg{)}=\psi^{\mathrm{AT-F}}\big{(}z\big{)}\ \,\]
corresponding to the expression obtained from the third and fourth s-holomorphicity equations for \(\psi^{\mathrm{AT-\bar{F}}}\big{(}z-1\big{)}\),
\[\frac{J}{\big{(}\nu\lambda^{3}-\nu^{-1}\lambda^{-3}\big{)}}\bigg{(}\nu\psi^{\mathrm{AT-F}}\big{(}z^{\prime}-\frac{1}{2}\big{)}+\lambda^{3}\psi^{\mathrm{AT-\bar{F}}}\big{(}z^{\prime}-\frac{1}{2}\big{)}-\nu^{-1}\psi^{\mathrm{AT-F}}\big{(}z^{\prime\prime}-\frac{1}{2}\big{)}-\lambda^{-3}\psi^{\mathrm{AT-\bar{F}}}\big{(}z^{\prime\prime}-\frac{1}{2}\big{)}\bigg{)}=J\bigg{(}\psi^{\mathrm{AT-\bar{F}}}\big{(}z-\frac{1}{2}\big{)}\bigg{)}\]

\[\frac{1}{\big{(}\bar{\nu}\bar{\lambda}^{3}-\bar{\nu}^{-1}\bar{\lambda}^{-3}\big{)}}\bigg{(}\bar{\nu}\psi^{\mathrm{AT-\bar{F}}}\big{(}z^{\prime}-\frac{1}{2}\big{)}+\bar{\lambda}^{3}\psi^{\mathrm{AT-F}}\big{(}z^{\prime}-\frac{1}{2}\big{)}-\bar{\nu}^{-1}\psi^{\mathrm{AT-\bar{F}}}\big{(}z^{\prime\prime}-\frac{1}{2}\big{)}-\bar{\lambda}^{-3}\psi^{\mathrm{AT-F}}\big{(}z^{\prime\prime}-\frac{1}{2}\big{)}\bigg{)}=\psi^{\mathrm{AT-F}}\big{(}z-\frac{1}{2}\big{)}\ \,\]
corresponding to the expression obtained from the third and fourth s-holomorphicity equations for \(\psi^{\mathrm{AT-\bar{F}}}\big{(}z-\frac{1}{2}\big{)}\),
\[\frac{J}{\big{(}\nu\lambda^{3}-\nu^{-1}\lambda^{-3}\big{)}}\bigg{(}\nu\psi^{\mathrm{AT-F}}\big{(}z^{\prime}+\frac{1}{2}\big{)}+\lambda^{3}\psi^{\mathrm{AT-\bar{F}}}\big{(}z^{\prime}+\frac{1}{2}\big{)}-\nu^{-1}\psi^{\mathrm{AT-F}}\big{(}z^{\prime\prime}+\frac{1}{2}\big{)}-\lambda^{-3}\psi^{\mathrm{AT-\bar{F}}}\big{(}z^{\prime\prime}+\frac{1}{2}\big{)}\bigg{)}=J\bigg{(}\psi^{\mathrm{AT-\bar{F}}}\big{(}z+\frac{1}{2}\big{)}\bigg{)}\]

\[\frac{1}{\big{(}\bar{\nu}\bar{\lambda}^{3}-\bar{\nu}^{-1}\bar{\lambda}^{-3}\big{)}}\bigg{(}\bar{\nu}\psi^{\mathrm{AT-\bar{F}}}\big{(}z^{\prime}+\frac{1}{2}\big{)}+\bar{\lambda}^{3}\psi^{\mathrm{AT-F}}\big{(}z^{\prime}+\frac{1}{2}\big{)}-\bar{\nu}^{-1}\psi^{\mathrm{AT-\bar{F}}}\big{(}z^{\prime\prime}+\frac{1}{2}\big{)}-\bar{\lambda}^{-3}\psi^{\mathrm{AT-F}}\big{(}z^{\prime\prime}+\frac{1}{2}\big{)}\bigg{)}=\psi^{\mathrm{AT-F}}\big{(}z+\frac{1}{2}\big{)}\ \,\]
corresponding to the expression obtained from the third and fourth s-holomorphicity equations for \(\psi^{\mathrm{AT-\bar{F}}}\big{(}z+\frac{1}{2}\big{)}\),
\[\frac{J}{\big{(}\nu\lambda^{3}-\nu^{-1}\lambda^{-3}\big{)}}\bigg{(}\nu\psi^{\mathrm{AT-F}}\big{(}z^{\prime}+\frac{i}{2}\big{)}+\lambda^{3}\psi^{\mathrm{AT-\bar{F}}}\big{(}z^{\prime}+\frac{i}{2}\big{)}-\nu^{-1}\psi^{\mathrm{AT-F}}\big{(}z^{\prime\prime}+\frac{i}{2}\big{)}-\lambda^{-3}\psi^{\mathrm{AT-\bar{F}}}\big{(}z^{\prime\prime}+\frac{i}{2}\big{)}\bigg{)}=J\bigg{(}\psi^{\mathrm{AT-\bar{F}}}\big{(}z+\frac{i}{2}\big{)}\bigg{)}\]

\[\frac{1}{\big{(}\bar{\nu}\bar{\lambda}^{3}-\bar{\nu}^{-1}\bar{\lambda}^{-3}\big{)}}\bigg{(}\bar{\nu}\psi^{\mathrm{AT-\bar{F}}}\big{(}z^{\prime}+\frac{i}{2}\big{)}+\bar{\lambda}^{3}\psi^{\mathrm{AT-F}}\big{(}z^{\prime}+\frac{i}{2}\big{)}-\bar{\nu}^{-1}\psi^{\mathrm{AT-\bar{F}}}\big{(}z^{\prime\prime}+\frac{i}{2}\big{)}-\bar{\lambda}^{-3}\psi^{\mathrm{AT-F}}\big{(}z^{\prime\prime}+\frac{i}{2}\big{)}\bigg{)}=\psi^{\mathrm{AT-F}}\big{(}z+\frac{i}{2}\big{)}\ \,\]
corresponding to the expression obtained from the third and fourth s-holomorphicity equations for \(\psi^{\mathrm{AT-\bar{F}}}\big{(}z+\frac{i}{2}\big{)}\). We conclude the argument.
#### 2.4.2 Quantum correspondence with the Loop model
We define the following. Over a finite volume of the hexagonal lattice which can be expressed with a Cartesian product of rows and columns, \(\mathscr{I}_{\mathbf{H}}\equiv\mathscr{I}\) and \(\mathscr{J}_{\mathbf{H}}\equiv\mathscr{J}\), which is embedded into the complex plane, introduce,
\[\mathscr{P}_{\Lambda_{\mathbf{H}}}^{\mathrm{loop},\xi}\big{[}\sigma\big{]}= \frac{\exp\!\bigg{[}\frac{1}{2}\big{|}\!\log\!x\big{|}\big{(}hr\big{(}\sigma \big{)}+\log\!\big{(}x^{e(\sigma)}\big{)}\big{)}\bigg{]}}{\mathscr{Z}_{ \Lambda_{\mathbf{H}}}^{\mathrm{loop},\xi}\big{(}\sigma\big{)}}\ \,\]
corresponding to the high temperature expansion of the loop O(1) measure, whose the partition function,
\[\mathscr{Z}_{\Lambda_{\mathbf{H}}}^{\mathrm{loop},\xi}\big{(}\sigma\big{)}\equiv\sum_{\begin{subarray}{c}\sigma\in\{\pm 1\}^{\mathscr{I}\times\mathscr{J}}\\ e\in\Lambda_{\mathbf{H}}\\ \sigma_{\partial\Lambda_{\mathbf{H}}}\equiv+1\end{subarray}}\exp\bigg{[}\frac{1}{2}\big{|}\mathrm{log}x\big{|}\big{(}hr\big{(}\sigma\big{)}+\mathrm{log}\big{(}x^{e(\sigma)}\big{)}\big{)}\bigg{]}\ \,\]
can be placed into correspondence with the state,
\[\mathscr{Z}_{\Lambda_{\mathbf{H}}}^{\mathrm{loop},\xi}\big{(}\sigma\big{)} \equiv\mathscr{Z}_{\Lambda_{\mathbf{H}}}^{\mathrm{loop},+}\big{(}\sigma\big{)} \longleftrightarrow\langle f|\,V^{\mathrm{loop},N}\,|i\rangle\ \,\]
under \(+\) boundary conditions \(\xi\equiv+\). Along the lines of the discussion provided in the previous subsection for the spin inversion operation in the Ashkin-Teller model, there exists a spin inversion operator for the high-temperature expansion of the O(1) model, from which,
\[\hat{\sigma_{j}}e_{\sigma}\equiv\sigma_{j}e_{\sigma}\ \,\]
for the operation from the loop state space into itself,
\[\hat{\sigma_{j}}:\mathcal{S}^{\mathrm{loop}}\longrightarrow\mathcal{S}^{ \mathrm{loop}}\ \,\]
at site \(j\), and,
\[i \equiv\big{(}V^{\mathrm{loop},h}\big{)}^{\frac{1}{2}}e_{(+)}\ \,\] \[f \equiv i \equiv\big{(}V^{\mathrm{loop},h}\big{)}^{\frac{1}{2}}e_{(+)}\ \.\]
Furthermore, the expectation of a single spin with respect to \(\mathscr{P}_{\Lambda_{\mathbf{H}}}^{\mathrm{loop},\xi}\big{(}\cdot\,\big{)}\) takes the form,
\[\mathscr{E}_{\Lambda_{\mathbf{H}}}^{\mathrm{loop},\xi}\big{(}\ \sigma_{z}\ \big{)}=\sum_{ \begin{subarray}{c}\sigma\in\{\pm 1\}^{\mathscr{I}\times\mathscr{J}}\end{subarray}} \sigma_{z}\ \mathrm{d}\mathscr{P}_{\Lambda_{\mathbf{H}}}^{\mathrm{loop},\xi}\big{(}\sigma_{ z}\big{)}=\sum_{\begin{subarray}{c}\sigma\in\{\pm 1\}^{\mathscr{I}\times\mathscr{J}} \end{subarray}}\sigma_{z}\ \mathrm{d}\bigg{[}\frac{\exp\Big{[}\frac{1}{2}\big{|}\mathrm{ log}x\big{|}\big{(}hr\big{(}\sigma\big{)}+\log\big{(}x^{e(\sigma)}\big{)}\big{)}\Big{]}}{ \mathscr{Z}_{\Lambda_{\mathbf{H}}}^{\mathrm{loop},\xi}\big{(}\sigma\big{)}} \bigg{]}\ \.\]
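For concreteness, a brute-force numerical sketch of such an expectation on a tiny volume follows. This is illustrative only: the source does not fix \(r\big{(}\sigma\big{)}\) or \(e\big{(}\sigma\big{)}\) at this point, so the sketch assumes, purely for illustration, that \(r\big{(}\sigma\big{)}\) counts \(-1\) spins and \(e\big{(}\sigma\big{)}\) counts disagreeing nearest-neighbour pairs, and it omits the \(+\) boundary conditioning for brevity.

```python
# Minimal sketch (assumptions flagged in the lead-in): brute-force evaluation
# of a single-spin expectation under a Gibbs weight of the displayed form
# exp[(1/2)|log x| (h*r(sigma) + log(x^{e(sigma)}))] on a 2x2 volume.
import itertools, math

x, h = 0.5, 0.2
sites = [(i, j) for i in range(2) for j in range(2)]
edges = [(s, t) for s in sites for t in sites
         if s < t and abs(s[0]-t[0]) + abs(s[1]-t[1]) == 1]

def weight(spin):
    r = sum(1 for s in sites if spin[s] == -1)            # assumed r(sigma)
    e = sum(1 for s, t in edges if spin[s] != spin[t])    # assumed e(sigma)
    return math.exp(0.5 * abs(math.log(x)) * (h*r + e*math.log(x)))

Z = num = 0.0
for vals in itertools.product([+1, -1], repeat=len(sites)):
    spin = dict(zip(sites, vals))
    w = weight(spin)
    Z += w
    num += spin[(0, 0)] * w
print('E[sigma_(0,0)] =', num / Z)   # partition-normalized expectation
```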
Similarly, with respect to multiple spins at countably many sites in the complex plane,
\[\mathscr{E}_{\Lambda_{\mathbf{H}}}^{\mathrm{loop},\xi}\Big{(} \prod_{z_{i}\in\mathbf{C}}\sigma_{z_{i}}\Big{)}=\sum_{\begin{subarray}{c}\sigma \in\{\pm 1\}^{\mathscr{I}\times\mathscr{J}}\\ \sigma\partial_{\Lambda_{\mathbf{H}}}\equiv+1\end{subarray}}\biggl{(}\prod_{z _{i}\in\mathbf{C}}\sigma_{z_{i}}\biggr{)}\ \mathrm{d}\mathscr{P}_{\Lambda_{\mathbf{H}}}^{\mathrm{loop},\xi}\big{(}\sigma _{z}\big{)}=\sum_{\begin{subarray}{c}\sigma\in\{\pm 1\}^{\mathscr{I}\times \mathscr{J}}\\ \sigma\partial_{\Lambda_{\mathbf{H}}}\equiv+1\end{subarray}}\biggl{(}\prod_{z _{i}\in\mathbf{C}}\sigma_{z_{i}}\ \biggr{)}\times\cdots\] \[\mathrm{d}\bigg{[}\frac{\exp\bigl{[}\frac{1}{2}\big{|}\mathrm{ log}x\big{|}\big{(}hr\big{(}\sigma\big{)}+\log\big{(}x^{e(\sigma)}\big{)}\big{)}\big{]}}{ \mathscr{Z}_{\Lambda_{\mathbf{H}}}^{\mathrm{loop},\xi}\big{(}\sigma\big{)}} \bigg{]}\ \.\]
Under \(+\) boundary conditions, the expectation over the product of spins is equivalent to,
\[\frac{\langle e_{(+)}|\prod\limits_{z_{i}\in\mathbf{C}}\sigma_{z_{i}}\,|e_{(+)}\rangle}{\langle e_{(+)}|\,V^{\mathrm{loop},N}\,|e_{(+)}\rangle}\ \,\]

for countably many \(z_{i}\).
From the loop generators \(p_{u}^{\mathrm{loop}}\) and \(q_{u}^{\mathrm{loop}}\), relations similar to those satisfied by the Ashkin-Teller generators hold, with the basis spanned by the elements,
\[\psi^{\mathrm{loop}}\big{(}k+iy\big{)}=V^{\mathrm{loop},-y}\psi_{k}V^{\mathrm{loop},y}\ \,\] \[\bar{\psi}^{\mathrm{loop}}\big{(}k+iy\big{)}=V^{\mathrm{loop},-y}\bar{\psi}_{k}V^{\mathrm{loop},y}\ \,\]
for \(k\in\mathbf{I}^{**}\) and \(y\in\mathbf{J}\).
From the complexification procedure for the loop model, from the tuple \(\big{(}\psi^{\mathrm{loop}},\psi^{\widetilde{\mathrm{loop}}}\big{)}\), under the loop induced rotation,
\[R\big{(}\psi^{\mathrm{loop}}\big{(}z\big{)}\big{)}=\psi^{\widetilde{\mathrm{loop}}}\big{(}\bar{r}\big{(}z\big{)}\big{)}\ \,\] \[R\big{(}\psi^{\mathrm{loop}}\big{(}\bar{z}\big{)}\big{)}=\psi^{\widetilde{\mathrm{loop}}}\big{(}r\big{(}z\big{)}\big{)}\ \,\]
in which the image of the basis elements under \(R\) satisfy the relations above. Similarly, under the other induced rotation \(J\),
\[J\big{(}\psi^{\mathrm{loop}}\big{(}z\big{)}\big{)}=\psi^{\widetilde{\mathrm{loop}}}\big{(}z\big{)}\ \,\] \[J\big{(}\psi^{\widetilde{\mathrm{loop}}}\big{(}z\big{)}\big{)}=\psi^{\mathrm{loop}}\big{(}z\big{)}\ \.\]
With properties of the loop measure defined in this subsection, and relations that the loop generators satisfy with respect to the induced rotations \(R\) and \(J\), below we introduce the massive s-holomorphicity result for the loop fermion.
**Theorem \(3^{*}\)** (_massive s-holomorphicity for loop fermions_).: Fix \(z\in\mathbf{I}^{**}\times\mathbf{J}\), and the same parameters \(\nu\) and \(\lambda\) provided in **Definition 7**, and **Definition 8**. For the loop fermion operators \(\psi^{\mathrm{loop-F}}\big{(}z\big{)}\) and \(\psi^{\mathrm{loop-\bar{F}}}\big{(}z\big{)}\), there exist extensions of \(\psi^{\mathrm{loop-F}}\), and of \(\psi^{\mathrm{loop-\bar{F}}}\), to \(\mathbf{I}^{**}\times\mathbf{J}\), such that,
\[\psi^{\mathrm{loop-F}}\big{(}z_{1}\big{)}+\bar{e}_{1}^{\,2s}\psi^{\mathrm{loop-\bar{F}}}\big{(}z_{1}\big{)}=\psi^{\mathrm{loop-F}}\big{(}z_{2}\big{)}+\bar{e}_{1}^{\,2s}\psi^{\mathrm{loop-\bar{F}}}\big{(}z_{2}\big{)}\ \,\] \[\psi^{\mathrm{loop-F}}\big{(}z_{2}\big{)}+\bar{e}_{2}^{\,2s}\psi^{\mathrm{loop-\bar{F}}}\big{(}z_{2}\big{)}=\psi^{\mathrm{loop-F}}\big{(}z_{3}\big{)}+\bar{e}_{2}^{\,2s}\psi^{\mathrm{loop-\bar{F}}}\big{(}z_{3}\big{)}\ \,\] \[\psi^{\mathrm{loop-F}}\big{(}z_{3}\big{)}+\bar{e}_{3}^{\,2s}\psi^{\mathrm{loop-\bar{F}}}\big{(}z_{3}\big{)}=\psi^{\mathrm{loop-F}}\big{(}z_{4}\big{)}+\bar{e}_{3}^{\,2s}\psi^{\mathrm{loop-\bar{F}}}\big{(}z_{4}\big{)}\ \,\] \[\psi^{\mathrm{loop-F}}\big{(}z_{4}\big{)}+\bar{e}_{4}^{\,2s}\psi^{\mathrm{loop-\bar{F}}}\big{(}z_{4}\big{)}=\psi^{\mathrm{loop-F}}\big{(}z_{1}\big{)}+\bar{e}_{4}^{\,2s}\psi^{\mathrm{loop-\bar{F}}}\big{(}z_{1}\big{)}\ \,\]
for any face of the complex plane with edges E,N,W,S. At the left, and right, boundary points of the finite volume, the fermion operators also satisfy,
\[\psi^{\mathrm{loop-F}}\big{(}a+iy\big{)}+i\psi^{\mathrm{loop-\bar{F}}}\big{(}a+iy\big{)}=0\ \,\] \[\psi^{\mathrm{loop-F}}\big{(}b+iy\big{)}+i\psi^{\mathrm{loop-\bar{F}}}\big{(}b+iy\big{)}=0\ \,\]
for all \(y\in\mathbf{J}^{**}\).
_Proof of Theorem 3\({}^{*}\)._ Appeal to arguments provided in **Theorem 3** of the previous subsection, by making use of s-holomorphicity equations provided for the loop fermion observable \(\psi^{\mathrm{loop-F}}\) above. \(\sqcap\)\(\sqcup\)
### Low temperature expansions of parafermionic observables
With the massive s-holomorphicity result for fermion operators of the previous section, in this section we define the parafermionic observable in the vanishing temperature limit, for parameters \(\alpha^{\mathrm{AT}}\equiv\exp\big{(}-2\beta\big{)}\), and \(\alpha^{\mathrm{loop}}\equiv\exp\big{(}-2\beta^{\mathrm{loop}}\big{)}\).
#### 2.5.1 Ashkin-Teller model
For the first model, as \(\beta\longrightarrow+\infty\) from below, introduce the following two observables related to the Ashkin-Teller parafermionic observable,
\[f_{a}^{\mathrm{AT},\uparrow}\big{(}z\big{)}=\frac{1}{\mathscr{C}\big{(}\mathscr{Z}_{\Lambda}^{\xi}\big{)}\mathscr{Z}_{\Lambda}^{\mathrm{low-temp},\xi}}\sum_{\gamma\in\mathcal{C}_{a}^{\mathrm{AT},\uparrow}(z)}\big{(}\alpha^{\mathrm{AT}}\big{)}^{L(\gamma_{i})}\mathrm{exp}\big{(}-i\sigma_{m}\theta_{\gamma}\big{(}r,\vec{r}\big{)}\big{)}s_{m}\big{(}r\big{)}\mu_{m}\big{(}r\big{)}\ \,\] \[f_{a}^{\mathrm{AT},\downarrow}\big{(}z\big{)}=\frac{1}{\mathscr{C}\big{(}\mathscr{Z}_{\Lambda}^{\xi}\big{)}\mathscr{Z}_{\Lambda}^{\mathrm{low-temp},\xi}}\sum_{\gamma\in\mathcal{C}_{a}^{\mathrm{AT},\downarrow}(z)}\big{(}\alpha^{\mathrm{AT}}\big{)}^{L(\gamma_{i})}\mathrm{exp}\big{(}-i\sigma_{m}\theta_{\gamma}\big{(}r,\vec{r}\big{)}\big{)}s_{m}\big{(}r\big{)}\mu_{m}\big{(}r\big{)}\ \,\]
for:
* (1): A horizontal edge \(a\in\mathscr{I}^{*}\times\mathscr{J}\),
* (2): a rectangle \(\mathscr{I}\times\mathscr{J}\),
* (3): the dual lattice \(\big{(}\mathbf{Z}^{2}\big{)}^{*}=\mathbf{Z}^{2}+\big{(}\frac{1}{2},\frac{1}{2} \big{)}\),
* (4): a subset \(\bigg{(}\mathscr{I}^{*}\times\mathscr{J}^{*}\bigg{)}\subsetneq\bigg{(} \mathbf{Z}^{2}+\big{(}\frac{1}{2},\frac{1}{2}\big{)}\bigg{)}\),
* (5): a set \(\mathcal{C}_{a}^{\mathrm{AT},\uparrow}\big{(}z\big{)}\), consisting of paths such that for any face of the dual lattice excluding points \(a+\frac{i}{2}\), the parity, ie the number of edges of the path, adjacent to the face is even,
* (6): a set \(\mathcal{C}_{a}^{\mathrm{AT},\downarrow}\big{(}z\big{)}\), consisting of paths such that for any face of the dual lattice excluding points \(a+\frac{i}{2}\), the parity, ie the number of edges of the path, adjacent to the face is odd,
* (7): a strictly positive constant \(\mathscr{C}\) so that \(\mathscr{C}\big{(}\mathscr{Z}_{\Lambda}^{\xi}\big{)}\mathscr{Z}_{\Lambda}^{ \mathrm{low-temp},\xi}\propto\mathscr{Z}_{\Lambda}^{\xi}\).
**Definition 11** (_discrete residues and massive s-holomorphicity_, **Definition 20**, [13]).: For a horizontal edge \(a\), there exists a complex valued, massive s-holomorphic function \(f\), such that, for \(z\neq a\), over the nonempty collection of faces of the lattice containing \(a+\frac{i}{2}\), the residue of \(f\) is given by \(\frac{i}{2\pi}\big{(}f^{\mathrm{front}}\big{(}a\big{)}-f^{\mathrm{back}}\big{(}a \big{)}\big{)}\). The function \(f\) can be extended so that it is massive s-holomorphic if \(f^{\mathrm{front}}\big{(}\cdot\big{)}\) is extended to \(a+\frac{i}{2}\), while \(f^{\mathrm{back}}\big{(}\cdot\big{)}\) is extended to \(a-\frac{i}{2}\).
**Theorem 4** (_convergence of fermion two-point correlations to the Ashkin-Teller parafermionic observables defined at the beginning of 2.5.1_). Over \(\mathscr{I}\times\mathscr{J}\), one has,
\[<\psi^{\mathrm{AT-F}}\big{(}z\big{)}\psi^{\mathrm{AT-F}}\big{(}a\big{)}>_{\mathscr{I}\times\mathscr{J}}=-f_{a}^{\mathrm{AT},\uparrow}\big{(}z\big{)}+if_{a}^{\mathrm{AT},\downarrow}\big{(}z\big{)}\ \,\] \[<\psi^{\mathrm{AT-F}}\big{(}z\big{)}\psi^{\mathrm{AT-\bar{F}}}\big{(}a\big{)}>_{\mathscr{I}\times\mathscr{J}}=f_{a}^{\mathrm{AT},\uparrow}\big{(}z\big{)}+if_{a}^{\mathrm{AT},\downarrow}\big{(}z\big{)}\ \,\] \[<\psi^{\mathrm{AT-\bar{F}}}\big{(}z\big{)}\psi^{\mathrm{AT-\bar{F}}}\big{(}a\big{)}>_{\mathscr{I}\times\mathscr{J}}=-f_{a}^{\mathrm{AT},\uparrow}\big{(}z\big{)}-if_{a}^{\mathrm{AT},\downarrow}\big{(}z\big{)}\ \.\]
_Proof of Theorem 4._ Fix \(z=x+iy\) and \(z^{\prime}=x^{\prime}+iy^{\prime}\). By the massive s-holomorphicity of \(f_{a}^{\mathrm{AT,\uparrow}}\big{(}\cdot\big{)}\), and of \(f_{a}^{\mathrm{AT,\downarrow}}\big{(}\cdot\big{)}\), compute the expansion of the following state from a product of transfer matrices,
\[\langle e_{(+)}|\,V^{\mathrm{AT},N-y}\psi_{x}^{\mathrm{AT-F}}V^{\mathrm{AT},y-y^{\prime}}\psi_{x^{\prime}}^{\mathrm{AT-F}}V^{\mathrm{AT},y^{\prime}}\,|e_{(+)}\rangle\ \,\]
for \(x\neq x^{\prime}\). Before performing the computation, observe,
\[\langle e_{(+)}|\,V^{\mathrm{AT},N-y}\psi_{x}^{\mathrm{AT-F}}V^{\mathrm{AT},y-y^{\prime}}\psi_{x^{\prime}}^{\mathrm{AT-F}}V^{\mathrm{AT},y^{\prime}}\,|e_{(+)}\rangle\neq 0\Longleftrightarrow\exists\ \phi_{i}\ \text{such that}\ \tau=\prod_{i<x^{\prime}}\big{(}-1\big{)}^{i}\hat{\sigma}_{i}\ \,\text{ or }\ \tau=\prod_{i>x^{\prime}}\big{(}-1\big{)}^{i}\hat{\sigma}_{i}\ \,\]
from which we write,
\[\langle e_{(+)}|\,V^{\mathrm{AT,N-}y}\psi_{x}^{\mathrm{AT-F}}V^{ \mathrm{AT,y-y^{\prime}}}\psi^{\mathrm{AT-F}_{x^{\prime}}}V^{\mathrm{AT,y^{ \prime}}}\,|e_{(+)}\rangle\propto\sum_{\sigma}\bigg{(}\ V_{(+),\sigma^{(N-1)}} ^{\mathrm{AT}}\ \big{(}\prod_{N-1\leq i\leq y}V_{\sigma(i)}^{\mathrm{AT}}\ \big{)}\ V_{\sigma(y+1),\tau(y)}^{ \mathrm{AT}}\ \times\cdots\] \[\big{(}\psi_{x}\big{)}_{\tau(y),\sigma(y)}\ \big{(}\prod_{y\leq i^{\prime}\leq y ^{\prime}}V_{\sigma(i^{\prime})}^{\mathrm{AT}}\ \big{)}\ \big{(}\bar{\psi}_{x}\big{)}_{\tau(y),\sigma(y^{\prime})}\ V_{\sigma(1),(+)}^{ \mathrm{AT}}\ \bigg{)}\enspace.\]
Proceeding with the computation, write the following expression for the entries of the Ashkin-Teller transfer matrix, in which,
\[V^{\mathrm{AT}}\equiv\big{(}V^{\mathrm{AT,h}}\big{)}^{\frac{1}{2}}V^{\mathrm{ AT,V}}\big{(}V^{\mathrm{AT,h}}\big{)}^{\frac{1}{2}}\enspace,\]
satisfies the relation,
\[V^{\mathrm{AT}}\propto\bigg{(}\exp\big{[}\ J^{*}\big{(}\sum_{k\in\mathbf{I}}p_{k-\frac{1}{2}}^{\mathrm{AT}}q_{k-\frac{1}{2}}^{\mathrm{AT}}+\big{(}p_{k-\frac{1}{2}}^{\mathrm{AT}}\big{)}^{\prime}\big{(}q_{k-\frac{1}{2}}^{\mathrm{AT}}\big{)}^{\prime}\big{)}\big{]}+\exp\big{[}\ U^{*}\big{(}\sum_{k\in\mathbf{I}}p_{k-\frac{1}{2}}^{\mathrm{AT}}\big{(}p_{k-\frac{1}{2}}^{\mathrm{AT}}\big{)}^{\prime}q_{k-\frac{1}{2}}^{\mathrm{AT}}\big{(}q_{k-\frac{1}{2}}^{\mathrm{AT}}\big{)}^{\prime}\big{)}\big{]}\bigg{)}\times\cdots\] \[\bigg{(}\exp\big{[}\ J\big{(}\sum_{k\in\mathbf{I}}p_{k}^{\mathrm{AT}}q_{k}^{\mathrm{AT}}+\big{(}p_{k}^{\mathrm{AT}}\big{)}^{\prime}\big{(}q_{k}^{\mathrm{AT}}\big{)}^{\prime}\big{)}\big{]}+\exp\big{[}\ U\big{(}\sum_{k\in\mathbf{I}}p_{k}^{\mathrm{AT}}\big{(}p_{k}^{\mathrm{AT}}\big{)}^{\prime}q_{k}^{\mathrm{AT}}\big{(}q_{k}^{\mathrm{AT}}\big{)}^{\prime}\big{)}\big{]}\bigg{)}\ \.\]
From the other expectation value,
\[\mathscr{E}^{\prime}\equiv\langle e_{(+)}|\,V^{\mathrm{AT},N-y}\psi_{x}^{\mathrm{AT-F}}V^{\mathrm{AT},y-y^{\prime}}\psi_{x^{\prime}}^{\mathrm{AT-F}}V^{\mathrm{AT},y^{\prime}}\,|e_{(+)}\rangle\ \,\]
observe,
\[\mathscr{E}^{\prime}\propto i\sum_{\gamma\in\mathcal{C}_{a}^{\uparrow}(z)\,\cup\,\mathcal{C}_{a}^{\downarrow}(z)}\big{(}\alpha^{\mathrm{AT}}\big{)}^{L(\gamma_{i})}\big{(}-1\big{)}^{\#\{\gamma_{i}\cap\mathcal{T}_{y}^{\mathrm{AT}}\}-\#\{\gamma_{i}\cap\mathcal{T}_{y^{\prime}}^{\mathrm{AT}}\}}\exp\big{(}-i\sigma_{m}\theta_{\gamma}(r,\vec{r})\big{)}s_{m}(r)\ \.\]
Furthermore,
\[i\sum_{\gamma\in\mathcal{C}_{a}^{\uparrow}(z)\,\cup\,\mathcal{C}_{a}^{\downarrow}(z)}\big{(}\alpha^{\mathrm{AT}}\big{)}^{L(\gamma_{i})}\big{(}-1\big{)}^{\#\{\gamma_{i}\cap\mathcal{T}_{y}^{\mathrm{AT}}\}-\#\{\gamma_{i}\cap\mathcal{T}_{y^{\prime}}^{\mathrm{AT}}\}}\exp\big{(}-i\sigma_{m}\theta_{\gamma}\big{(}r,\vec{r}\big{)}\big{)}s_{m}\big{(}r\big{)}\propto i\sum_{\gamma\in\mathcal{C}_{a}^{\mathrm{AT},\downarrow}(z)}\big{(}\alpha^{\mathrm{AT}}\big{)}^{L(\gamma_{i})}\exp\big{(}-i\sigma_{m}\theta_{\gamma}(r,\vec{r})\big{)}s_{m}(r)\mu_{m}(r)\ \.\]
Hence,
\[<\psi^{\rm AT-F}\big{(}z\big{)}\psi^{\rm AT-F}\big{(}a\big{)}>_{\mathscr{I}\times \mathscr{J}}=-f_{a}^{\rm AT,\uparrow}\big{(}z\big{)}+if_{a}^{\rm AT,\downarrow} \big{(}z\big{)}\ \,\]
for \(y>y^{\prime}\), while,
\[\mathscr{E}^{\prime}\propto<\psi^{\mathrm{AT-F}}\big{(}z\big{)}\psi^{\mathrm{AT-F}}\big{(}a\big{)}>_{\mathscr{I}\times\mathscr{J}}=-f_{a}^{\mathrm{AT},\uparrow}\big{(}z\big{)}-if_{a}^{\mathrm{AT},\downarrow}\big{(}z\big{)}\ \,\]
for \(y<y^{\prime}\), from which we conclude the argument.
**Theorem 5** (_Pfaffian from multi-point correlations of the Ashkin-Teller fermion operator_, **Theorem 23**, [13]). From the Ashkin-Teller fermion operator, \(\psi^{\mathrm{AT-F}}\),
\[<\prod_{\begin{subarray}{c}1\leq i\leq n\\ z_{i}\in\mathbf{C}\end{subarray}}\psi_{z_{i}}^{(\mathrm{AT-F}),(i)}>_{\mathscr{I}\times\mathscr{J}}^{+}=\bigcup_{1\leq i,j\leq n}{\rm Pf}\bigg{(}<\psi^{(\mathrm{AT-F}),(i)}\big{(}z_{i}\big{)}\psi^{(\mathrm{AT-F}),(j)}\big{(}z_{j}\big{)}>_{\mathscr{I}\times\mathscr{J}}\bigg{)}\ \,\]
for the expectation with respect to \(+\) boundary conditions supported over \(\mathscr{I}\times\mathscr{J}\),
\[<\cdot>_{\mathscr{I}\times\mathscr{J}}^{+}\ \.\]
_Proof of Theorem 5._ We appeal to the polarization result of _2.3.1_, **Lemma 10**, which holds for all \(\beta\) except countably many temperatures. Independently of whether \(J\equiv U\equiv\frac{1}{4}{\rm log}\big{(}3\big{)}\), or \(J,U<\frac{1}{4}{\rm log}\big{(}3\big{)}\), write,
\[\big{(}v_{\rm vac}^{\prime,(+),N}\big{)}^{*}\equiv\frac{e_{(+)}^{\prime,\rm T}V^{N}}{e_{(+)}^{\prime,\rm T}V^{N}e_{(+)}^{\prime}}\ \,\]
from **Lemma \(13^{*}\)**, as the image of some map \(\phi\) from the state space into the wedge product of Fock space representations. From the fact that individual terms of quantum states can be expressed in terms of inner and outer products,
\[\langle e_{(+)}|\,V^{\rm AT,N}\,|e_{(+)}\rangle\longleftrightarrow\big{(}e_ {(+)}\big{)}^{\bf T}V^{\rm AT,N}e_{(+)}\ \,\]
write,
\[<\prod_{\begin{subarray}{c}1\leq i\leq n\\ {}_{i\in{\bf C}}\end{subarray}}\psi_{z_{i}}^{(\rm AT-F),(i)}>_{\mathscr{I} \times\mathscr{J}}^{+}\equiv{\bf E}_{\mathscr{I}\times\mathscr{J}}^{+}\bigg{[} \prod_{\begin{subarray}{c}1\leq i\leq n\\ {}_{i\in{\bf C}}\end{subarray}}\psi_{z_{i}}^{(\rm AT-F),(i)}\bigg{]}\] \[\equiv\bigg{(}{\bf E}_{\mathscr{I}\times\mathscr{J}}^{+}\bigg{)}^ {\prime}\bigg{[}\big{(}v_{\rm vac}\big{)}^{*}\bigg{(}\prod_{\begin{subarray}{c }1\leq i\leq n\\ {}_{i\in{\bf C}}\end{subarray}}\psi_{z_{i}}^{(\rm AT-F),(i)}\bigg{)}v_{\rm vac }\bigg{]}\] \[\stackrel{{\text{\bf(Lemma \ref{lem:1})}}}{{\equiv}}<\big{(}v_{\rm vac }\big{)}^{*}\bigg{(}\prod_{\begin{subarray}{c}1\leq i\leq n\\ {}_{i\in{\bf C}}\end{subarray}}\psi_{z_{i}}^{(\rm AT-F),(i)}\bigg{)}v_{\rm vac }>\ \,\]
where,
\[\frac{{\bf E}_{\mathscr{I}\times\mathscr{J}}^{*}\big{[}\ \cdot\ \big{]}}{\Big{(}{\bf E}_{\mathscr{I}\times\mathscr{J}}^{*}\Big{)}^{\prime}\big{[}\ \cdot\ \big{]}}\propto\sum_{i\sim j}\bigg{[}J\Big{(}\tau(i)\tau(j)+\tau^{\prime}(i)\tau^{\prime}(j)-\big{(}v_{\rm vac}\big{)}^{*}\tau\big{(}i\big{)}\tau\big{(}j\big{)}v_{\rm vac}-\big{(}v_{\rm vac}\big{)}^{*}\tau^{\prime}(i)\tau^{\prime}(j)v_{\rm vac}\Big{)}+\cdots\] \[U\Big{(}\tau(i)\tau(j)\tau^{\prime}(i)\tau^{\prime}(j)-\big{(}v_{\rm vac}\big{)}^{*}\tau(i)\tau\big{(}j\big{)}\tau^{\prime}(i)\tau^{\prime}\big{(}j\big{)}v_{\rm vac}\Big{)}\bigg{]}\ \,\]
from which we conclude the argument.
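Since **Theorem 5** reduces multipoint correlations to a Pfaffian, a minimal numerical sketch may help fix ideas; it is generic linear algebra, not the source's construction. For a \(4\times 4\) antisymmetric matrix, the Pfaffian is the signed sum over perfect pairings, which is exactly the Wick combinatorics above, and it satisfies \(\mathrm{Pf}\big{(}A\big{)}^{2}=\mathrm{det}\big{(}A\big{)}\):

```python
# Minimal sketch (generic, illustrative): the 4x4 Pfaffian as the signed sum
# over perfect pairings, checked against Pf(A)^2 = det(A).
import numpy as np

def pfaffian_4x4(A):
    # Pf(A) = a01*a23 - a02*a13 + a03*a12 for 4x4 antisymmetric A.
    return A[0, 1]*A[2, 3] - A[0, 2]*A[1, 3] + A[0, 3]*A[1, 2]

rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4))
A = B - B.T                       # antisymmetric "two-point function" matrix
pf = pfaffian_4x4(A)
print(np.isclose(pf**2, np.linalg.det(A)))   # True
```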
We also introduce the multipoint observable, and its connections with the Pfaffian (see _4.5_ of [13] for a more extensive overview). For the Ashkin-Teller model, in comparison to the parafermionic observable that is defined over a single point, over multiple points, the observable takes the form,

\[F^{\mathrm{MP-AT}}\bigg{(}\epsilon,\big{(}r_{1},\cdots\big{)},\big{(}z_{1},\cdots\big{)},\big{(}\sigma_{1},\cdots\big{)}\bigg{)}=\sum_{\gamma_{i}\in\mathcal{C}_{\{r_{1},\cdots,r_{n}\}}}\big{(}\alpha^{\mathrm{AT}}\big{)}^{L(\gamma_{i})}\prod_{\gamma_{i}\in\mathcal{C}_{\{r_{1},\cdots,r_{n}\}}}\exp\big{(}-i\sigma_{i}\theta_{\gamma_{i}}\big{(}r_{i},\vec{r}_{i}\big{)}\big{)}s_{m}\big{(}r_{i}\big{)}\mu_{m}\big{(}r_{i}\big{)}\ \,\]

which is a function of the winding number of each path, with respective parameters \(\sigma_{1},\cdots,\sigma_{n}\), from the collection of paths,
\[\Gamma\equiv\bigcup_{\gamma_{i}\in\mathcal{C}_{\{r_{1},\cdots,r_{n}\}}}\bigl{\{} \text{paths }\gamma_{i}\mid 0\leq\theta_{\gamma_{i}}\bigl{(}r_{i},\vec{r_{i}}\bigr{)} \leq 2\pi\bigr{\}}\ \,\]
for:
* (1): Countably many points \(r_{1},\cdots,r_{n}\) in the complex plane,
* (2): the position vector, \(\vec{r_{1}},\cdots,\vec{r_{n}}\), of each \(\gamma_{i}\),
* (3): the spin of each \(\gamma_{i}\), \(s_{m}\bigl{(}r_{i}\bigr{)}\),
* (4): the disorder variable of each \(\gamma_{i}\), \(\mu_{m}\bigl{(}r_{i}\bigr{)}\),
* (5): the set of paths \(\mathcal{C}_{\{r_{1},\cdots,r_{n}\}}\), where each \(\gamma_{i}\) begins at point \(r_{i}\),
* (6): a parameter \(\epsilon\), whose form will be provided in **Theorem 6** below.
**Theorem 6** (_expectation value of multipoint Ashkin-Teller fermion operators is equal to the Ashkin-Teller multipoint parafermionic observable_). Denote, for \(\psi^{(\mathrm{MP-AT})}\equiv\psi\),
\[\psi^{\dagger}\bigl{(}z\bigr{)}=\frac{1}{2}\bigl{(}\psi^{\mathrm{AT-F}}\bigl{(} z\bigr{)}-\psi^{\mathrm{AT-F}}\bigl{(}z\bigr{)}\bigr{)}\ \,\]
and,
\[\psi^{\dagger}\bigl{(}z\bigr{)}=\frac{i}{2}\bigl{(}\psi^{\mathrm{AT-F}}\bigl{(} z\bigr{)}+\psi^{\mathrm{AT-F}}\bigl{(}z\bigr{)}\bigr{)}\ \,\]
for \(z\in\mathbf{C}\), from which one has,
\[<\prod_{\begin{subarray}{c}1\leq i\leq 2m\\ 1\leq j\leq 2m-1\\ \text{countably many }z_{(j)}\in\mathbf{C}\end{subarray}}\psi_{(j)}^{\dagger,(2m-i)}\big{(}z_{(j)}\big{)}>_{\mathscr{I}\times\mathscr{J}}^{+}=\epsilon\,F^{\mathrm{MP-AT}}\bigg{(}\epsilon,\big{(}r_{1},\cdots\big{)},\big{(}z_{1},\cdots\big{)},\big{(}\sigma_{1},\cdots\big{)}\bigg{)}\ \,\]
for real \(\eta_{j}\). To demonstrate that the desired identity holds between the expectation under \(+\) boundary conditions and the multipoint Ashkin-Teller parafermionic observable, write,
\[<\prod_{\begin{subarray}{c}1\leq i\leq 2m\\ 1\leq j\leq 2m-1\\ \text{countably many }z_{(j)}\in\mathbf{C}\end{subarray}}\psi_{(j)}^{\dagger,(2m-i)}\big{(}z_{(j)}\big{)}>^{+}_{\mathscr{I}\times\mathscr{J}}=\mathbf{E}^{+}_{\mathscr{I}\times\mathscr{J}}\bigg{[}\prod_{\begin{subarray}{c}1\leq i\leq 2m\\ 1\leq j\leq 2m-1\\ \text{countably many }z_{(j)}\in\mathbf{C}\end{subarray}}\psi_{(j)}^{\dagger,(2m-i)}\big{(}z_{(j)}\big{)}\bigg{]}=\sum_{\begin{subarray}{c}+\text{ boundary conditions}\\ \mathscr{I}\times\mathscr{J}\subseteq\mathbf{Z}^{2}\\ \omega\in\mathcal{S}\end{subarray}}\bigg{(}\sum_{\begin{subarray}{c}1\leq i\leq 2m\\ 1\leq j\leq 2m-1\\ \text{countably many }z_{(j)}\in\mathbf{C}\end{subarray}}\prod_{i,j}\psi_{(j)}^{\dagger,(2m-i)}\big{(}z_{(j)}\big{)}\bigg{)}\ \,\]
from which terms can further be rearranged, by separating terms in the product over \(i\) and \(j\) for which \(\psi^{\uparrow}\) appears, or for which \(\psi^{\downarrow}\) appears,
\[\sum_{\begin{subarray}{c}+\text{ boundary conditions}\\ \mathscr{I}\times\mathscr{J}\subseteq\mathbf{Z}^{2}\\ \omega\in\mathcal{S}\end{subarray}}\bigg{(}\sum_{\begin{subarray}{c}1\leq i\leq 2m\\ 1\leq j\leq 2m-1\\ \text{countably many }z_{(j)}\in\mathbf{C}\end{subarray}}\prod_{i\in\mathcal{V}^{\uparrow},j}\psi_{(j)}^{\uparrow,(2m-i)}\big{(}z_{(j)}\big{)}\prod_{i\in\mathcal{V}^{\downarrow},j}\psi_{(j)}^{\downarrow,(2m-i)}\big{(}z_{(j)}\big{)}\bigg{)}\ \,\]
which can then be expressed with the Pfaffian,
\[\sum_{\begin{subarray}{c}+\text{ boundary conditions}\\ \mathscr{I}\times\mathscr{J}\subseteq\mathbf{Z}^{2}\\ \omega\in\mathcal{S}\end{subarray}}\bigg{(}\bigcup_{1\leq i\leq 2m}\text{Pf}\big{(}\prod_{i\in\mathcal{V}^{\uparrow},j}\psi_{(j)}^{\uparrow,(2m-i)}\big{(}z_{(j)}\big{)}\big{)}\bigcup_{1\leq j\leq 2m-1}\text{Pf}\big{(}\prod_{i\in\mathcal{V}^{\downarrow},j}\psi_{(j)}^{\downarrow,(2m-i)}\big{(}z_{(j)}\big{)}\big{)}\bigg{)}\ \,\]
which is equivalent to,
\[\sum_{\begin{subarray}{c}+\text{ boundary conditions}\\ \mathscr{I}\times\mathscr{J}\subseteq\mathbf{Z}^{2}\\ \omega\in\mathcal{S}\end{subarray}}\bigg{(}\bigcup_{\begin{subarray}{c}1\leq i\leq 2m\\ 1\leq j\leq 2m-1\\ \text{countably many }z_{(j)}\in\mathbf{C}\end{subarray}}\text{Pf}\bigg{(}\prod_{i\in\mathcal{V}^{\uparrow},j}\psi_{(j)}^{\uparrow,(2m-i)}\big{(}z_{(j)}\big{)}\prod_{i\in\mathcal{V}^{\downarrow},j}\psi_{(j)}^{\downarrow,(2m-i)}\big{(}z_{(j)}\big{)}\bigg{)}\bigg{)}\]
For low temperatures as \(\alpha^{\text{AT}}\longrightarrow 0\), one recovers components of the multipoint Ashkin-Teller parafermionic observable from the Pfaffian, in which,
\[\sum_{\gamma_{i}\in\mathcal{C}_{\{r_{1},\cdots,r_{n}\}}}\prod_{\gamma_{i}\in \mathcal{C}_{\{r_{1},\cdots,r_{n}\}}}\exp\big{(}-i\sigma_{i}\theta_{\gamma_{i} }\big{(}r_{i},\vec{r}_{i}\big{)}\big{)}s_{m}\big{(}r_{i}\big{)}\mu_{m}\big{(}r _{i}\big{)}\enspace,\]
from which we conclude the argument.
#### 2.5.2 Loop model
For the second model, as \(\beta^{\text{loop}}\longrightarrow+\infty\) from below, introduce the following two observables related to the loop parafermionic observable,
\[f_{a}^{\text{loop},\uparrow}\big{(}z\big{)}=\frac{1}{\mathscr{C}\big{(}\mathscr{Z}_{\Lambda}^{\text{loop},\xi}\big{)}\mathscr{Z}_{\Lambda}^{\text{low-temp loop},\xi}}\sum_{\begin{subarray}{c}\gamma:a\to z\\ \gamma\subset\Omega\\ \gamma\in\mathcal{C}_{a}^{\text{loop},\uparrow}(z)\end{subarray}}\exp\big{(}-i\sigma W_{\gamma}\big{(}a,z\big{)}\big{)}x^{l(\gamma)}\ \,\] \[f_{a}^{\text{loop},\downarrow}\big{(}z\big{)}=\frac{1}{\mathscr{C}\big{(}\mathscr{Z}_{\Lambda}^{\text{loop},\xi}\big{)}\mathscr{Z}_{\Lambda}^{\text{low-temp loop},\xi}}\sum_{\begin{subarray}{c}\gamma:a\to z\\ \gamma\subset\Omega\\ \gamma\in\mathcal{C}_{a}^{\text{loop},\downarrow}(z)\end{subarray}}\exp\big{(}-i\sigma W_{\gamma}\big{(}a,z\big{)}\big{)}x^{l(\gamma)}\ \,\]
for:
* (1): A horizontal edge \(a\in\mathscr{I}^{**}\times\mathscr{J}\),
* (2): a rectangle \(\mathscr{I}^{**}\times\mathscr{J}\),
* (3): the dual lattice \(\big{(}\mathbf{H}\big{)}^{*}=\mathbf{T}\),
* (4): a subset \(\bigg{(}\mathscr{I}^{**}\times\mathscr{J}^{**}\bigg{)}\subset\mathbf{T}\),
* (5): a set \(\mathcal{C}_{a}^{\text{loop},\uparrow}\big{(}z\big{)}\), consisting of paths such that for any face of the dual lattice excluding points \(a+\frac{i}{2}\), the parity, ie the number of edges of the path, adjacent to the face is even,
* (6): a set \(\mathcal{C}_{a}^{\text{loop},\downarrow}\big{(}z\big{)}\), consisting of paths such that for any face of the dual lattice excluding points \(a+\frac{i}{2}\), the parity, ie the number of edges of the path, adjacent to the face is odd,
* (7): a strictly positive constant \(\mathscr{C}\big{(}\mathscr{Z}_{\Lambda}^{\text{loop},\xi}\big{)}\mathscr{Z}_ {\Lambda}^{\text{low-temp loop},\xi}\propto\mathscr{Z}_{\Lambda}^{\text{loop},\xi}\).
**Definition 12** (_discrete residues and massive s-holomorphicity_, **Definition**_20_, [13])).: For a horizontal edge \(a\), there exists a complex valued, massive s-holomorphic function \(f\), such that, for \(z\neq a\), over the nonempty collection of faces of the lattice containing \(a+\frac{i}{2}\), the residue of \(f\) is given by \(\frac{i}{2\pi}\big{(}f^{\text{front}}\big{(}a\big{)}-f^{\text{back}}\big{(}a \big{)}\big{)}\). The function \(f\) can be extended so that it is massive s-holomorphic if \(f^{\text{front}}\big{(}\cdot\big{)}\) is extended to \(a+\frac{i}{2}\), while \(f^{\text{back}}\big{(}\cdot\big{)}\) is extended to \(a-\frac{i}{2}\).
**Theorem 4\({}^{*}\)** (_convergence of Fermion two-point correlations to loop parafermionic observables defined at the beginning of 2.5.2)_. Over \(\mathscr{I}\times\mathscr{J}\), one has,
\[<\psi^{\mathrm{loop-F}}\big{(}z\big{)}\psi^{\mathrm{loop-F}}\big{(}a\big{)}>_{\mathscr{I}\times\mathscr{J}}=-f_{a}^{\mathrm{loop},\uparrow}\big{(}z\big{)}+if_{a}^{\mathrm{loop},\downarrow}\big{(}z\big{)}\ \,\] \[<\psi^{\mathrm{loop-F}}\big{(}z\big{)}\psi^{\mathrm{loop-\bar{F}}}\big{(}a\big{)}>_{\mathscr{I}\times\mathscr{J}}=f_{a}^{\mathrm{loop},\uparrow}\big{(}z\big{)}+if_{a}^{\mathrm{loop},\downarrow}\big{(}z\big{)}\ \,\] \[<\psi^{\mathrm{loop-\bar{F}}}\big{(}z\big{)}\psi^{\mathrm{loop-\bar{F}}}\big{(}a\big{)}>_{\mathscr{I}\times\mathscr{J}}=-f_{a}^{\mathrm{loop},\uparrow}\big{(}z\big{)}-if_{a}^{\mathrm{loop},\downarrow}\big{(}z\big{)}\ \.\]
_Proof of Theorem 4\({}^{*}\)._ Perform similar computations for each of the identities of the loop fermion operators as given in the previous subsection for **Theorem 4**. \(\qed\)
**Theorem 5\({}^{*}\)** (_Pfaffian from multi-point correlations of the loop fermion operator_). From the loop fermion operator, \(\psi^{\text{loop}-\text{F}}\),
\[<\prod_{\begin{subarray}{c}1\leq i\leq n\\ z_{i}\in\mathbf{C}\\ \text{countably many }z_{i}\end{subarray}}\psi_{z_{i}}^{(\mathrm{loop-F}),(i)}>_{\mathscr{I}\times\mathscr{J}}^{+}=\bigcup_{1\leq i,j\leq n}\text{Pf}\bigg{(}<\psi^{(\mathrm{loop-F}),(i)}\big{(}z_{i}\big{)}\psi^{(\mathrm{loop-F}),(j)}\big{(}z_{j}\big{)}>_{\mathscr{I}\times\mathscr{J}}\bigg{)}\ \.\]
_Proof of Theorem 5\({}^{*}\)._ Repeat the computations provided for **Theorem 5** of the previous subsection, but for the loop fermion operator instead of for the Ashkin-Teller fermion operator. \(\qed\)
We also introduce the multipoint observable, and its connections with the Pfaffian (see _4.5_ of [13] for a more extensive overview). For the loop model, in comparison to the parafermionic observable that is defined over a single point, over multiple points, the observable takes the form,
\[F^{\mathrm{MP-loop}}\bigg{(}\epsilon,\big{(}a_{1},\cdots\big{)}, \big{(}z_{1},\cdots\big{)},x,\big{(}x_{1},\cdots\big{)},\big{(}\sigma_{1},\cdots \big{)}\bigg{)}=\sum_{\begin{subarray}{c}\gamma_{i}:a_{i}\to z_{i}\\ \gamma_{i}\in\Omega\\ \gamma_{i}\in\mathcal{C}_{\{a_{1},\cdots,a_{n}\}}\end{subarray}}\big{(}\alpha^ {\mathrm{loop}}\big{)}^{L(\gamma_{i})}\times\cdots\] \[\prod_{\begin{subarray}{c}\gamma_{i}\in\mathcal{C}_{\{a_{1}, \cdots,a_{n}\}}\end{subarray}}\exp\big{(}-i\sigma_{i}W_{\gamma_{i}}\big{(}a_{i },z_{i}\big{)}\big{)}x^{l(\gamma_{i})}\enspace,\]
which is a function of the winding number of each path, with respective parameters \(\sigma_{1},\cdots,\sigma_{n}\), from the collection of paths (a minimal numeric sketch of the winding computation follows the list below),
\[\Gamma\equiv\bigcup_{\gamma_{i}\in\mathcal{C}_{\{a_{1},\cdots,a_{n}\}}}\big{\{} \text{paths }\gamma_{i}\mid 0\leq W_{\gamma_{i}}\big{(}a_{i},z_{i}\big{)}\leq 2 \pi\big{\}}\enspace,\]
for:
* (1): Countably many points \(z_{1},\cdots,z_{n}\) in the complex plane,
* (2): the beginning points, \(a_{1},\cdots,a_{n}\), of each \(\gamma_{i}\),
* (3): the ending points \(x_{1},\cdots,x_{n}\) of each \(\gamma_{i}\),
* (4): the number of loops, \(l(\gamma_{i})\), of each \(\gamma_{i}\), entering through the weight \(x^{l(\gamma_{i})}\),
* (5): an arbitrary point \(x\) in the complex plane,
* (6): The set of paths, \(\mathcal{C}_{\{a_{1},\cdots,a_{n}\}}\), where each \(\gamma_{i}\) begins at \(a_{i}\),
* (7): a parameter \(\epsilon\), whose form will be provided in **Theorem**\(6^{*}\) below.
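As referenced above, here is a minimal numeric sketch of a winding computation of the kind entering the weights \(\exp\big{(}-i\sigma W_{\gamma}\big{(}a,z\big{)}\big{)}\); the path and the turning-angle accumulation are illustrative assumptions, not the source's construction:

```python
# Minimal sketch (illustrative only): accumulate the winding of a
# piecewise-linear lattice path as the sum of signed turning angles
# between consecutive steps.
import cmath, math

def winding(path):
    """Total signed turning angle, in radians, along the path."""
    total = 0.0
    for p, q, r in zip(path, path[1:], path[2:]):
        total += cmath.phase((r - q) / (q - p))   # signed turn at q
    return total

# A path that completes one full counterclockwise circuit of a unit square:
square_loop = [0, 1, 1 + 1j, 1j, 0, 1]
print(winding(square_loop) / math.pi)   # 2.0, i.e. total turning of 2*pi
```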
**Theorem**\(6^{*}\) (_expectation value of multipoint loop fermion operators is equal to the loop multipoint parafermionic observable_). Denote, for \(\psi^{\mathrm{(loop-F)}}\equiv\psi\),
\[\psi^{\uparrow}\big{(}z\big{)}=\frac{1}{2}\big{(}\psi^{\mathrm{loop-F}}\big{(}z\big{)}-\psi^{\mathrm{loop-\bar{F}}}\big{(}z\big{)}\big{)}\ \,\]
and,
\[\psi^{\downarrow}\big{(}z\big{)}=\frac{i}{2}\big{(}\psi^{\mathrm{loop-F}}\big{(}z\big{)}+\psi^{\mathrm{loop-\bar{F}}}\big{(}z\big{)}\big{)}\ \,\]
for \(z\in\mathbf{C}\), from which one has,
\[<\prod_{\begin{subarray}{c}1\leq i\leq 2m\\ 1\leq j\leq 2m-1\\ \text{countably many }z_{(j)}\in\mathbf{C}\end{subarray}}\psi_{(j)}^{\dagger,(2m-i)}\big{(}z_{(j)}\big{)}>_{\mathscr{I}\times\mathscr{J}}^{+}=\epsilon\,F^{\mathrm{MP-loop}}\bigg{(}\epsilon,\big{(}a_{1},\cdots\big{)},\big{(}z_{1},\cdots\big{)},x,\big{(}x_{1},\cdots\big{)},\big{(}\sigma_{1},\cdots\big{)}\bigg{)}\ \.\]
#### 2.6.1 Ashkin-Teller model
The RPS operator satisfies Riemann boundary conditions which were introduced in _2.1_ with **Definition 3**.
**Lemma 15** (_Riemann boundary conditions for RPS operators_, **Lemma 27**, [13]).: Let \(\Omega\) be some finite volume over \(\mathbf{Z}^{2}\), with edge set \(\mathscr{E}\). If there exists an s-holomorphic function, \(h:\mathscr{E}\longrightarrow\mathbf{C}\), which satisfies Riemann boundary conditions on the boundary of the edge set, then \(h\equiv 0\).
Proof of Lemma 15.: Refer to the proof of **Lemma 27** in [13].
**Lemma 16** (_s-holomorphic extension of h_, **Lemma 28**, [13]).: For a boundary edge in \(\Omega\), \(u\), there exists another boundary edge in \(\mathscr{E}\), \(v\), such that \(u+v\) has an s-holomorphic extension \(h:\mathscr{E}\longrightarrow\mathbf{C}\), which satisfies Riemann boundary conditions on \(\partial\Omega\backslash v\).
Proof of Lemma 16.: Refer to the proof of **Lemma 28** in [13].
With each **Lemma** above, we define the RPS operator below.
**Definition 13** (_Ashkin-Teller RPS operator_).: The map,
\[\big{(}U_{\Omega}^{\mathbf{b}}\big{)}^{\mathrm{AT}}:\mathcal{R}_{ \Omega}^{\mathbf{b}} \longrightarrow\mathcal{I}_{\Omega}^{\mathbf{b}}\] \[u \mapsto v\ \,\]
constitutes an RPS operator, for \(\mathbf{b}\in\partial\Omega\).
**Lemma 17** (_extending the s-holomorphic function to the convolution kernel_, **Lemma 30**, [13]).: The convolution kernel is given by the summation,
\[v\big{(}x\big{)}=\underset{y\in\mathbf{b}}{\sum}u\big{(}y\big{)}f_{\Omega} \big{(}y,x\big{)}\ \,\]
which can be extended to an s-holomorphic function \(h\), with,
\[h\big{(}x\big{)}=\underset{y\in\mathbf{b}}{\sum}u\big{(}y\big{)}f_{\Omega} \big{(}y,z\big{)}\ \,\]
for \(x\in\mathscr{E}\), and \(f_{\Omega}\) belonging to the space of functions,
\[f^{\mathrm{AT}}=f:\mathbf{b}\longrightarrow\mathbf{C}\ \.\]
Proof of Lemma 17.: (\(\Leftarrow\)) Suppose that the expansion for \(h\big{(}x\big{)}\) from the s-holomorphic extension exists. To demonstrate that the expansion for \(v\big{(}x\big{)}\) holds, observe that, for edges which lie on the boundary set \(\partial\mathscr{E}\), the function \(u\big{(}x\big{)}f_{\Omega}\big{(}x,x\big{)}\), and, for \(y\in\partial\mathscr{E}\backslash\big{\{}x\big{\}}\), \(u\big{(}x\big{)}f_{\Omega}\big{(}y,x\big{)}\), each satisfy Riemann boundary conditions. Hence the existence of the s-holomorphic extension implies that the expansion for \(v\big{(}x\big{)}\) exists. (\(\Rightarrow\)) Suppose that the expansion for \(v\big{(}x\big{)}\) exists. By **Lemma 15**, the difference of any two s-holomorphic extensions vanishes, so the extension \(h\) is unique, from which we conclude the argument.
In the case of finite volumes that are boxes, either squares or rectangles, the RPS operator introduced in **Definition 13** can be expressed in terms of a s-holomorphic propagation operator which satisfies the following properties.
**Lemma 18** (_s-holomorphic propagator of the RPS operator_, **Lemma 31**, [13]).: For a rectangular domain over the square lattice,
\[\Omega=\mathbf{I}\times\big{\{}0,\cdots,N\big{\}}\ \,\]
with the bottom endpoint of \(\Omega\),
\[\mathbf{b}\times\left\{0\right\}\enspace.\]
For the Ashkin-Teller propagator,
\[P^{\mathrm{AT}}:\left(\mathbf{R}^{2}\right)^{\left|\mathbf{I}^{\star\star} \right|}\longrightarrow\left(\mathbf{R}^{2}\right)^{\left|\mathbf{I}^{\star \star}\right|}\enspace,\]
there exists a decomposition of \(N\) th power of the propagator matrix,
\[\big{(}P^{\mathrm{AT}}\big{)}^{N}\equiv\begin{bmatrix}\big{(}P^{\mathrm{AT}}\big{)}^{N}_{\mathscr{R}\mathscr{R}}&\big{(}P^{\mathrm{AT}}\big{)}^{N}_{\mathscr{R}\mathscr{S}}\\ \big{(}P^{\mathrm{AT}}\big{)}^{N}_{\mathscr{S}\mathscr{R}}&\big{(}P^{\mathrm{AT}}\big{)}^{N}_{\mathscr{S}\mathscr{S}}\end{bmatrix}\]
into four blocks, each of dimension \(\left|\mathbf{I}^{\star\star}\right|\times\left|\mathbf{I}^{\star\star}\right|\). The decomposition of the \(N\) th power of the Ashkin-Teller propagator above holds as a result of the decomposition,
\[\left(\mathbf{R}^{2}\right)^{\mathbf{I}^{\star\star}}\cong\left(\mathbf{R} \oplus i\mathbf{R}\right)^{\mathbf{I}^{\star\star}}\cong\left(\mathbf{R} \right)^{\mathbf{I}^{\star\star}}\oplus\left(i\mathbf{R}\right)^{\mathbf{I}^{ \star\star}}\enspace,\]
from which it follows that the convolution operator, from the s-holomorphic extension of **Lemma 17**, satisfies,
\[\big{(}U_{\Omega}^{\mathbf{b}}\big{)}^{\mathrm{AT}}=-\big{(}\big{(}P^{\mathrm{AT}}\big{)}^{N}_{\mathscr{S}\mathscr{S}}\big{)}^{-1}\big{(}P^{\mathrm{AT}}\big{)}^{N}_{\mathscr{S}\mathscr{R}}\ \.\]
Proof of Lemma 18.: From the definition of the Ashkin-Teller convolution operator, for \(u\in\mathcal{R}_{\Omega}^{\mathbf{b}}\) such that \(\mathrm{Im}\big{(}u\big{)}\equiv 0\),
\[\left(P^{\mathrm{AT}}\right)^{N}\begin{bmatrix}u\\ v\end{bmatrix}=\begin{bmatrix}w\\ 0\end{bmatrix}\]
for some \(w:\mathbf{I}\times\big{\{}N\big{\}}\longrightarrow\mathbf{R}\), with \(\mathrm{Im}\big{(}w\big{)}\equiv 0\). From the definition of \(\big{(}P^{\mathrm{AT}}\big{)}^{N}\), it follows that \(\big{(}P^{\mathrm{AT}}\big{)}^{N}_{\mathscr{S}\mathscr{R}}u+\big{(}P^{\mathrm{AT}}\big{)}^{N}_{\mathscr{S}\mathscr{S}}v=0\), while for the remaining term from the equality above involving \(w\), it follows that \(v=-\big{(}\big{(}P^{\mathrm{AT}}\big{)}^{N}_{\mathscr{S}\mathscr{S}}\big{)}^{-1}\big{(}P^{\mathrm{AT}}\big{)}^{N}_{\mathscr{S}\mathscr{R}}u\). We conclude the argument.
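A minimal numerical sketch of the block relation just derived follows; it uses generic linear algebra with a random stand-in for \(\big{(}P^{\mathrm{AT}}\big{)}^{N}\), not the model's actual propagator:

```python
# Minimal sketch (random stand-in for (P^AT)^N, purely illustrative):
# U = -(P_SS)^{-1} P_SR maps real boundary data u to the imaginary part v for
# which the image (P^N)[u; v] has vanishing imaginary block, which is exactly
# the defining property used in the proof of Lemma 18.
import numpy as np

rng = np.random.default_rng(1)
n = 4
PN = rng.standard_normal((2*n, 2*n))       # stand-in for the N-th power
P_SR, P_SS = PN[n:, :n], PN[n:, n:]        # lower blocks of the decomposition

U = -np.linalg.solve(P_SS, P_SR)           # -(P_SS)^{-1} P_SR
u = rng.standard_normal(n)                 # real boundary data
v = U @ u
image = PN @ np.concatenate([u, v])
print(np.allclose(image[n:], 0.0))         # True: imaginary block vanishes
```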
#### 2.6.2 Loop model
The RPS operator satisfies Riemann boundary conditions which were introduced in _2.1_ with **Definition 3**.
**Lemma \(15^{*}\)** (_Riemann boundary conditions for RPS operators_). Let \(\Omega\) be some finite volume over \(\mathbf{Z}^{2}\), with edge set \(\mathcal{E}\). If there exists an s-holomorphic function, \(h:\mathcal{E}\longrightarrow\mathbf{C}\), which satisfies Riemann boundary conditions on the boundary of the edge set, then \(h\equiv 0\).
Proof of Lemma \(15^{*}\).: Refer to the proof of **Lemma 27** in [13].
**Lemma \(16^{*}\)** (_s-holomorphic extension of \(h\)_). For a boundary edge in \(\Omega\), \(u\), there exists another boundary edge in \(\mathcal{E}\), \(v\), such that \(u+v\) has an s-holomorphic extension \(h:\mathcal{E}\longrightarrow\mathbf{C}\), which satisfies Riemann boundary conditions on \(\partial\Omega\backslash v\).
Proof of Lemma \(16^{*}\).: Refer to the proof of **Lemma 28** in [13].
With each **Lemma** above, we define the RPS operator below.
**Definition \(13^{*}\)** (_loop RPS operator_). The map,
\[\left(U_{\Omega}^{\mathbf{b}}\right)^{\mathrm{loop}}:\mathcal{R}_{ \Omega}^{\mathbf{b}} \longrightarrow\mathcal{I}_{\Omega}^{\mathbf{b}}\] \[u\mapsto v\enspace,\]
constitutes an RPS operator, for \(\mathbf{b}\in\partial\Omega\).
**Lemma 17\({}^{*}\)** (_extending the s-holomorphic function to the convolution kernel_). The convolution kernel is given by the summation,
\[v\big{(}x\big{)}=\underset{y\in\mathbf{b}}{\sum}u\big{(}y\big{)}f_{\Omega} \big{(}y,x\big{)}\ \,\]
which can be extended to an s-holomorphic function \(h\), with,
\[h\big{(}x\big{)}=\underset{y\in\mathbf{b}}{\sum}u\big{(}y\big{)}f_{\Omega} \big{(}y,z\big{)}\ \,\]
for \(x\in\mathscr{E}\), and \(f_{\Omega}\) belonging to the space of functions,
\[f^{\text{loop}}=f:\mathbf{b}\longrightarrow\mathbf{C}\ \.\]
_Proof of Lemma 17\({}^{*}\)_. The result above follows from identical observations presented for each direction in **Lemma 17** in the previous subsection. \(\sqcap\)\(\sqcup\)
In the case of finite volumes that are boxes, either squares or rectangles, the RPS operator introduced in **Definition \(13^{*}\)** can be expressed in terms of an s-holomorphic propagation operator which satisfies the following properties.
**Lemma 18\({}^{*}\)** (_s-holomorphic propagator of the RPS operator_). For a rectangular domain over the hexagonal lattice,
\[\Omega=\mathbf{I}^{**}\times\big{\{}0,\cdots,N\big{\}}\ \,\]
with the bottom endpoint of \(\Omega\),
\[\mathbf{b}\times\{0\}\ \.\]
For the loop propagator,
\[P^{\text{loop}}:\big{(}\mathbf{R}^{2}\big{)}^{\mathbf{I}^{**}} \longrightarrow\big{(}\mathbf{R}^{2}\big{)}^{\mathbf{I}^{**}}\ \,\]
there exists a decomposition of \(N\) th power of the propagator matrix,
\[\big{(}P^{\text{loop}}\big{)}^{N}\equiv\begin{bmatrix}\big{(}P^{\text{loop}} \big{)}^{N}_{\mathscr{MF}}&\big{(}P^{\text{loop}}\big{)}^{N}_{\mathscr{MF}}\\ \big{(}P^{\text{loop}}\big{)}^{N}_{\mathscr{MF}}&\big{(}P^{\text{loop}}\big{)} ^{N}_{\mathscr{S}\mathscr{S}}\end{bmatrix}\]
into four blocks, each of dimension \(\big{|}\mathbf{I}^{**}\big{|}\times\big{|}\mathbf{I}^{**}\big{|}\). The decomposition of the \(N\) th power of the loop propagator above holds as a result of the decomposition,
\[\big{(}\mathbf{R}^{2}\big{)}^{\mathbf{I}^{**}}\cong\big{(}\mathbf{R}\oplus i\mathbf{R}\big{)}^{\mathbf{I}^{**}}\cong\big{(}\mathbf{R}\big{)}^{\mathbf{I}^{**}}\oplus\big{(}i\mathbf{R}\big{)}^{\mathbf{I}^{**}}\ \,\]
from which it follows that the convolution operator, from the s-holomorphic extension of **Lemma \(17^{*}\)**, satisfies,
\[\big{(}U_{\Omega}^{\mathbf{b}}\big{)}^{\text{loop}}=-\big{(}\big{(}P^{\text{ loop}}\big{)}^{N}_{\mathscr{S}\mathscr{S}}\big{)}^{-1}\big{(}P^{\text{loop}} \big{)}^{N}_{\mathscr{MF}}\ \.\]
_Proof of Lemma 18\({}^{*}\)_. Execute the same computations provided for **Lemma 18** in the previous subsection, for \(\big{(}P^{\text{loop}}\big{)}^{N}\) instead of for \(\big{(}P^{\text{AT}}\big{)}^{N}\). \(\sqcap\)\(\sqcup\)
## Staggered, and odd, eight-vertex models
In this final section, we describe connections between the parafermionic observables of the Ashkin-Teller and staggered eight-vertex models, which are shown to be equivalent in [15]. By virtue of this correspondence between parafermionic observables, we list results which are expected to apply, not only for the parafermionic observable of the staggered eight-vertex model, but also for the parafermionic observable of the odd eight-vertex model.
There exists a mapping between the eight-vertex model and the Ising model, through a coupling with the two-colored Ashkin-Teller model, with the following correspondence between partition functions, [15],
\[Z\propto\sum_{\sigma\in\{\pm 1\}}\bigl{[}\prod_{j,k\in\mathbf{N}}\exp\bigl{(}K ^{+}\bigl{(}\sigma_{j,k}\sigma_{j+1,k+1}\bigr{)}+K^{-}\bigl{(}\sigma_{j,k+1} \sigma_{j+1,k}\bigr{)}+\lambda\sigma_{j,k}\bigl{(}\sigma_{j+1,k+1}\sigma_{j,k+1 }\sigma_{j+1,k}\bigr{)}\bigr{)}\bigr{]}\ \.\]
At the same critical point along the line of possible critical points for the Ashkin-Teller model for which massive, and massless, s-holomorphicity were given (**Definition 5** and **Definition 6**), we list the results below which would apply to the staggered eight-vertex parafermionic observable. From the similarity which is shared between the parafermionic observable of the Ashkin-Teller model and the parafermionic observable of the staggered eight-vertex model, propagation mechanisms akin to those provided in **Lemma 3**, and in **Lemma 4**, are satisfied for equal couplings of the two Potts models to \(\frac{1}{4}\mathrm{log}\big{(}3\big{)}\).
### Staggered eight-vertex parafermionic observable
Below, we list results emulating those obtained in the previous section for the Ashkin-Teller model.
**Proposition S8V 1** (_generator relations for the staggered eight-vertex model, from generator relations for the Ising model_, **Proposition 8**, [13]). The components of the staggered eight-vertex transfer matrix satisfy the relations,
\[V^{\mathrm{S8V,}h}\equiv\exp\bigl{[}\ J\bigl{(}\sum_{k\in \mathbf{I}}p_{k}^{\mathrm{S8V}}q_{k}^{\mathrm{S8V}}+\bigl{(}p_{k}^{\mathrm{S8V }}\bigr{)}^{\prime}\bigl{(}q_{k}^{\mathrm{S8V}}\bigr{)}^{\prime}\bigr{)} \bigr{]}+\exp\bigl{[}\ U\bigl{(}\sum_{k\in\mathbf{I}}p_{k}^{\mathrm{S8V}}\bigl{(} p_{k}^{\mathrm{S8V}}\bigr{)}^{\prime}q_{k}^{\mathrm{S8V}}\bigl{(}q_{k}^{ \mathrm{S8V}}\bigr{)}^{\prime}\bigr{)}\bigr{]}\ \,\] \[V^{\mathrm{S8V,}V}\equiv\mathscr{P}\biggl{(}\exp\bigl{[}\ J^{*} \bigl{(}\sum_{k\in\mathbf{I}}p_{k-\frac{1}{2}}^{\mathrm{S8V}}q_{k-\frac{1}{2} }^{\mathrm{S8V}}+\bigl{(}p_{k-\frac{1}{2}}^{\mathrm{S8V}}\bigr{)}^{\prime} \bigl{(}q_{k-\frac{1}{2}}^{\mathrm{S8V}}\bigr{)}^{\prime}\bigr{)}\bigr{]}+ \exp\bigl{[}\ U^{*}\bigl{(}\sum_{k\in\mathbf{I}}p_{k-\frac{1}{2}}^{\mathrm{S8V }}\bigl{(}p_{k-\frac{1}{2}}^{\mathrm{S8V}}\bigr{)}^{\prime}q_{k-\frac{1}{2}}^ {\mathrm{S8V}}\bigl{(}q_{k-\frac{1}{2}}^{\mathrm{S8V}}\bigr{)}^{\prime}\bigr{)} \bigr{]}\biggr{)}\ \,\]
for generators \(p_{k}^{\mathrm{S8V}}\), \(\bigl{(}p_{k}^{\mathrm{S8V}}\bigr{)}^{\prime}\), \(q_{k}^{\mathrm{S8V}}\), and \(\bigl{(}q_{k}^{\mathrm{S8V}}\bigr{)}^{\prime}\), dual couplings \(\bigl{(}J^{*},U^{*}\bigr{)}\), obtained from \(\bigl{(}J,U\bigr{)}\) under the relation,
\[\frac{\exp\bigl{(}-2J+2U\bigr{)}-1}{\exp\bigl{(}-2J^{*}+2U^{*} \bigr{)}-1}=\exp\bigl{(}2U\bigr{)}\mathrm{sinh}\bigl{(}2J\bigr{)}=\frac{1}{ \exp(2U^{*})\mathrm{sinh}\bigl{(}2J^{*}\bigr{)}}\ \,\]
and the prefactor,
\[\mathscr{P}\equiv\exp\bigl{[}\bigl{(}\exp\bigl{[}U^{*},J,J^{*}\bigr{]}-U\bigr{)} \bigl{(}\ \sum_{k\in\mathbf{I}}p_{k}^{\mathrm{S8V}}\bigl{(}p_{k}^{\mathrm{S8V}}\bigr{)}^{ \prime}q_{k}^{\mathrm{S8V}}\bigl{(}q_{k}^{\mathrm{S8V}}\bigr{)}^{\prime} \bigr{)}\bigr{]}\ \.\]
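A quick numerical check (a sketch, assuming the self-dual choice \(\big{(}J^{*},U^{*}\big{)}=\big{(}J,U\big{)}\) at \(J\equiv U\equiv\frac{1}{4}\mathrm{log}\big{(}3\big{)}\), the coupling highlighted elsewhere in this section) confirms the middle identity of the duality relation above:

```python
# Minimal sketch (assumptions flagged in the lead-in): at J = U = (1/4) log 3
# with (J*, U*) = (J, U), check exp(2U) sinh(2J) = 1/(exp(2U*) sinh(2J*)).
import math

J = U = 0.25 * math.log(3.0)
lhs = math.exp(2*U) * math.sinh(2*J)
rhs = 1.0 / (math.exp(2*U) * math.sinh(2*J))
print(round(lhs, 12), round(rhs, 12))   # 1.0 1.0 -> the self-dual point
```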
**Lemma S8V 1** (_staggered eight-vertex transfer matrix conjugation_, **Lemma** in _3.2_, [13]). The action,
\[\bigl{(}V^{\mathrm{S8V,}h}\bigr{)}^{-\frac{1}{2}}\cdot q_{k}^{ \mathrm{S8V}}\cdot\bigl{(}V^{\mathrm{S8V,}h}\bigr{)}^{\frac{1}{2}} =isp_{k}^{\mathrm{S8V}}+cq_{k}^{\mathrm{S8V}}\ \,\]
under the composition operator, \(\cdot\), \(V^{\mathrm{S8V},h}\) takes the form above, while the action,
\[\big{(}V^{\mathrm{S8V},V}\big{)}^{-1}\cdot p_{k}^{\mathrm{S8V}}\cdot\big{(}V^{\mathrm{S8V},V}\big{)}=\frac{C}{S}p_{k}^{\mathrm{S8V}}+\frac{i}{S}q_{k+1}^{\mathrm{S8V}}\ \,\] (*) \[\big{(}V^{\mathrm{S8V},V}\big{)}^{-1}\cdot q_{k}^{\mathrm{S8V}}\cdot\big{(}V^{\mathrm{S8V},V}\big{)}=-\frac{i}{S}p_{k-1}^{\mathrm{S8V}}+\frac{C}{S}q_{k}^{\mathrm{S8V}}\ \,\] (**)
under the composition operator \(\cdot\), \(V^{\mathrm{S8V},V}\) takes the form above, where _(*)_ holds for \(k\neq k_{R}\), and where _(**)_ holds for \(k\neq k_{L}\). Additionally,
\[\big{(}V^{\mathrm{S8V},V}\big{)}^{-1}\cdot p_{k}^{\mathrm{S8V}}\cdot V^{\mathrm{S8V},V}=p_{k}^{\mathrm{S8V}}\ \,\] (***) \[\big{(}V^{\mathrm{S8V},V}\big{)}^{-1}\cdot q_{k}^{\mathrm{S8V}}\cdot V^{\mathrm{S8V},V}=q_{k}^{\mathrm{S8V}}\ \,\] (****)
where _(***)_ holds for \(k\equiv k_{R}\), and where _(****)_ holds for \(k\equiv k_{L}\).
**Lemma S8V 2** (_commutation rule_, **Lemma 9**, [13]). For two other maps, the identities,
\[T_{V}\cdot R=R\cdot T_{V}\enspace,\] \[T_{V}\cdot J=J\cdot T_{V}\enspace,\]
for the rotation hold, where the maps \(R\) and \(J\) satisfy,
\[R\big{(}p_{k}^{\text{S8V}}\big{)}=iq_{a+b-k}^{\text{S8V}}\enspace,\] \[R\big{(}q_{k}^{\text{S8V}}\big{)}=-ip_{a+b-k}^{\text{S8V}}\enspace,\] \[J\big{(}p_{k}^{\text{S8V}}\big{)}=ip_{k}^{\text{S8V}}\enspace,\] \[J\big{(}q_{k}^{\text{S8V}}\big{)}=-iq_{k}^{\text{S8V}}\enspace,\]
for the basis elements \(\psi_{k}\) and \(\bar{\psi}_{k}\) spanning,
\[\mathcal{V}_{\psi}=\big{\{}k\bigm{|}\psi_{k}\in\mathcal{V}\big{\}}\enspace,\] \[\mathcal{V}_{\bar{\psi}}=\big{\{}k\bigm{|}\bar{\psi}_{k}\in\mathcal{V}\big{\}}\enspace,\]
which have the following images under \(R\) and \(J\),
\[R\big{(}\psi_{k}\big{)}=\bar{\psi}_{a+b-k}\enspace,\] \[R\big{(}\bar{\psi}_{k}\big{)}=\psi_{a+b-k}\enspace,\] \[J\big{(}\psi_{k}\big{)}=\bar{\psi}_{k}\enspace,\] \[J\big{(}\bar{\psi}_{k}\big{)}=\psi_{k}\enspace,\]
for the elements,
\[\psi_{k}\equiv\frac{i}{\sqrt{2}}\big{(}p_{k}^{\text{S8V}}+q_{k}^{\text{S8V}}\big{)}\enspace,\] \[\bar{\psi}_{k}\equiv\frac{1}{\sqrt{2}}\big{(}p_{k}^{\text{S8V}}-q_{k}^{\text{S8V}}\big{)}\enspace.\]
From properties of \(T_{V}\) for the staggered eight-vertex model exactly similar to those satisfied for the Ashkin-Teller model (**Theorem** in _2.2.1_), a version of Wick's formula for the staggered eight-vertex model can be formulated from the vacuum vector basis, with an identical expression obtained from the Pfaffian. The desired property stems from the fact that a suitable polarization for the staggered eight-vertex model exists from the generators, which are in correspondence with the Ashkin-Teller generators.
**Lemma S8V 3**. One has, for the staggered eight-vertex polarization, that,
\[\begin{array}{l}{\cal W}_{\rm cr}^{(+)}\equiv{\rm span}\big{\{}p_{k}^{\rm S8V}-iq_{k}^{\rm S8V}\bigm{|}k\in{\bf\Gamma}^{\ast}\bigm{\}}\ \,\\ {\cal W}_{\rm ann}^{(+)}\equiv{\rm span}\big{\{}p_{k}^{\rm S8V}+iq_{k}^{\rm S8V}\bigm{|}k\in{\bf\Gamma}^{\ast}\bigm{\}}\ \.\end{array}\]
Besides the expression for correlations obtained with the Pfaffian, by passing to a suitable polarization, the lattice fermion operator for the staggered eight-vertex model, as does that for the Ashkin-Teller model, satisfies the properties of massive, and massless, s-holomorphicity (coinciding with those provided in **Definition 5** and in **Definition 6**).
From results akin to **Theorem 6**, the multipoint parafermionic observable for the staggered eight-vertex model admits an equivalent identity.
The kernel of the convolution operator for the staggered eight-vertex model is characterized by the following result.
**Lemma S8V 4** (_extending the s-holomorphic function to the convolution kernel_). The convolution kernel is given by the summation,
\[v\big{(}x\big{)}=\sum_{y\in{\bf b}}u\big{(}y\big{)}f_{\Omega}\big{(}y,x\big{)}\ \,\]
which can be extended to an s-holomorphic function \(h\), with,
\[h\big{(}x\big{)}=\sum_{y\in{\bf b}}u\big{(}y\big{)}f_{\Omega}\big{(}y,x\big{)}\ \,\]
for \(x\in{\mathscr{E}}\), and \(f_{\Omega}\) belonging to the space of functions,
\[f^{\rm S8V}=f:{\bf b}\longrightarrow{\bf C}\ \.\]
The final result provides a block decomposition of the \(N\)th power of the staggered eight-vertex model propagator.
### Odd eight-vertex parafermionic observable, and abelian sandpile parafermionic observable
Below, instead of listing results emulating those obtained in the previous section for the Ashkin-Teller and staggered eight-vertex models, we refer to the previous section. To avoid being overly repetitive, we note that all results for the staggered eight-vertex model directly carry over to the odd eight-vertex model.
Furthermore, as described in [15], the fact that the abelian sandpile model coincides with the 0-color limit of the self dual Potts model directly implies that the operator formalism can also apply to that model, in which the observable is a boson. Again, for the sake of not being overly repetitive, the arguments for the operator formalism which are expected to carry over are excluded.
|
2307.02496 | Learning to reconstruct the bubble distribution with conductivity maps
using Invertible Neural Networks and Error Diffusion | Electrolysis is crucial for eco-friendly hydrogen production, but gas bubbles
generated during the process hinder reactions, reduce cell efficiency, and
increase energy consumption. Additionally, these gas bubbles cause changes in
the conductivity inside the cell, resulting in corresponding variations in the
induced magnetic field around the cell. Therefore, measuring these gas
bubble-induced magnetic field fluctuations using external magnetic sensors and
solving the inverse problem of Biot-Savart Law allows for estimating the
conductivity in the cell and, thus, bubble size and location. However,
determining high-resolution conductivity maps from only a few induced magnetic
field measurements is an ill-posed inverse problem. To overcome this, we
exploit Invertible Neural Networks (INNs) to reconstruct the conductivity
field. Our qualitative results and quantitative evaluation using random error
diffusion show that INN achieves far superior performance compared to Tikhonov
regularization. | Nishant Kumar, Lukas Krause, Thomas Wondrak, Sven Eckert, Kerstin Eckert, Stefan Gumhold | 2023-07-04T08:00:31Z | http://arxiv.org/abs/2307.02496v3 | Learning to reconstruct the bubble distribution with conductivity maps using Invertible Neural Networks and Error Diffusion
###### Abstract
Electrolysis is crucial for eco-friendly hydrogen production, but gas bubbles generated during the process hinder reactions, reduce cell efficiency, and increase energy consumption. Additionally, these gas bubbles cause changes in the conductivity inside the cell, resulting in corresponding variations in the induced magnetic field around the cell. Therefore, measuring these gas bubble-induced magnetic field fluctuations using external magnetic sensors and solving the inverse problem of Biot-Savart's Law allows for estimating the conductivity in the cell and, thus, bubble size and location. However, determining high-resolution conductivity maps from only a few induced magnetic field measurements is an ill-posed inverse problem. To overcome this, we exploit Invertible Neural Networks (INNs) to reconstruct the conductivity field. Our qualitative results and quantitative evaluation using random error diffusion show that INN achieves far superior performance compared to Tikhonov regularization.
Machine Learning, Invertible Neural Networks, Water Electrolysis, Biot-Savart Law Industrial Application: Clean Energy
## 1 Introduction
The increasing demand for clean energy has driven extensive research on electrolysis for hydrogen production, offering advantages like zero greenhouse gas emissions, energy storage capabilities, and a promising pathway towards reducing the carbon footprint (Capurso et al., 2022). However, the efficiency of electrolysis is limited by the formation of gas bubbles that impede the reaction and block electric currents, thereby decreasing the efficiency of the electrolysis cell for producing hydrogen (Angulo et al., 2020). Therefore, the detection of both bubble sizes and gas distribution, as well as the control of the bubble formation, is crucial for ensuring the safety and sustainability of hydrogen production via electrolysis.
Locating bubbles in electrolysis cells is difficult as the electrolyzer structures are typically non-transparent. However, an easy and non-invasive approach to address this problem is to use externally placed magnetic field sensors to measure bubble-induced fluctuations. Yet the availability of only low-resolution magnetic field measurements outside the cell, coupled with the high-resolution current distribution inside the cell necessary to provide bubble information, creates an ill-posed inverse problem for bubble detection. Additionally, bubble growth and detachment are governed by a complex interplay of various forces, such as buoyancy, hydrodynamic and electrostatic forces (Hossain et al., 2020), while measurement errors due to sensor noise add to the challenge of bubble detection.
Contactless Inductive Flow Tomography (CIFT), pioneered by (Stefani and Gerbeth, 1999), enables the reconstruction of flow fields in conducting fluids by utilizing Tikhonov regularization. The technique estimates induced electric and magnetic fields resulting from fluid motion under an applied magnetic field, with the measurements taken from magnetic sensors placed on the external walls of the fluid volume. However, in our current tomography configuration, we do not induce current through an external magnetic field, and the limited number of available sensors poses an added problem in achieving a satisfactory reconstruction of the high-dimensional current distribution.
Deep Neural Networks (DNNs) offer a data-driven approach to reconstruct the current distribution inside an electrolysis cell based on external magnetic field measurements, thereby capturing complex relationships between the two. A method called Network Tikhonov (NETT) (Li et al., 2020) combines DNNs with Tikhonov regularization, where a regularization parameter \(\alpha\) plays a crucial role in balancing
data fidelity and regularization terms. However, selecting an appropriate \(\alpha\) can be challenging, as it impacts the quality of outcomes and often relies on heuristic assumptions (Hanke, 1996).
We applied Invertible Neural Networks (INNs) to reconstruct the current distribution in 2D from one-dimensional magnetic field measurements, aiming to capture 200 times more features in the output compared to the input space. However, the INN struggles to generalize due to limited and low-resolution magnetic field data, resulting in poor reconstruction or significant overfitting. The lack of information hampers the performance despite adding latent variables to match dimensionality. We also explored Fourier analysis solutions, as suggested by (Roth et al., 1989), but as the authors pointed out, it proved insufficient due to the high sensor distance from the current distribution and noise in sensor readings.
To address the limitation of reconstructing high-resolution current distribution with limited magnetic sensors, we explored an alternative approach based on lower-resolution binary conductivity maps. These discrete maps represent non-conducting void fractions as zeros, indicating the presence of bubbles. A cluster of zeros can indicate either the existence of large bubbles or a cluster of small bubbles, enabling us to estimate the bubble distribution and their sizes. We define the conductivity map as \(x\in\mathbb{R}^{N}\) and the magnetic field measurements as \(y\in\mathbb{R}^{M}\) where \(N>M\) such that the transformation \(x\to y\) incurs information loss. Let us formulate the additional latent variables as \(z\in\mathbb{R}^{D}\) such that for the INN shown in Figure 1, the dimensionality of \([y,z]\) is equal to the dimensionality of \(x\), i.e. \(M+D=N\). Hence, in the inverse process, the objective is to deduce the high-dimensional conductivity \(x\) from a sparse set of magnetic field measurements \(y\). Note that \(x\) can be either the current distribution or the conductivity map, where the former was difficult to reconstruct based on the INNs.
## 2 Method
### Forward Problem - Biot-Savart Law
We define the conductivity map \(x\) as \(\sigma\) and the applied electric field in 3D space as \(E\). Since neither the liquid metal nor the non-conducting bubble void fractions inside the conductor in our simulation setup (Krause et al., 2023) are moving, Ohm's law at a position \(r\) results in \(j(r)\ =\ \sigma(r)E(r)\) where \(j(r)\) is the current density. With the known current density at pre-defined points, the induced magnetic flux density at a point \(r\) in 3D space is computed using the Biot-Savart law,
\[B(r)=\frac{\mu_{0}}{4\pi}\int_{V}\frac{j(r^{\prime})\times(r-r^{\prime})}{|r-r^{\prime}|^{3}}\,dV \tag{1}\]
\(\mu_{0}\) is the permeability of free space, \(V\) is the volume with \(dV\) as infinitesimal volume element and \(B(r)\ \in\mathbb{R}^{3}\) is the magnetic field at point \(r\). We term the measurable component of \(B(r)\) as \(y(r)\) while \(r^{\prime}\) is the integration variable and a location in \(V\). In (1) and our simulation, steady-state current flow is assumed. But, for time-varying current or magnetic field, the time derivative of the fields must be considered.
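A minimal numerical sketch of the forward map in (1), discretizing the volume integral as a Riemann sum over grid cells; the array names, the uniform grid, and the toy cross section are our own illustration, not the discretization of (Krause et al., 2023):

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability [T*m/A]

def biot_savart(r_sensor, r_cells, j_cells, d_v):
    """Eq. (1) with the volume integral replaced by a sum over cells.

    r_sensor: (3,) sensor position [m]; r_cells: (N, 3) cell centers;
    j_cells: (N, 3) current densities [A/m^2]; d_v: cell volume [m^3].
    Returns B(r_sensor) as a (3,) vector [T].
    """
    d = r_sensor - r_cells                      # (N, 3) vectors r - r'
    dist3 = np.linalg.norm(d, axis=1) ** 3      # |r - r'|^3 per cell
    integrand = np.cross(j_cells, d) / dist3[:, None]
    return MU0 / (4 * np.pi) * integrand.sum(axis=0) * d_v

# Toy check: a 10 cm current thread along z carrying ~100 A
# (1e6 A/m^2 over an assumed 1 cm^2 cross section); at a point on the
# +x axis the field circulates along +y, as expected from j x (r - r').
z = np.linspace(-0.05, 0.05, 50)
cells = np.stack([np.zeros_like(z), np.zeros_like(z), z], axis=1)
j = np.tile([0.0, 0.0, 1e6], (50, 1))
print(biot_savart(np.array([0.02, 0.0, 0.0]), cells, j, d_v=0.002 * 1e-4))
```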
### Invertible Neural Network (INN)
The overview of our INN model is provided in Figure 1 which closely follows (Ardizzone et al., 2019). The INN has a bijective mapping between \([y,z]\) and \(x\), leading to INN's invertibility, that learns to associate the conductivity \(x\) with unique pairs \([y,z]\) of magnetic field measurements \(y\) and latent
Figure 1: An overview of our INN architecture
variables \(z\). We incorporate the latent variables \(z\) to address the information loss in the forward process \(x\to y\). Assuming INN is an invertible function \(f\), the optimization via training explicitly calculates the inverse process, i.e. \(x=f(y,z;\theta)\) where \(\theta\) are the INN parameters. We define the density of the latent variable \(p(z)\) as the multi-variate standard Gaussian distribution. The desired posterior distribution \(p(x|y)\) can now be represented by the deterministic function \(f\) that pushes the known Gaussian prior distribution \(p(z)\) to \(x\)-space, conditioned on \(y\). Note that the forward mapping, i.e. \(x\to[y,z]\) and the inverse mapping, i.e. \([y,z]\to x\) are both differentiable and efficiently computable for posterior probabilities. Therefore, we aim to approximate the conditional probability \(p(x|y)\) by our tractable INN model \(f(y,z;\theta)\) which uses the training data \(\{(x_{i},y_{i})\}_{i=1}^{T}\) with \(T\) samples from the forward simulation.
### INN Architecture and Training Loss
Our INN model \(f\) is a series of \(k\) invertible mappings called coupling blocks with \(f\coloneqq\big{(}f_{1},\ldots,f_{j},\ldots f_{k}\big{)}\) and \(x=f(y,z;\theta)\). Our coupling blocks are learnable affine transformations, scaling \(s\) and translation \(t\), such that these functions need not be invertible and can be represented by any neural network (Dinh et al., 2017). The coupling block takes the input and splits it into two parts, which are transformed by the \(s\) and \(t\) networks alternately. The transformed parts are subsequently concatenated to produce the block's output. The architecture allows for easy recovery of the block's input from its output in the inverse direction, with minor architectural modifications ensuring invertibility. We follow (Kingma and Dhariwal, 2018) to perform a learned invertible \(1\times 1\) convolution after every coupling block to reverse the ordering of the features, thereby ensuring each feature undergoes the transformation. Even though our INN can be trained in both directions with losses \(L_{x},L_{y}\) and \(L_{z}\) for variables \(x,\,y,\,z\) respectively as in (Ardizzone et al., 2019), we are only interested in reconstructing the conductivity variable \(x\), i.e. the inverse process. Additionally, leaving out \(L_{y}\) and \(L_{z}\) allows us to avoid the manual optimization of the weights of multiple loss definitions for stable training. Given the batch size as \(W\), the loss \(L_{x}\) minimizes the reconstruction error between the groundtruth and predictions during training as follows:
\[L_{x}(\theta)\ =\ \big{(}\tfrac{1}{W}\sum_{i=1}^{W}[x_{i}\,-\,f(y_{i},z_{i};\theta)]^{2}\big{)}^{\frac{1}{2}}\quad\text{with objective}\quad\theta^{*}\,=\,\operatorname*{argmin}_{\theta}\,L_{x}(\theta) \tag{2}\]
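To make the coupling-block mechanics concrete, here is a minimal PyTorch sketch of a single affine coupling block together with the loss of Eq. (2); the layer sizes, subnetwork architecture, and variable names are our assumptions (the original work follows the Ardizzone et al., 2019 construction, with a learned \(1\times 1\) convolution between blocks that we omit here):

```python
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    """One affine coupling block (Dinh et al., 2017): the input is split
    into two halves, each transformed in turn by s,t subnetworks
    conditioned on the other half, so inversion needs no matrix inverse."""

    def __init__(self, dim, hidden=128):
        super().__init__()
        self.d = dim // 2
        def subnet(n_in, n_out):  # s and t may be arbitrary networks
            return nn.Sequential(nn.Linear(n_in, hidden), nn.ReLU(),
                                 nn.Linear(hidden, n_out))
        self.s1, self.t1 = subnet(self.d, dim - self.d), subnet(self.d, dim - self.d)
        self.s2, self.t2 = subnet(dim - self.d, self.d), subnet(dim - self.d, self.d)

    def forward(self, x):
        u1, u2 = x[:, :self.d], x[:, self.d:]
        v2 = u2 * torch.exp(self.s1(u1)) + self.t1(u1)  # transform 2nd half
        v1 = u1 * torch.exp(self.s2(v2)) + self.t2(v2)  # then the 1st half
        return torch.cat([v1, v2], dim=1)

    def inverse(self, v):
        v1, v2 = v[:, :self.d], v[:, self.d:]
        u1 = (v1 - self.t2(v2)) * torch.exp(-self.s2(v2))
        u2 = (v2 - self.t1(u1)) * torch.exp(-self.s1(u1))
        return torch.cat([u1, u2], dim=1)

def loss_x(x_true, x_pred):
    """Eq. (2) as an RMSE, averaging over batch and features."""
    return torch.sqrt(torch.mean((x_true - x_pred) ** 2))

block = AffineCoupling(dim=510)  # conductivity maps have 510 features
x = torch.randn(8, 510)
assert torch.allclose(block.inverse(block(x)), x, atol=1e-4)  # invertibility
```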
## 3 Experiments and Results
### Simulation setup and Data pre-processing
We calculate the dataset for training and testing our INN model by using the proof-of-concept (POC) simulation setup by (Krause et al., 2023) as shown in Figure 2 (left). The model simplifies the water electrolyzer to an electrical conductor with dispersed non-conducting components. Through Cu wires, with (length, width, height) of 50 x 0.5 x 0.5 cm, connected to Cu electrodes (10 x 7 x 0.5 cm), a current is applied to liquid GaInSn. The liquid metal is filled into a thin channel of (16 x 7 x 0.5 cm), and thereafter, the conductive electrolyte is simulated. With the use of GaInSn, reactions at the electrode surfaces and, thus, concentration-induced conductivity gradients are excluded.
The gas bubbles are simulated in the quasi-two-dimensional setup by using between 30 and 120 PMMA cylinders with diameters from 4 to 5 mm and negligibly low electrical conductivity placed in GaInSn. To
Figure 2: The POC model with Cu electrodes and wire, liquid GaInSn with PMMA cylinders and magnetic sensors (left). The binarized conductivity distribution of the GaInSn-containing region shown in the x-y Cartesian plane (right).
measure the magnetic flux density field, an array of 10 x 10 sensors is positioned at a distance (\(d_{sensor}\)) of 5 mm and 25 mm below the electric current carrying part of the setup that contains GaInSn. In our future experimental setup, only one spatial component of the magnetic flux density is measurable. Hence, the conductivity map, together with one spatial component of the magnetic flux density, acts as the groundtruth data. We simulated the electric conductivity distributions of 10,000 different geometrical configurations with a fixed applied current strength. After transforming \(\sigma\) of the initial variously dimensioned tetrahedral mesh to a hexahedral mesh with defined dimensions, the resulting conductivities were divided by \(\sigma_{\text{GaInSn}}=3.3\cdot 10^{6}\) S/m, giving \(\sigma_{\text{rel}}\) between 0 and 1. Subsequently, the relative conductivities were binarized by assigning values smaller than 0.25 to 0 and values equal to or greater than 0.25 to 1. A binary conductivity map of a sample is shown in Figure 2 (right). More details related to our simulation setup can be found in (Krause et al., 2023). From the originally 774 simulated conductivity data points, we selected only those directly above the sensor positions, resulting in 510 data points. Hence, for INN training, the data comprises magnetic field values with 100 sensor features and a conductivity map with 510 features for each geometry. To create training and validation sets, we shuffled the geometries and allocated 80% for training and 20% for validation. Standardization of the data was performed to ensure a common scale and distribution of conductivity and magnetic field features.
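A sketch of this preprocessing pipeline (binarization threshold and 80/20 split from the text; for brevity we standardize only the sensor features with training-set statistics and keep the binarized maps as-is, a simplification of the standardization step described above):

```python
import numpy as np

SIGMA_GAINSN = 3.3e6  # conductivity of liquid GaInSn [S/m]

def make_datasets(sigma, b_field, seed=0):
    """sigma: (G, 510) raw conductivities; b_field: (G, 100) one magnetic
    flux density component per sensor, for G simulated geometries."""
    rng = np.random.default_rng(seed)
    x = (sigma / SIGMA_GAINSN >= 0.25).astype(np.float32)  # binarize sigma_rel
    idx = rng.permutation(len(x))                          # shuffle geometries
    n_train = int(0.8 * len(x))                            # 80/20 split
    tr, va = idx[:n_train], idx[n_train:]
    mu, sd = b_field[tr].mean(0), b_field[tr].std(0) + 1e-12
    y_tr = (b_field[tr] - mu) / sd                         # standardize with
    y_va = (b_field[va] - mu) / sd                         # training statistics
    return (x[tr], y_tr), (x[va], y_va)
```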
### Comparison with classical approaches
We obtained qualitative results of our INN model and compared it with regularization approaches (Tikhonov and ElasticNet) in Figure 3. The evaluation was performed using data with a sensor distance (\(d_{sensor}\)) of 5 mm and 100 sensors. The regularization parameters for Tikhonov and ElasticNet were determined through cross-validation on the training set. Our INN model shows a good approximation of the groundtruth, providing meaningful insights into the location of the non-conducting PMMA cylinder-induced void fractions mimicking the bubbles. The results of Tikhonov and ElasticNet regularization were similar, indicating the minimal impact of the L\({}_{1}\) penalty in ElasticNet for improving the predictions.
We also compared the latency for fitting ElasticNet and Tikhonov models and training our INN with four coupling blocks on similar hardware. The ElasticNet took approximately 4 hours, the Tikhonov took 45 minutes, while our INN training took only 142 seconds on a single GPU. Note that the reported time for ElasticNet and Tikhonov models includes the regularization parameter tuning process. These timings present a significant speed advantage of our INN model compared to the other approaches.
### Ablation study on \(d_{sensor}\) and number of sensors
We performed an ablation study to investigate the effect of changing the distance of the sensors from the conducting plate and the number of sensors. Figure 4 shows the results obtained after training separate instances of our INN model. Interestingly, the INN can reconstruct the placement of PMMA
Figure 3: The results from different reconstruction models for data with \(d_{sensor}\) = 5 mm and 10 x 10 sensors.
cylinder-induced void fractions even in the simulation setup with only 50 sensors and a sensor distance of 25 mm. However, the pixel-level correlations with adjacent data points are slightly degraded. Since statistical models, including the INN, provide continuous-valued predictions, we quantitatively evaluate the performance of our INN-based approach against that of the classical approaches in Section 3.5.
### Validation Loss vs Coupling Blocks (k)
Multiple instances of the INN model were trained on various experimental settings with batch size 100, learning rate \(\alpha=1e-4\), and Adam Optimizer (\(\beta_{1}=0.8\) and \(\beta_{2}=0.9\)). Figure 5 (top row) displays validation loss curves for different numbers of coupling blocks (\(k\)) in the INN. Using only one coupling block leads to underfitting, while a higher number of blocks can cause overfitting. We stop training when the validation loss begins to increase. Notably, increasing \(k\) beyond three does not significantly reduce the validation loss, making it difficult to determine the best convergence. For the setup with 25 mm distance and 100 sensors, validation losses are higher compared to the setup with 5 mm distance and 100 sensors due to reduced information in magnetic field measurements with greater sensor distance. The setup with 25 mm distance and 50 sensors further degrades information. However, Figure 4 demonstrates the INN's ability to learn the PMMA cylinder distribution. Validation loss scores at the last epoch in Figure 5 reveal higher loss values for greater sensor distance and fewer sensors compared to the setup with 5 mm and 100 sensors, and the optimal \(k\) for coupling blocks to be 3 to 4 for each setup.
Figure 4: The results from our INN model after varying distance of sensors from the plate and number of sensors.
Figure 5: Top row shows the validation losses with varying INN coupling blocks. The centre image in the bottom row shows the log-likelihoods of the groundtruth with respect to the probability distribution of binary ensemble maps via error diffusion. The right image in the bottom row shows the averaged log-likelihoods over the entire validation set.
### Random Error Diffusion
As the visual comparison in Figure 3 is not conclusive in determining the best reconstruction approach, we developed an ensemble-based evaluation to convert continuous maps to discrete conductivity values. In principle, Floyd-Steinberg dithering (known as Error Diffusion) can be used, but it spreads quantization errors into neighboring pixels with pre-defined fractions, so the technique cannot reproduce the exact binary groundtruth. We, therefore, randomize the error fractions using a Dirichlet distribution and generate an ensemble of binary maps from each predicted continuous conductivity map. Note that the fractions sum up to 1, and for computational constraints, we generate an ensemble of 100 binary maps. Next, the probabilistic density of the binary ensembles is estimated for each groundtruth, and the likelihood of the groundtruth with respect to the estimated density is computed. Figure 5 (bottom center) displays log-likelihood scores as a kernel density plot for validation samples (setup: 5 mm, 100 sensors). The Tikhonov model shows greater deviation compared to ElasticNet, whereas our INN model exhibits the least deviation, as confirmed by the averaged log-likelihood scores in Figure 5 (bottom right). Hence, the INN provides higher likelihood scores for the groundtruth compared to other approaches.
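A minimal sketch of this randomized error diffusion; the classic Floyd-Steinberg neighbour set is kept, while the fixed fractions (7/16, 3/16, 5/16, 1/16) are replaced by flat-Dirichlet draws per pixel (function and variable names are ours):

```python
import numpy as np

def random_error_diffusion(cont_map, n_ensemble=100, seed=0):
    """Floyd-Steinberg-style dithering with the four diffusion fractions
    redrawn from a Dirichlet distribution at every pixel (positive and
    summing to 1), producing an ensemble of binary conductivity maps."""
    rng = np.random.default_rng(seed)
    h, w = cont_map.shape
    offsets = [(0, 1), (1, -1), (1, 0), (1, 1)]  # E, SW, S, SE neighbours
    out = np.empty((n_ensemble, h, w), dtype=np.uint8)
    for n in range(n_ensemble):
        work = cont_map.astype(np.float64).copy()
        for i in range(h):
            for j in range(w):
                old = work[i, j]
                new = 1.0 if old >= 0.5 else 0.0
                work[i, j] = new
                err = old - new
                fractions = rng.dirichlet(np.ones(4))  # random error split
                for (di, dj), f in zip(offsets, fractions):
                    ii, jj = i + di, j + dj
                    if 0 <= ii < h and 0 <= jj < w:
                        work[ii, jj] += err * f
        out[n] = work.astype(np.uint8)
    return out
```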
## 4 Conclusion
In this study, we proposed Invertible Neural Networks (INNs) to reconstruct conductivity maps from external magnetic field measurements in a model simulation setup mimicking features of a water electrolyzer. The results demonstrate the robustness of our INN model in learning the conductivity distribution, despite the ill-posed nature of the problem. Quantitative evaluation using randomized error diffusion confirms that INN provides accurate conductivity map approximations and significantly improves the likelihood that the predictions resemble the groundtruth. Our findings show that INNs can effectively reconstruct conductivity maps with a low number of sensors and at distances greater than 20 mm. Hence, INNs offer a promising approach for localizing and estimating non-conductive fractions in current conducting liquids, with potential for practical applications. Future research directions include investigating INN performance on higher-resolution conductivity maps and performing experiments with sensor measurements that contain noisy readings.
|
2303.07019 | Crossover from Boltzmann to Wigner thermal transport in thermoelectric
skutterudites | Skutterudites are crystals with a cage-like structure that can be augmented
with filler atoms ("rattlers"), usually leading to a reduction in thermal
conductivity that can be exploited for thermoelectric applications. Here, we
leverage the recently introduced Wigner formulation of thermal transport to
elucidate the microscopic physics underlying heat conduction in skutterudites,
showing that filler atoms can drive a crossover from the Boltzmann to the
Wigner regimes of thermal transport, i.e., from particle-like conduction to
wave-like tunnelling. At temperatures where the thermoelectric efficiency of
skutterudites is largest, wave-like tunneling can become comparable to
particle-like propagation. We define a Boltzmann deviation descriptor able to
differentiate the two regimes and relate the competition between the two
mechanisms to the materials' chemistry, providing a design strategy to select
rattlers and identify optimal compositions for thermoelectric applications. | Enrico Di Lucente, Michele Simoncelli, Nicola Marzari | 2023-03-13T11:32:42Z | http://arxiv.org/abs/2303.07019v3 | # Crossover from Boltzmann to Wigner thermal transport in thermoelectric skutterudites
###### Abstract
Skutterudites are crystals with a cage-like structure that can be augmented with filler atoms ("rattlers"), usually leading to a reduction in thermal conductivity that can be exploited for thermoelectric applications. Here, we leverage the recently introduced Wigner formulation of thermal transport to elucidate the microscopic physics underlying heat conduction in skutterudites, showing that filler atoms can drive a crossover from the Boltzmann to the Wigner regimes of thermal transport, _i.e._, from particle-like conduction to wave-like tunnelling. At temperatures where the thermoelectric efficiency of skutterudites is largest, wave-like tunneling can become comparable to particle-like propagation. We define a Boltzmann deviation descriptor able to differentiate the two regimes and relate the competition between the two mechanisms to the materials' chemistry, providing a design strategy to select rattlers and identify optimal compositions for thermoelectric applications.
Heat is a waste product of many and diverse energy-intensive technologies, from vehicle exhausts in transportation to nuclear and natural-gas power production, to large-scale manufacturing. Ongoing research is focused on finding strategies to convert such waste heat into electricity, and thermoelectric materials are among the most promising candidates [1] for this task, and for augmenting sustainable energy supplies in the near future. The thermoelectric figure of merit reaches a record value of 3.1 at 783 K in polycrystalline SnSe [2], even greater than the value obtained for single crystals [3] due to enhanced scattering. While many efforts have focused on designing materials with enhanced thermoelectric performance [4; 5], understanding how to maximise energy-conversion efficiency by decreasing thermal conductivity has been hindered by the lack of a microscopic theory capturing the mechanisms of heat conduction in poor thermal conductors. Fittingly, the recently developed Wigner formulation of thermal transport [6; 7] allows one to describe heat conduction in anharmonic crystals and in solids with ultralow or glass-like conductivity; this is exactly the case of thermoelectrics. The Wigner formulation offers a comprehensive approach to describe heat transport across different regimes, covering on the same footing harmonic "Boltzmann crystals", anharmonic "Wigner crystals" and amorphous solids and glasses. A Boltzmann crystal is characterized by phonon interband spacings much larger than the phonon linewidths; in Boltzmann crystals particle-like heat conduction dominates [6; 7; 8] and the Peierls-Boltzmann transport equation [9; 10] accurately describes the thermal conductivity [11; 12]. In amorphous solids and glasses wave-like tunneling dominates and the Wigner formulation recovers the Allen-Feldman formulation [13] accounting also for anharmonicity [14]. Last, Wigner crystals can be considered as the intermediate regime between the first two and are characterized by interband spacings which are comparable to phonon linewidths. The Peierls-Boltzmann equation fails to describe the wave-like contributions [15; 16; 17; 18] in materials with ultralow thermal conductivity, which are captured by the Wigner transport equation [6; 7].
Among others, skutterudites have been extensively studied for their possible applications in thermoelectrics [19; 20; 21] showing both high electrical and low thermal conductivities. The cage-like structure of skutterudites is a critical feature for thermoelectricity [21; 22]: the voids present in the structure can be occupied by loosely bound atoms ("fillers" or "rattlers") that can reduce thermal conductivity and enhance the thermoelectric figure of merit [23]. However, the interpretation of filler vibrations is still not fully understood [16; 37; 38]. The explanation of the rattling motion based on filler vibrations concentrated in specific energy ranges [37] was contradicted by a theoretical study suggesting the presence of a strong hybridization between the vibrations of the filler atoms and specific phonon bands of cage atoms [38]. This concept was then generalized to that of a "coherent interaction between fillers and host matrices" [16], which is more intricate than the simplistic "bare rattling". Here, we emphasize that even the latter explanation did not investigate the role of the filler in terms of phonon wave-like heat conduction, and so we aim to clarify the fundamental mechanisms behind the reduction of thermal conductivity - and thus the related thermoelectric performance - by applying the Wigner transport equation and its resulting expression for the total thermal conductivity:
\[\kappa_{\rm tot}^{\alpha\beta}=\kappa_{\rm P,SMA}^{\alpha\beta}+\frac{1}{(2\pi)^{3 }}\int_{\rm BZ}\sum_{s\neq s^{\prime}}\frac{\omega(\mathbf{q})_{s}+\omega(\mathbf{q})_{s^ {\prime}}}{4}\left[\frac{C(\mathbf{q})_{s}}{\omega(\mathbf{q})_{s}}+\frac{C(\mathbf{q})_{s^ {\prime}}}{\omega(\mathbf{q})_{s^{\prime}}}\right]\nu^{\alpha}(\mathbf{q})_{s,s^{ \prime}}\nu^{\beta}(\mathbf{q})_{s^{\prime},s}\frac{\frac{1}{2}[\Gamma(\mathbf{q})_{s }+\Gamma(\mathbf{q})_{s^{\prime}}]}{[\omega(\mathbf{q})_{s^{\prime}}-\omega(\mathbf{q})_{ s}]^{2}+\frac{1}{2}[\Gamma(\mathbf{q})_{s}+\Gamma(\mathbf{q})_{s^{\prime}}]^{2}}\,d^{3}q, \tag{1}\]
where \(\omega(\mathbf{q})_{s}\) is the angular frequency of a phonon having wavevector \(\mathbf{q}\) and mode \(s\), \(C(\mathbf{q})_{s}=\frac{\hbar^{2}\omega^{2}(\mathbf{q})_{s}}{k_{\rm B}T^{2}}\bar{N}(\mathbf{q})_{s}[\bar{N}(\mathbf{q})_{s}+1]\) is the specific heat of that phonon population, \(\bar{N}(\mathbf{q})_{s}\) is the equilibrium Bose-Einstein distribution at temperature \(T\), \(\nu^{\alpha}(\mathbf{q})_{s,s^{\prime}}\) and \(\nu^{\beta}(\mathbf{q})_{s,s^{\prime}}\) are the cartesian components of the velocity operator, which generalizes the concept of group velocity, and \(\Gamma(\mathbf{q})_{s}=\frac{1}{\tau(\mathbf{q})_{s}}\) is the phonon linewidth of a phonon with lifetime \(\tau(\mathbf{q})_{s}\). The symbol BZ in Eq. (1) represents an integral over the Brillouin zone. \(\kappa_{\rm P,SMA}^{\alpha\beta}\) in Eq. (1) is the Peierls-Boltzmann particle-like conductivity, which is driven by phonon-phonon scattering in the single-mode relaxation time approximation (SMA). The additional Wigner term in Eq. (1) is a positive-definite tensor (\(\kappa_{\rm C}^{\alpha\beta}\)) emerging from the phase "coherences" between pairs of phonon eigenstates; _i.e._, from the wave-tunneling between two nondegenerate bands (\(s\neq s^{\prime}\)) [39; 40]. In order to explore the relative strength of the particle-like and wave-like heat-conduction mechanisms, we study three different families of skutterudites, for which we compute the effective thermal conductivity \(\kappa_{\rm tot}^{\alpha\beta}\).
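To make the structure of the coherence term in Eq. (1) concrete, here is a schematic evaluation on a discrete \(\mathbf{q}\)-mesh; the array layouts, names, and the mesh-sum normalization \(\int_{\rm BZ}d^{3}q/(2\pi)^{3}\to\sum_{\mathbf{q}}/(N_{q}V_{\rm cell})\) are our own bookkeeping, not taken from the paper:

```python
import numpy as np

def kappa_coherence(omega, vel, gamma, C, v_cell):
    """Wave-like ("coherences") term of Eq. (1) on a discrete q-mesh.

    omega, gamma, C: (Nq, S) arrays of angular frequencies, linewidths
    Gamma(q)_s and specific heats C(q)_s; vel: (Nq, S, S, 3) complex
    velocity-operator matrix elements. Returns the 3x3 tensor kappa_C.
    """
    n_q, n_s = omega.shape
    k_c = np.zeros((3, 3))
    for q in range(n_q):
        for s in range(n_s):
            for sp in range(n_s):
                if s == sp:
                    continue  # only interband (s != s') pairs contribute
                w, wp = omega[q, s], omega[q, sp]
                g = 0.5 * (gamma[q, s] + gamma[q, sp])
                lorentz = g / ((wp - w) ** 2 + g ** 2)
                pref = 0.25 * (w + wp) * (C[q, s] / w + C[q, sp] / wp)
                vv = np.outer(vel[q, s, sp], vel[q, sp, s]).real
                k_c += pref * lorentz * vv
    return k_c / (n_q * v_cell)  # mesh-average replaces the BZ integral
```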
highly anharmonic crystals [16; 17; 49]. We highlight how Yb-filled materials show a stronger Wigner-crystal behavior, _i.e._\(\kappa_{\rm P}\sim\kappa_{\rm C}\), while for BaFe\({}_{4}\)Sb\({}_{12}\) the behaviour is more similar to that of a Boltzmann crystal, _i.e._\(\kappa_{\rm P}>\kappa_{\rm C}\). It is worth noting that some experimental measurements show an increase in conductivity at very high temperatures (\(\geq 700\) K); this could be understood as driven by radiative and electronic heat transfer [50; 51], unrelated to the increase in coherences' conductivity. At high temperatures, radiative effects on thermal transports are expected to be important because radiative thermal conductivity should increase as \(T^{3}\)[52], while in semiconductors the electronic contribution can be important when, as temperature rises, electrons are excited to the conduction band [51].
In order to elucidate the physics underlying the low thermal conductivity of efficient thermoelectric skutterudites, we make a comparison between phonon lifetimes and average phonon interband spacing \(\Delta\omega_{\rm avg}=\frac{\omega_{\rm max}}{3N_{\rm at}}\) (\(\omega_{\rm max}\) being the maximum phonon frequency and \(N_{\rm at}\) the number of atoms in the primitive cell) [7], to describe how much each phonon contributes to the wave-like vs. the particle-like conduction mechanisms. As given in Ref. [7] (see Appendix E of Ref. [7] for a detailed derivation), the ratio between the two is approximately equivalent to the ratio between the phonon linewidth and the average phonon interband spacing,
\[\frac{\bar{\kappa_{\rm C}}(\mathbf{q})_{s}}{\bar{\kappa_{\rm P}}(\mathbf{q})_{s}} \simeq\frac{\Gamma(\mathbf{q})_{s}}{\Delta\omega_{\rm avg}}=\frac{1}{\tau(\mathbf{q})_ {s}\Delta\omega_{\rm avg}}, \tag{2}\]
where \(\bar{\kappa_{\rm P}}(\mathbf{q})_{s}\) and \(\bar{\kappa_{\rm C}}(\mathbf{q})_{s}\) are the average trace of the thermal conductivity tensor for particle-like and wave-like heat conduction respectively. From this one can define the Wigner limit in time \(\tau^{\omega}=\frac{1}{\Delta\omega_{\rm avg}}\), that determines the non-sharp crossover from a regime of dominant particle-like conduction to one of dominant wave-like conduction.
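Eq. (2) suggests a cheap mode-by-mode classification; a minimal sketch assuming NumPy arrays (names ours):

```python
def wigner_crossover(omega, tau, n_atoms):
    """Mode-resolved version of Eq. (2).

    omega, tau: (Nq, S) arrays of angular frequencies and lifetimes;
    n_atoms: atoms in the primitive cell. Returns the Wigner limit in
    time tau_w = 1/d_omega_avg and the per-mode estimate of
    kappa_C/kappa_P; modes with ratio > 1 (tau < tau_w) conduct mainly
    wave-like, modes with ratio < 1 mainly particle-like.
    """
    d_omega_avg = omega.max() / (3 * n_atoms)  # average interband spacing
    tau_w = 1.0 / d_omega_avg
    ratio = 1.0 / (tau * d_omega_avg)
    return tau_w, ratio
```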
In Fig. 2 we show the distribution \(\tau(\mathbf{q})_{s}\) of the phonon lifetimes as a function of the energy \(\hbar\omega(\mathbf{q})_{s}\) for the unfilled (upper panels) and Yb-filled (lower panels) skutterudites at 800 K.
Figure 2: Distribution of phonon lifetimes \(\tau(\mathbf{q})_{s}\ =\ \Gamma(\mathbf{q})_{s}^{-1}\) as a function of energy \(\hbar\omega(\mathbf{q})_{s}\) for unfilled (upper panels) and Yb-filled (lower panels) skutterudites at 800 K. The area of each scatter point is proportional to the contribution to the total lattice thermal conductivity and colored according to the origin of the contribution: \(c=[\bar{\kappa_{\rm P}}(\mathbf{q})_{s}-\bar{\kappa_{\rm C}}(\mathbf{q})]/[\bar{ \kappa_{\rm P}}(\mathbf{q})_{s}+\bar{\kappa_{\rm C}}(\mathbf{q})]\), where particle-like is green (\(c=1\)), wave-like is blue (\(c=-1\)) and intermediate mechanisms have intermediate colors, with red corresponding to 50% of particle-like and 50% of wave-like contributions [7]. The Wigner limit in time (dashed-black line) corresponds to a phonon lifetime equal to the inverse of the average interband spacing (\(\tau^{\omega}\ =\ \Delta\omega_{\rm avg}^{-1}\)). The dashed-purple hyperbola shows the Ioffe-Regel limit in time [7; 42] (\(\tau^{\rm IR}=\omega^{-1}\)), below which phonons are no longer well-defined quasi-particles. The pie charts have an area proportional to the total lattice thermal conductivity, and the slices resolve the particle-like conductivity (green) and the wave-like conductivity (blue). The black symbols connected by black lines are the points whose coordinates are the average energies and lifetimes projected on atoms (see SI [41]). Open (closed) symbols refer to unfilled (filled) skutterudites. The projections on the filler, transition-metal and antimony atoms are given by crosses, triangles and squares, respectively. The phonon lifetimes distribution for the remaining filled skutterudites are given in the SI [41].
Phonons above the Wigner limit in time (_i.e._ with \(\tau(\mathbf{q})_{s}>\tau^{\omega}\)) contribute mainly to particle-like conductivity, while phonons below this limit (_i.e._ with \(\tau(\mathbf{q})_{s}<\tau^{\omega}\)) contribute mainly to the wave-like conductivity. We note that in all the compounds studied [41] the phonon lifetimes sit well above the Ioffe-Regel limit in time \(\tau^{\rm IR}=\omega^{-1}\)[7] (dashed-purple), confirming that phonons in these materials are well-defined quasiparticles and that the Wigner formulation is valid [7]. If this were not the case, then full spectral function approaches [53; 54] would be required. The clouds of phonon lifetimes in Fig. 2 remain distinctly above the Wigner limit for the unfilled compounds, while they are centered around the Wigner limit for the Yb-filled compounds. This allows one to identify the crossover from the Boltzmann-crystal behaviour of unfilled skutterudites to the Wigner-crystal behaviour of Yb-filled ones, where phonon coherences become significant. Moreover, Fig. 2 also shows the atom-resolved energies and lifetimes (see Eqs. 3 and 4 of SI [41]), indicating how, in general, different atoms contribute to different regions of the distribution. We see that Yb fillers drive the characteristic lifetimes towards the Wigner limit in time, while the characteristic energies of the Ir/Co/Fe atoms, and of Sb, are only slightly shifted higher. Given a certain phonon lifetime, the frequency associated with the filler is significantly lower than that of the other atom types in the structure, suggesting that indeed Yb behaves like a rattler [55]. These findings extend those discussed in Ref. [16], where it was suggested that fillers interact with the host matrix coherently, albeit without resolving the single-atom contributions in the energy-lifetime distribution. In fact, the present analysis shows how filling with Yb lowers all the lifetimes, not only those of the filler itself.
Then, we want to describe how each chemical species that composes the skutterudite structure influences the relative strengths of \(\kappa_{\rm P}\) and \(\kappa_{\rm C}\). Since the Boltzmann-crystal or Wigner-crystal behaviour is regulated by the competition between the phonon lifetimes and the Wigner limit in time [7] (see Eq. (2)), we introduce a Boltzmann deviation descriptor (\(B\)) defined as the inverse of the product between the skutterudite characteristic lifetime - _i.e._ the Matthiessen sum of the average lifetimes resolved on atom types (see SI [41] for details) - and the average phonon interband spacing [7]:
\[B=\frac{1}{\bar{\tau}\Delta\omega_{\rm avg}}=\begin{cases}\frac{1}{\tau^{\rm U}\Delta\omega_{\rm avg}},&\text{with }\tau^{\rm U}=\frac{\tau_{\rm M}\tau_{\rm Sb}}{\tau_{\rm M}+\tau_{\rm Sb}}\\ \frac{1}{\tau^{\rm F}\Delta\omega_{\rm avg}},&\text{with }\tau^{\rm F}=\frac{\tau_{\rm R}\tau_{\rm M}\tau_{\rm Sb}}{\tau_{\rm R}\tau_{\rm M}+\tau_{\rm R}\tau_{\rm Sb}+\tau_{\rm M}\tau_{\rm Sb}},\end{cases} \tag{3}\]
where \(\bar{\tau}\) is the average lifetime [41], F and U superscripts symbolize filled and unfilled skutterudites, R the rattler, and M the Ir/Co/Fe metal.
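A direct transcription of Eq. (3); the dictionary layout and atom labels are our own convenience:

```python
def boltzmann_deviation(tau_by_atom, omega_max, n_atoms):
    """Eq. (3): B = 1/(tau_bar * d_omega_avg), with tau_bar the
    Matthiessen combination of the atom-resolved average lifetimes,
    e.g. {"M": tau_M, "Sb": tau_Sb} for unfilled skutterudites
    (add the rattler entry "R" for filled ones)."""
    tau_bar = 1.0 / sum(1.0 / t for t in tau_by_atom.values())
    d_omega_avg = omega_max / (3 * n_atoms)
    return 1.0 / (tau_bar * d_omega_avg)
```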
In the upper panel of Fig. 3 we show the correlation between \(\frac{\kappa_{\rm P}}{\kappa_{\rm C}}\) and \(B\) at 800 K. Interestingly, it can be shown analytically (see Eq. 12 of the SI [41]) that, for a constant DOS, \(\frac{\kappa_{\rm P}}{\kappa_{\rm C}}=B^{-1}\), and the upper panel of Fig. 3 shows computational proof of this. It is worth noting that the correlation for, _e.g._, YbFe\({}_{4}\)Sb\({}_{12}\) at 300 K (burgundy cross in the upper panel of Fig. 3) shows how the relation \(\frac{\kappa_{\rm P}}{\kappa_{\rm C}}=B^{-1}\) remains valid at different temperatures.
Finally, we want to understand how skutterudites' chemistry comes into play in discriminating the ability of the filler atom to move inside the cage and how this is related to \(B\). The mean-square displacement (MSD) is often used in the literature to describe vibrating systems characterized by loosely bound atoms with long and elongated bonds [56; 41]. From this it follows that we can define a heuristic parameter \(\eta\) that captures how the filler's vibrations fill the space available in the skutterudite's cage:
\[\eta=\left|\frac{\rm MSD_{R}-MSD_{Sb}}{\rm MSD_{Sb}}\frac{d_{\rm Sb-R}-d_{\rm Sb -Sb}}{d_{\rm Sb-Sb}}\right|, \tag{4}\]
where \(d_{\rm Sb-R}\) is the distance between the filler atom and the atoms of the cage, \(d_{\rm Sb-Sb}\) is the bond length defining the Sb icosahedral cage where the filler is located and \(\rm MSD_{R}\) and \(\rm MSD_{Sb}\) are the mean-square displacement of the rattler and the Sb cage atoms, respectively. We note that, by definition, \(\eta=1\) for unfilled skutterudites.
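Eq. (4) involves only harmonic quantities and bond lengths, so it is trivially cheap to evaluate; a direct transcription (names ours):

```python
def eta(msd_r, msd_sb, d_sb_r, d_sb_sb):
    """Eq. (4): relative filler/cage mean-square displacements times the
    relative bond-length mismatch; by definition eta = 1 for unfilled
    skutterudites, where no filler MSD exists."""
    return abs((msd_r - msd_sb) / msd_sb * (d_sb_r - d_sb_sb) / d_sb_sb)
```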
In the lower panel of Fig. 3 we show the correlation between \(\frac{\kappa_{\rm P}}{\kappa_{\rm C}}\) and \(\eta\) at 800 K obtained for each of the filled skutterudites studied. This confirms the trend obtained
Figure 3: (Upper panel) Relation between the relative strength of particle-like and wave-like conduction, \(\frac{\kappa_{\rm P}}{\kappa_{\rm C}}\), at 800 K and \(B\) as given in Eq. (3); unfilled (filled) symbols represent unfilled (filled) skutterudites. Circles, triangles and squares represent Fe-, Co- and Ir-based skutterudites respectively. The black dashed line corresponds to the \(\frac{\kappa_{\rm P}}{\kappa_{\rm C}}=B^{-1}\) power law interpolating the data (see Eq. S12 of the SI [41]). The shaded regions show a smooth crossover from a dominant particle-like heat conduction in green, to a competing particle- and wave-like mechanism in red. The burgundy cross represents the correlation for YbFe\({}_{4}\)Sb\({}_{12}\) at 300 K. (Lower panel) Relation between the relative strength of particle-like and wave-like conduction (\(\frac{\kappa_{\rm P}}{\kappa_{\rm C}}\)) at 800 K and the descriptor \(\eta\) given in Eq. (4). The linear correlation between \(\eta\) and \(B\) shows how skutterudites' chemistry determines the degree of Wigner thermal transport in these materials. The color scale goes from red (Wigner-crystal behaviour) to green (Boltzmann-crystal behaviour).
for \(\frac{\kappa_{\rm P}}{\kappa_{\rm C}}\) in the upper panel of Fig. 3, and validates numerically the predictions from Eq. (2). Most importantly, \(\eta\) allows one to connect the physics behind Wigner heat transport to the chemistry of the skutterudites: we find that \(\eta\) correlates linearly with \(B\). In this sense, \(\eta\) can be used to provide a computationally cheap and close to quantitatively accurate estimate of the degree of Wigner-crystal behaviour of a material. This underscores how \(B\) is proportional to the rattling motion of the filler, quantified by \(\eta\), and is therefore able to distinguish optimal filler atoms for the reduction of \(\kappa_{\rm tot}\) from those for which the thermal behavior remains similar to that of unfilled skutterudites. _E.g._, the filled skutterudite BaFe\({}_{4}\)Sb\({}_{12}\) behaves very close to a Boltzmann crystal, since its \(\eta\) is significantly lower than that of the other filled skutterudites. We also observe that a rescaling of the filler's atomic weight translates into negligible changes of \(\kappa_{\rm tot}\), thus confirming that thermal transport is determined by bonding chemistry [41]. In the end, it is worth noting that only the MSDs (harmonic properties) and the crystal chemical bonds enter the definition of \(\eta\), and thus already at this level it is possible to give an approximate estimate of the degree of Wigner behavior of a thermoelectric material. Since the price of computing harmonic properties (MSDs) is orders of magnitude lower than that for computing anharmonic ones (phonon lifetimes), one could screen for cage-like thermoelectric materials with a strong wave-like contribution to conductivity through the parameter \(\eta\). In practice, one may exploit this parameter to perform high-throughput computational screening of Wigner crystals promising for thermoelectric applications.
In conclusion, we have used the Wigner formulation of thermal transport to investigate the microscopic physics underlying heat conduction in skutterudites, showing a crossover from Boltzmann to Wigner thermal transport when filling with, _e.g._, Yb atoms. Unfilled skutterudites behave as Boltzmann crystals, while filled ones change behavior from Boltzmann to Wigner depending on the filler atom and its bonding properties. We showed that, given the same host structure, the materials displaying the lowest conductivity are precisely those in which the \(\frac{\kappa_{\rm C}}{\kappa_{\rm P}}\) ratio between wave-like and particle-like contributions is larger. We also elucidated how the degree of Wigner heat conduction (quantified by the Boltzmann deviation descriptor, \(B\), derived from the microscopic harmonic and anharmonic quantities entering in the Wigner theory of thermal transport) is correlated to the relative motion between the filler atom and the cage; the latter being dependent on the chemical composition of the skutterudite structure (captured by the harmonic and computationally much cheaper parameter \(\eta\)). Thus, the rattling motion of the filler causing good thermoelectric performances can be seen as a direct manifestation of phonon coherences becoming as important as phonon populations. This study thereby paves the way for the identification of the most suitable chemical compositions to engineer new and efficient cage-like thermoelectric materials.
This research was supported by the Swiss National Science Foundation (SNSF), through Grant No. CR-SII5_189924 ("Hydronics" project).
|
2309.12329 | Mono/Multi-material Characterization Using Hyperspectral Images and
Multi-Block Non-Negative Matrix Factorization | Plastic sorting is a very essential step in waste management, especially due
to the presence of multilayer plastics. These monomaterial and multimaterial
plastics are widely employed to enhance the functional properties of packaging,
combining beneficial properties in thickness, mechanical strength, and heat
tolerance. However, materials containing multiple polymer species need to be
pretreated before they can be recycled as monomaterials and therefore should
not end up in monomaterial streams. Industry 4.0 has significantly improved
materials sorting of plastic packaging in speed and accuracy compared to manual
sorting, specifically through Near Infrared Hyperspectral Imaging (NIRHSI) that
provides an automated, fast, and accurate material characterization, without
sample preparation. Identification of multimaterials with HSI however requires
novel dedicated approaches for chemical pattern recognition. Non negative
Matrix Factorization, NMF, is widely used for the chemical resolution of
hyperspectral images. Chemically relevant model constraints may make it
specifically valuable to identify multilayer plastics through HSI.
Specifically, Multi Block Non Negative Matrix Factorization (MBNMF) with
correspondence among different chemical species constraint may be used to
evaluate the presence or absence of particular polymer species. To translate
the MBNMF model into an evidence based sorting decision, we extended the model
with an F test to distinguish between monomaterial and multimaterial objects.
The benefits of our new approach, MBNMF, were illustrated by the identification
of several plastic waste objects. | Mahdiyeh Ghaffari, Gerjen H. Tinnevelt, Marcel C. P. van Eijk, Stanislav Podchezertsev, Geert J. Postma, Jeroen J. Jansen | 2023-08-15T10:00:53Z | http://arxiv.org/abs/2309.12329v1 | Mono/Multi-material Characterization Using Hyperspectral Images and Multi-Block Non-Negative Matrix Factorization
###### Abstract
Plastic sorting is an essential step in waste management, especially due to the presence of multilayer plastics. These mono/multi-material plastics are widely employed to enhance the functional properties of packaging, combining beneficial properties in thickness, mechanical strength, and heat tolerance. However, materials containing multiple polymer species need to be pre-treated before they can be recycled as mono-materials and therefore should not end up in mono-material streams. Industry 4.0 has significantly improved materials sorting of plastic packaging in speed and accuracy compared to manual sorting, specifically through Near Infrared-Hyperspectral Imaging (NIR-HSI) that provides an automated, fast and accurate material characterization, without sample preparation. Identification of multi-materials with HSI, however, requires novel dedicated approaches for chemical pattern recognition. Non-negative Matrix Factorization, NMF, is widely used for the chemical resolution of hyperspectral images. Chemically relevant model constraints may make it specifically valuable to identify multilayer plastics through HSI. Specifically, Multi-Block Non-Negative Matrix Factorization (MB-NMF) with "correspondence among different chemical species" constraint may be used to evaluate the presence or absence of particular polymer species. To translate the MB-NMF model into an evidence-based sorting decision, we extended the model with an F-test to distinguish between mono-material and multi-material objects. The benefits of our new approach, MB-NMF, were illustrated by the identification of several plastic waste objects.
Non-Negative Matrix Factorization, Hyperspectral Images, Image unmixing, Plastic sorting, Multilayer plastic, Multi-block data sets.
## I Introduction
A major challenge in plastics recycling is the sorting of mixed plastic waste into streams that can be further processed into molecularly homogeneous recyclates [1-3]. Each piece of plastic waste needs to be chemically characterized in very little time. Specifically challenging are multi-material plastics, which make up a significant fraction of plastics waste. Multi-layered packaging material consists of at least one plastic layer, with one or more layers of other polymers or, e.g., paper, aluminum foils, or other materials [1, 4, 5]. Multi-layered plastic thereby contains plastics and other thin laminated material layers that require additional physical separation into constituent layers [6, 7]. Multilayer plastics are widely used in the food industry, as they may increase shelf-life [1-3]. Therefore, their specific recycling treatment requires accurate characterization [8-10] during sorting, which requires advanced, fast and dedicated data analyses that do not yet exist. Near InfraRed-HyperSpectral Imaging [11] is the most widely used analytical paradigm for fast industrial characterizations in automated plastics sorting. Chemometrics [12-14] plays a key role in the translation of spectroscopic data into chemically supported sorting decisions [9, 15-19]. Van Den Broek, W., et al., have used a remote sensing spectroscopic near-infrared (NIR) system for real-time plastic identification in mixed household waste [15]. First, the measurement setup was used for the acquisition of the spectroscopic image data, and then a non-linear transformation was performed by a neural network for the supervised classification of these measured images [15]. Other authors have developed further dedicated chemometrics pipelines for plastics sorting. None of these methods, however, enables characterization of multilayer materials. Mid-infrared hyperspectral images of the multilayer polymer film (MLPF) cross sections are acquired with a high-speed quantum cascade laser (QCL) based mid-infrared microscope and analyzed using different data analysis techniques. The investigated MLPF contains polypropylene and ethylene-vinyl alcohol co-polymer composite which is commonly used for food packaging. Principal component analysis (PCA) and multivariate curve resolution-alternating least squares (MCR-ALS) algorithms were used by Zimmerleiter, R., et al. to extract information about the object [17]. However, cross-section characterization of multilayer plastics is impractical and difficult to implement in online plastic sorting.
In another work, Bonifazi et al. investigated HSI-based sorting to perform an automatic separation of paper, cardboard, plastics, and multilayer packaging. Their Partial Least Squares-Discriminant Analysis model was able to characterize multilayer multi-material objects [22]. However, a discriminant analysis model requires a training dataset containing all types of multilayer plastics with different compositions. In the case of new polymers that are not included in the training database, the model will wrongly classify the object as belonging to a polymer class present in the training database. This highlights one of the cornerstones of the current paradigm
of automated plastics sorting: existing approaches treat it as a sorting task among a closed set of polymer species, even though contemporary plastics packaging developments continuously introduce new multilayer materials and even new polymer species. Non-negative matrix factorization [20, 21] of HSI data transcends this closed classification task, as it provides a free estimation of compound spectra that can be supervised by (partial) spectroscopic, chemical and materials knowledge of known and unknown polymer materials. The method is based on multiplicative updates with an inherent non-negativity property, introduced by Lee and Seung [21] for the factorization of facial images. A known issue in NMF is the non-uniqueness of the obtained models when the knowledge introduced as model constraints is incomplete, limiting the correspondence between model results and actual chemistry. To improve uniqueness and the resulting correspondence to actual chemistry, an additional constraint may be implemented in NMF as "correspondence among the species" [22-24]. This correspondence constraint requires multiple single data sets that share a common mode, i.e., multi-block data sets. The multi-block structure adds information about the minor compounds in some blocks on the one hand, and helps to break possible rank deficiency or highly collinear contribution maps of plastics on the other. The constraint imposes information about the presence/absence of components by defining the number and identity of compounds in each data block along the multi-block data. This additional information may strongly improve the accuracy of the results. Furthermore, this data structure helps to generate a proper initial estimate for further decompositions.
The Multi-Block Non-Negative Matrix Factorization under the "correspondence among the species" constraint that we propose here may be a highly suitable digital solution to translate HSI images of plastic waste into sorting decisions on multilayer plastics. The constraint is essential to tackle the high collinearity among the NIR spectra of several widely used polymer species. We also introduce an F-test on the MB-NMF model observations to obtain crisp, evidence-based sorting decisions for each plastic waste object into mono- or multi-material streams. MB-NMF can accurately characterize objects (partly) composed of unknown polymer materials, which extends materials classification to the observation of new polymers and other materials occurring in plastic waste streams. We demonstrate this new digital method with numerical simulations and several HSI data sets of real plastic waste objects with known polymer composition.
## II Materials and Methods
**A brief description of Non-Negative Matrix Factorization:** Non-negative matrix factorization (NMF) decomposes a matrix (which can be an unfolded HSI image), \(\mathbf{R}_{IJ\times K}\), into the product of smaller submatrices \(\mathbf{W}_{IJ\times n}\) and \(\mathbf{H}^{\mathrm{T}}_{n\times K}\):
\[\mathbf{R}_{IJ\times K}=\mathbf{W}_{IJ\times n}\,\mathbf{H}^{\mathrm{T}}_{n\times K}+\mathbf{E}_{IJ\times K} \tag{1}\]
where \(IJ\), \(K\), and \(n\) are the number of pixels, wavelengths, and chemical components (factors), respectively. To formalize this, NMF optimizes the following cost function:
\[\begin{array}{ll}\underset{\mathbf{W},\mathbf{H}}{\text{minimize}}&\|\mathbf{R}-\mathbf{W}\mathbf{H}\|_{\mathrm{F}}\\ \text{subject to}&\mathbf{W}\geq 0,\ \mathbf{H}\geq 0\end{array} \tag{2}\]
Although a vast number of algorithms has been introduced in the literature, the multiplicative update algorithm is the most popular due to its simplicity of implementation. The update rules are:
\[\textbf{H}\leftarrow\textbf{H}\ \frac{(\textbf{W}^{\text{T}}\textbf{R})}{( \textbf{W}^{\text{T}}\textbf{W}\textbf{H})} \tag{3}\]
\[\textbf{W}\leftarrow\textbf{W}\ \frac{(\textbf{R}\textbf{H}^{\text{T}})}{( \textbf{W}\textbf{H}\textbf{H}^{\text{T}})} \tag{4}\]
The multiplications and divisions in Equations 3 and 4 are element-wise. These update rules preserve the non-negativity of \(\mathbf{W}\) and \(\mathbf{H}^{\mathrm{T}}\), given that \(\mathbf{R}_{IJ\times K}\) is element-wise non-negative.
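To make the update rules concrete, the following minimal NumPy sketch implements plain (unconstrained) NMF with the multiplicative updates of Equations 3 and 4. The iteration count, random initialization, and the small epsilon guard against division by zero are our own illustrative choices, not part of the original description.

```python
import numpy as np

def nmf_multiplicative(R, n_components, n_iter=200, eps=1e-9, seed=0):
    """Plain NMF via the multiplicative updates of Eqs. (3)-(4).

    R : (IJ x K) non-negative matrix, e.g. an unfolded HSI image.
    Returns W (IJ x n) and H (n x K) such that R ~ W @ H.
    """
    rng = np.random.default_rng(seed)
    IJ, K = R.shape
    W = rng.random((IJ, n_components))
    H = rng.random((n_components, K))
    for _ in range(n_iter):
        # Element-wise multiply/divide, as noted for Eqs. (3)-(4).
        H *= (W.T @ R) / (W.T @ W @ H + eps)
        W *= (R @ H.T) / (W @ H @ H.T + eps)
    return W, H
```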
The resulting \(\mathbf{W}\) and \(\mathbf{H}^{\mathrm{T}}\) may or may not be unique, depending on the data structure. Imposing additional constraints helps to steer the factorization toward unique solutions. To achieve chemical/physical uniqueness, it is necessary to use information that is based on the chemical/physical structure of the data sets, such as trilinearity, zero-region/selectivity, and correlation constraints.
The"Correspondence among the species" constraint is an important constraint that may lead to accurate solutions in multi-block datasets that may be imposed upon **W** in MB-NMF because the presence/absence of the compounds is known in some of the data blocks in advance. In the absence of this constraint the solutions my lack the accuracy and lead to incorrect interpretations. This prior knowledge also makes that the number of components in the MB-NMF model is known in advance because to determine the number using Singular Value Decomposition is too time-consuming for HSI-based online sorting. In principle, the number of components is equal to the number of known data blocks augmented to the unknown data block plus one. The extra component will be used to deal with the objects with unknown composition.
**Multi-Block Non-Negative Matrix Factorization:** Although unfolded HSI images may be analyzed by various curve resolution methods to extract pure contribution maps of single polymer species, a lack of sufficient information/constraints may often lead to a lack of uniqueness between species and therefore to wrong interpretations and characterizations. Additional multi-block constraints (trilinearity, correspondence among the species, and the area correlation constraint) can remedy this. Fusing HSI data from _unknown_ mono/multi-material plastic with hyperspectral images of mono/multilayer plastic objects of _known_ polymer composition produces multi-block/higher-order data, which can be analyzed by MB-NMF as visualized in Fig. 1. Correspondence among the species may be used as a constraint on the _known_ blocks to impose the presence/absence of polymer species in every considered block. Prior to analyzing the data by MB-NMF, specific data pretreatments are necessary to focus on the linear spectroscopic response of the polymer-containing pixels. Fig. 1a summarizes all the necessary pretreatment steps.
Object detection is necessary to remove most of the profiles corresponding to the conveyor belt and other pixels that contain non-polymer noise. In this work we used the "activecontour" function in MATLAB (R2021a) to segment each image into foreground and background, leading to a binary output matrix that indicates object pixels, removing most of the conveyor belt. Then we removed saturated spectral profiles (SS), i.e., profiles with zero slope in the middle part of the spectra (1100-1500 nm). The next two steps transform raw reflectance data into positive absorbance data: the minus logarithm of the spectral profiles is calculated and then Asymmetric Least Squares is used to remove the baseline. A final Singular Value Decomposition (SVD) then removes most of the remaining non-polymer noise: the data are decomposed by SVD and reconstructed from a few Principal Components (PCs). These steps are summarized in Fig. 1a and visualized in Fig. 2.
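A minimal Python sketch of this pretreatment chain is given below. It omits the active-contour object detection (the saturation/slope mask stands in for pixel selection), and the AsLS smoothing parameters, slope tolerance, and number of retained principal components are arbitrary illustrative values.

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import spsolve

def asls_baseline(y, lam=1e5, p=0.01, n_iter=10):
    """Asymmetric least squares baseline estimate (Eilers-style)."""
    L = len(y)
    D = sparse.diags([1, -2, 1], [0, -1, -2], shape=(L, L - 2))
    w = np.ones(L)
    for _ in range(n_iter):
        Wd = sparse.spdiags(w, 0, L, L)
        z = spsolve(Wd + lam * (D @ D.T), w * y)
        w = p * (y > z) + (1 - p) * (y < z)   # asymmetric reweighting
    return z

def pretreat(cube, sat_band_slice, slope_tol=1e-4, n_pc=5):
    """Fig. 1a pipeline sketch: mask saturated pixels, -log, baseline, SVD."""
    X = cube.reshape(-1, cube.shape[-1]).astype(float)
    # Remove saturated profiles: near-zero slope in the 1100-1500 nm region.
    mid = X[:, sat_band_slice]
    keep = np.abs(np.diff(mid, axis=1)).mean(axis=1) > slope_tol
    X = X[keep]
    X = -np.log10(np.clip(X, 1e-6, None))            # reflectance -> absorbance
    X = np.array([x - asls_baseline(x) for x in X])  # baseline removal
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    X = U[:, :n_pc] @ np.diag(s[:n_pc]) @ Vt[:n_pc]  # denoise with few PCs
    return X, keep
```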
The data can be divided into two parts: known blocks and unknown blocks. The "_known_" blocks contain the data sets of objects whose composition, comprising one or more compounds, is known in advance. "_Unknown_" blocks come from unknown objects. This data structure makes it possible to apply the correspondence among the species constraint. To impose this constraint in MB-NMF, it suffices to set the corresponding values in the initial estimate (\(\mathbf{W}\)) to zero. The resulting \(\mathbf{W}\) contains characteristic information about the composition of the unknown object. The sum of squares of the elements of each column of \(\mathbf{W}\) (variance) in the last block (which comes from the unknown object) is a good representative of the signal contribution of each polymer in the unknown object. This signal contribution (variance) can be used to distinguish mono-material and multi-material objects based on a statistical F-test. The F-test is generally used in statistics to determine whether the standard deviations of two populations are equal. In our case, the variance of all polymers in the last data block is first calculated. Then each value is divided by the minimum value, which is a good representation of the noise population. Finally, a value of P < 0.05 is considered statistically significant. Thus, for mono-material objects only one polymer type will pass the F-test, whereas for multilayers more than one will.
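The variance-plus-F-test decision can be sketched as follows. The text specifies the recipe (per-polymer variance in the last block, division by the minimum as a noise reference, significance at P < 0.05) but not the degrees of freedom; the pixel-count-based degrees of freedom below are therefore an assumption.

```python
import numpy as np
from scipy.stats import f as f_dist

def sorting_decision(W_unknown, names, alpha=0.05):
    """Mono/multi-material decision from the unknown block's rows of W.

    W_unknown : (pixels x n_components) part of W for the unknown object.
    names     : component labels, e.g. ["PET", "PP", "PE", "X"].
    """
    n = W_unknown.shape[0]
    var = (W_unknown ** 2).sum(axis=0)   # signal contribution per polymer
    ratio = var / var.min()              # minimum variance ~ noise population
    # Assumed degrees of freedom: one variance estimate per pixel population.
    pvals = f_dist.sf(ratio, n - 1, n - 1)
    present = [nm for nm, pv in zip(names, pvals) if pv < alpha]
    kind = "multi-material" if len(present) > 1 else "mono-material"
    return kind, present
```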
However, one final step is necessary to visualize the composition and determine whether an object sorted as multilayer is a true multilayer object or a multi-component object (with connected parts) made of different types of polymers, like a water bottle (the cap could be PP or PE and the body PET). First, the selected columns of \(\mathbf{W}\) are converted to binary numbers and then reshaped to generate the contribution maps of the object. Based on these maps, it is possible to distinguish between connected objects (like a water bottle) and multi-materials. The last column/row of \(\mathbf{W}/\mathbf{H}\) is added to capture the concentration and spectral profile of any unmodeled compounds (brown in Fig. 1b).
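A small sketch of this binarize-and-reshape step is shown below; the threshold is an arbitrary illustrative cut-off, and the code assumes the unknown block's rows of \(\mathbf{W}\) cover every pixel of the image.

```python
import numpy as np

def contribution_maps(W_unknown, image_shape, threshold=None):
    """Binarize the unknown block's columns of W and reshape them into
    per-polymer contribution maps. Overlapping 1-regions across maps point
    to stacked layers (true multilayer); disjoint regions point to connected
    parts (e.g. a PET body with a PE cap)."""
    if threshold is None:
        threshold = 0.1 * W_unknown.max()   # arbitrary illustrative cut-off
    binary = (W_unknown > threshold).astype(np.uint8)
    return [binary[:, k].reshape(image_shape) for k in range(binary.shape[1])]
```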
Fig. 2 presents the effect of each data pretreatment step on the shape of the data for an example object. Fig. 2a is the recorded raw HSI of that object. Signal-saturated spectral profiles should be removed from the data before analysis; the corresponding profiles are horizontal and easy to remove by calculating the slope of all profiles in the middle of the spectral range. Fig. 2b presents the normalized data after object detection and removal of the saturated pixels (SS). The next step (c) shows the data after baseline correction using asymmetric least squares on the minus logarithm of the data set. Finally, the data set is reconstructed from a few principal components for denoising purposes.
Fig. 1: Data preprocessing and analysis are visualized in a and b, respectively. The data, \(\mathbf{R}\), can be decomposed into \(\mathbf{W}\) and \(\mathbf{H}\) using MB-NMF under the correspondence among the species constraint. The known blocks are blue (PP), green (PE), red (PET), mint green (PVC), avocado green (PS), magenta (belt), and cyan (PA). Black is used for the unknown data from an unknown object. Some of the unknown objects may partially contain a new type of polymer; all information regarding this new material is collected as component X, shown in brown.
## III Data Description
**Simulated data sets:** The objects used to make the known data blocks for the simulated cases are made of PET, PolyEthylene Terephthalate (first block), PP, PolyPropylene (second block), and PE, PolyEthylene (third block). These data blocks are used as "known blocks" with known chemical composition (Fig. 3a). The mean image of the unknown objects, which are multi-material plastics made of PP/PE (both known compounds) and PET/X/PP, is presented in Fig. 3b, in which compound X is a new polymer that is not included in the known data blocks.
**Experimental data sets:** Some monolayer objects with known composition were collected as standards from packaging waste. These objects are made of PP (Polypropylene), PE (Polyethylene), PET (Polyethylene terephthalate), PVC (Polyvinyl chloride), PS (Polystyrene), and PA (Polyamide), covering the most common constituents of multilayered plastics. For each object, an HSI was recorded in the NIR range (900-1700 nm). The recorded HSIs are used to generate the known blocks of the data for MB-NMF. The mean spectral images are presented in Fig. S1. Furthermore, some objects were used as unknown objects, although their characteristic composition was provided by the brand owners in advance. These objects are made of PP/PE, PP/PS, PE, PP, PS, PVC, PET, PP/PE, and PE/PET, respectively.
## IV Results and Discussion
**Simulation:** The simulated data sets were analyzed by MB-NMF under the "correspondence among the species" constraint [22, 23, 24]. The signal contribution (variance) of each component in the last block was calculated and is visualized in Fig. 4 for both data sets. The first unknown object is sorted as a multilayer/multi-material plastic made of PP and PE (Fig. 4a). In the second simulated data set, the object contains PET, X, and PP (Fig. 3b). Compound X, a new type of polymer, is not represented in the known data blocks. By analyzing this multi-block data with MB-NMF, the object is characterized as a multilayer/multi-material containing PET and PP. The presence of a new type of polymer that was not included in the model does not corrupt the identification of the known polymer types in the object; it is sorted out as a multilayer/multi-material containing PET/PP.

Figure 3: a and b are the mean images of the generated HSI data for the known and unknown objects, respectively.

Figure 2: The data set corresponding to the first unknown object, visualized after each pretreatment step: the raw data (a), after object detection and removal of some pixels due to signal saturation (b), after baseline correction of the minus logarithm of the data (c), and the data denoised by Singular Value Decomposition. The wavelengths are in nm.
**Experimental:** To analyze the experimental data sets, some additional data pretreatments were necessary compared to the simulated studies, as visualized in Fig. 1. The six different polymers and data from the empty conveyor belt (to impose the background) produce the known part of the multi-block data for MB-NMF. The selected Region Of Interest (ROI) is indicated in Fig. S1 by blue squares. The mean spectral profile of the ROIs, presented in Fig. S2, is used to generate the initial estimate for \(\mathbf{H}^{\mathrm{T}}\). The spectral profile of each polymer is shown in a specific color (Fig. S2).
The first object with unknown composition was used to visualize the effects of the applied data pretreatments. The mean spectral image is presented in Fig. S3. Object detection calculates a mask that removes the pixels corresponding to the belt.
The multi-block data set corresponding to the first unknown object was analyzed by MB-NMF for characterization purposes in two different scenarios. In the first scenario, the known data blocks contain information about all possible polymers, such as PP, PE, PET, etc. The variances calculated for all of the polymers using the resulting \(\mathbf{W}\) matrices are presented in Fig. 5. Based on this information, this object is a multilayer object containing PE and PP. As is clear in Fig. 5a, X does not carry any variance, which means there is no unknown compound in this object.
In the second scenario, the data set containing PE is removed from the known blocks of the data to make an incomplete _known_ block set. Based on the MB-NMF outputs, this object is then sorted as a multilayer, multi-material object made of PP and X. Thus the algorithm accurately predicts the _known_ part of the object that is included in the known data blocks and is not disturbed by unknown interferences.
The remaining HSIs of unknown objects were analyzed by MB-NMF after the data pretreatments explained above. The bar plots generated using the last rows of \(\mathbf{W}\) (corresponding to the last data block) are visualized in Fig. 6. The calculated variance (sum of squares of all elements) for each object is a way to characterize the mono/multi-material composition of all plastic objects. A statistical F-test with a 95 percent confidence interval was performed to evaluate the significance of the calculations. If more than one polymer type passes the F-test, the object is a multi-material plastic. Based on Fig. 6, objects "a", "g" and "h" should be sorted as multi-material and the rest as mono-material plastic. Finally, the recovered spectral profiles are visualized in Fig. 7.

Figure 4: The signal contribution is visualized versus the target polymers for the simulated data sets. Blue, red, yellow, and brown are used for PET, PP, PE, and X, respectively. In the bar plot, the intensity is an indication of the concentration of each polymer type. The numbers one to four are used for PET, PP, PE, and X, respectively.

Figure 5: The signal contribution (variance) of all polymers calculated for an unknown object using MB-NMF under two different scenarios. With the aid of a statistical F-test at the 0.05 significance level, these objects are sorted as multi-material objects made of PP/PE (a) and PP/X (b).
Based on the outputs of MB-NMF and the F-test, "a", "g" and "h" are sorted as multi-material objects containing two different types of polymers. However, it is not yet known whether these objects are made of two different connected polymers (like a water bottle whose body is PET and cap is PE/PP) or whether they are multilayer objects.
Thus, the contribution maps of \(\mathbf{W}\) (first converted to binary numbers) are visualized for "g" and "h" in Fig. 8. As is clear, object "g", visualized in Fig. 8a, is a multilayer object. However, object "h", made of PE and PET, is an object with two parts (body in PET and cap in PE). There are some pixels in which PET and PE lie on top of each other, shown in white. The black part in both maps is the background.
Figure 8: The resulting contribution maps of objects "g" and "h" using MB-NMF.
Figure 6: The calculated signal contribution of all polymers in the unknown objects using MB-NMF.
Figure 7: The resulting \(\mathbf{H}^{\mathrm{T}}\) matrix using MB-NMF for all of the unknown objects.
## V Conclusion
Multilayer plastic characterization is demonstrated through the analysis of hyperspectral images using Non-negative Matrix Factorization. NMF can extract characteristic information from hyperspectral images using the non-negative property of the data. However, a lack of sufficient information/constraints can lead to wrong interpretations. Thus, additional information can be used to increase the accuracy of the characterization and, consequently, of the interpretations.
In this work, MB-NMF is used instead of NMF because the multi-block structure enables the use of the correspondence constraint, which leads to uniqueness. Several known and unknown data blocks were fused in the first step. Then the fused (multi-block) data were analyzed by MB-NMF under the correspondence among the different chemical species constraint. Besides accurate characterization, this also tackles possible collinearity among the concentration contribution maps. Another challenge of data unmixing is the determination of the number of components; analyzing the fused data instead of a single data set resolves this issue, since for online plastic sorting the number of components is always fixed by the number of known data blocks. MB-NMF can characterize multi-material objects that are partially made of a new material/polymer not included in the known data blocks. In this case, the known part of the object is still predicted accurately due to the unmixing, and the object is thus sorted correctly.
## Acknowledgment
This project is co-funded by TKI-E&I with the supplementary grant "TKI- Toeslag" for Top consortia for Knowledge and Innovation (TKI's) of the Ministry of Economic Affairs and Climate Policy. We thank all partners in the project "Towards improved circularity of polyolefin-based packaging", managed by ISPT and DPI in the Netherlands. It was partly funded by the Perfect Sorting Consortium, a consortium that develops AI technology to enable the intended use of sorting for the recycling of packaging material. The members are NTCP, Danone, Colgate-Palmolive, Ferrero, LVMH, Mars, Michelin, Nestle, Procter & Gamble, PepsiCo, Ghent University, and Radboud University.
|
2310.07121 | Motion Vector-Domain Video Steganalysis Exploiting Skipped Macroblocks | Video steganography has the potential to be used to convey illegal
information, and video steganalysis is a vital tool to detect the presence of
this illicit act. Currently, all the motion vector (MV)-based video
steganalysis algorithms extract feature sets directly on the MVs, but ignore
that the steganographic operation may perturb the statistical distribution of other
video encoding elements, such as the skipped macroblocks (no direct MVs). This
paper proposes a novel 11-dimensional feature set to detect MV-based video
steganography based on the above observation. The proposed feature is extracted
based on the skipped macroblocks by recompression calibration. Specifically,
the feature consists of two components. The first is the probability
distribution of motion vector prediction (MVP) difference, and the second is
the probability distribution of partition state transfer. Extensive experiments
on different conditions demonstrate that the proposed feature set achieves good
detection accuracy, especially at lower embedding capacities. In addition, the
loss of detection performance caused by recompression calibration using
mismatched quantization parameters (QP) is within the acceptable range, so the
proposed method can be used in practical scenarios. | Jun Li, Minqing Zhang, Ke Niu, Yingnan Zhang, Xiaoyuan Yang | 2023-10-11T01:51:19Z | http://arxiv.org/abs/2310.07121v1 | # Motion Vector-Domain Video Steganalysis
###### Abstract
Video steganography has the potential to be used to convey illegal information, and video steganalysis is a vital tool to detect the presence of this illicit act. Currently, all the motion vector (MV)-based video steganalysis algorithms extract feature sets directly on the MVs, but ignore that the steganographic operation may perturb the statistical distribution of other video encoding elements, such as the skipped macroblocks (which carry no direct MVs). Based on this observation, this paper proposes a novel 11-dimensional feature set to detect MV-based video steganography. The proposed feature is extracted from the skipped macroblocks by recompression calibration. Specifically, the feature consists of two components. The first is the probability distribution of the motion vector prediction (MVP) difference, and the second is the probability distribution of the partition state transfer. Extensive experiments under different conditions demonstrate that the proposed feature set achieves good detection accuracy, especially at lower embedding capacities. In addition, the loss of detection performance caused by recompression calibration using mismatched quantization parameters (QP) is within an acceptable range, so the proposed method can be used in practical scenarios.
Video Steganography, Video Steganalysis, Skipped Macroblocks, Motion Vector(MV), Motion Vector Prediction(MVP), Calibration.
## 1 Introduction
With the rapid development of information technology, steganology has become an emerging discipline in the field of information security, comprising the two opposing directions of steganography and steganalysis [1, 2]. Steganography conceals the act of communication by hiding secret information in common media covers, including image, text, video and audio. The purpose of steganalysis is to detect the presence of secret information in common media through statistical analysis. Image steganography has been a hot research topic for the past two decades. However, with the popularity of video media on the Internet, research on video steganography has received more and more attention from scholars in recent years [3, 4]. Due to its complex coding rules, video offers richer covers suitable for information embedding than image, text, and audio, mainly intra-frame prediction patterns [5], inter-frame prediction patterns [6], MVs [7], transformation coefficients [8], entropy coding coefficients [9], etc. Among these embedding domains, MV-based steganography has a larger embedding capacity due to the large number of MVs. Moreover, MV-based steganography algorithms are closely connected with the video coding process: the steganographic perturbations on the MVs are usually handled automatically by the subsequent coding process, so the visual quality of the stego video is less affected. For these reasons, MV-based video steganography and steganalysis techniques have been the focus of researchers' attention.
The development process of MV-based video steganography can be divided into three stages. The first stage is the traditional steganography method, including the MV amplitude-based [7] and phase-based [10] steganography methods. The basic principle of this type of method is to select a specific MV by setting a rule and then directly modify the MV. The second stage mainly uses coding techniques [11] to reduce the number of MV modifications and improve the algorithm's performance [12]. Inspired by the framework of minimizing embedding distortion [13] in image steganography, the third stage of MV-based video steganography is mainly based on the minimizing embedding distortion method, which is the mainstream framework of the whole steganography technology. The basic idea is to minimize the overall distortion by designing a distortion function for covers and then combining it with coding methods such as STC [13], which greatly improves the security of steganography algorithms. According to the design perspective of distortion function, the third stage of video steganography can be subdivided into complexity-based methods [14], local optimality-based methods [15, 16, 17], and multiple factors-based methods [18, 19, 20].
Video steganalysis is the adversary of video steganography, whose purpose is to detect whether video media contains secret information through statistical analysis. Due to the complexity of video coding, MV-based video steganography perturbs different types of coding parameters of the cover video stream [21]. MV-based video steganalysis can be divided into five categories according to the perspective of feature extraction. The first category comprises methods based on the spatio-temporal statistical properties of MVs [22, 23], because there is a strong correlation between MVs, similar to that between pixels and DCT coefficients in images. The second category comprises methods based on MV calibration [24, 25], because stego MVs tend to recover their original values after calibration. The third category is based on the local optimality [26, 27, 28] of MVs: because the MV is the output of a locally optimal process in the rate-distortion sense, the steganographic operation will likely destroy this local optimality. The fourth category comprises steganalysis algorithms based on the fact that the MVs
of sub-blocks in a macroblock are usually inconsistent [29]. These methods currently have the best performance and can simultaneously detect steganography methods based on the inter-frame prediction pattern and MV domains. The fifth category comprises steganalysis methods based on convolutional neural networks for the MV domain [30], which are at an initial research stage.

From the above research status, all MV-based video steganalysis algorithms extract features directly on the MVs because MV-based steganography takes the MVs as the covers. This seems reasonable but ignores that video compression coding is a closely interconnected process. Modifications to the MVs cause perturbations to their own statistical properties and may lead to anomalies in the statistical characteristics of other coding parameters. For example, the MV Consistency steganalysis feature proposed by Zhai et al. [29] can effectively detect inter-frame prediction pattern domain steganography even though the features are extracted from the MV domain. Taking the H.264/AVC standard [31, 32] as an example, the macroblocks of P-frames are mainly P blocks (inter), I blocks (intra), and P-Skip macroblocks. Each P block contains a set of MVs (including horizontal and vertical components), which point to the optimal reference block in a reference frame. I blocks are encoded using the intra-prediction mode, but their number in P-frames is small. The P-Skip macroblocks do not directly contain MVs; their optimal reference block is determined by the MVP. The encoder calculates the P-Skip macroblock's MVP based on the MVs of its three encoded neighborhood blocks. MV-based steganography directly modifies the MVs of the P blocks, which changes the statistical features of the MVs; therefore, feature extraction in the MV domain can detect MV-based steganography. Although P-Skip macroblocks have no MVs directly usable for steganographic embedding, their MVPs are determined by the MVs of their neighborhoods. If the encoded blocks in the neighborhood around a P-Skip macroblock are modified, its MVP may also be modified passively, so the best matching block corresponding to the P-Skip macroblock changes from optimal to non-optimal. In this paper, we find that the MVP of the P-Skip macroblock and the distribution state of the P-Skip macroblocks are significantly impacted by message embedding. However, all current steganalysis feature sets for MV-based steganography ignore the perturbations suffered by the P-Skip macroblocks. Based on this observation, this paper uses the MVP and partition state distribution information of the P-Skip macroblocks to distinguish cover videos from stego videos.
From the above steganalysis research status, all MV-based video steganalysis algorithms extract features directly on the MVs because MV-based steganography takes the MVs as the covers. It seems reasonable but ignores that video compression coding is a closely interconnected process. Modifications to the MVs cause perturbations to their own statistical properties and may lead to anomalies in the statistical characteristics of other coding parameters. For example, the MV Consistency steganalysis feature proposed by Zhai et al. [29] can effectively detect inter-frame prediction pattern domain steganography even though the features are extracted from the MV domain. Take the H.264/AVC standard [31, 32] as an example, the P-frames' macroblocks are mainly P blocks(inter), I blocks(intra), and P-Skip macroblocks. Each P block contains a set of MVs (including horizontal and vertical components), which are used to point to the optimal reference block in a reference frame. I blocks are encoded using the intra-prediction mode, but their number in P frames is small. The P-Skip macroblocks do not directly contain MVs; their optimal reference block is determined by the MVP. The encoder calculates the P-Skip macroblock's MVP based on the MVs of its three encoded neighborhood blocks. MV-based steganography directly modifies the MVs of the P blocks, which leads to changes in the statistical features of the MVs. Therefore, feature extraction in the MV domain can detect MV-based steganography. Although P-Skip macroblocks do not have MVs directly used for steganographic embedding, their MVPs are determined by the MVs of their neighborhoods. If the encoded blocks in the neighborhood around the P-Skip macroblocks are modified, their MVPs may also be modified passively. Thus the best matching block corresponding to the P-Skip macroblocks will change from optimal to non-optimal. In this paper, we find that the MVP of the P-Skip macroblock and the distribution state of the P-Skip macroblocks will be significantly impacted by message embedding. However, all current steganalysis feature sets of MV-based steganography ignore the perturbations suffered by the P-Skip macroblocks. Based on this observation, this paper uses the MVP and partition state distribution information of the P-Skip macroblocks to distinguish the cover videos from the stego videos.
The main contributions of this paper can be summarized as follows.
1. For the first time, the coding information based on skipped macroblocks is used to construct MV domain video steganalysis feature, which enriches the methods to extract video steganalysis features.
2. Through recompression calibration, an 11-dimensional feature set exploiting P-Skip macroblocks is proposed. The proposed feature has two components: a 5-dimensional sub-feature set based on MVP reversion and a 6-dimensional sub-feature set based on partition state transfer probability.
3. The experimental results show that the proposed steganalysis feature set can effectively detect the state-of-the-art MV-based steganography algorithms, especially in low embedding capacity.
The rest of this paper is organized as follows. The second part gives the motivation for this paper. The basic principle of the skipped partition in video inter-frame coding is introduced, and the effect of MV-based steganography on skipped macroblocks is also analyzed. The third part presents the construction process of the steganalysis feature based on the skipped macroblock. The experimental results and analysis are given in the fourth part. Finally, the paper is concluded.
## 2 Motivation
Steganographic embedding inevitably causes perturbations to the original covers. The most important thing for steganalysis is finding relevant evidence to distinguish cover from stego videos. The motivation for our proposed steganalysis method based on skipped macroblocks is presented as follows.
### Skipped Macroblock in Inter-frame Video Coding
The current mainstream international video coding standards are H.264/AVC, H.265/HEVC [33], and H.266/VVC [34]. Although new coding standards keep advancing, there has been no revolutionary change: these compression standards all use a hybrid coding framework, which usually contains techniques such as prediction, transformation, quantization, entropy coding, intra-frame prediction, inter-frame prediction, and loop filtering. H.264/AVC is still the most widely used compression standard, and widespread adoption of HEVC or VVC will take a long time. In video steganography especially, most current research focuses on steganography and steganalysis based on the H.264/AVC standard. Therefore, this paper also uses the H.264/AVC compression standard. It is worth noting, however, that since the mainstream coding standards all use the hybrid coding framework, the scheme proposed in this paper can be applied to newer video coding standards with appropriate modifications. The hybrid coding framework uses intra-frame prediction, inter-frame prediction, and entropy coding to reduce the spatial, temporal, and statistical redundancy in videos, respectively. Among them, temporal redundancy is the largest because natural video consists of consecutive frames, and adjacent frames usually contain the same content, especially in scenarios such as surveillance and conferences. Therefore, the inter-frame predictive coding process plays an important role in the coding framework.
In the inter-frame coding process of H.264/AVC, the frame \(F\) is first divided into non-overlapping macroblocks of 16\(\times\)16 pixels. Depending on the complexity of the macroblocks, the inter-frame encoded luminance blocks are divided into 16\(\times\)16, 16\(\times\)8, 8\(\times\)16, and 8\(\times\)8 partitions (the last of which can be further divided into 8\(\times\)4, 4\(\times\)8, and 4\(\times\)4). For a block \(B_{m\times n}\) of size \(m\times n\) in a macroblock, the encoder uses the motion estimation (ME) algorithm to find the most suitable reference block \(T^{mv(h,v)}_{m\times n}\) in the reference frame based on the Lagrangian rate-distortion optimization model. Here \(mv(h,v)\) is the MV obtained by ME, representing the position offset between the encoded block and the reference block, with horizontal component \(h\) and vertical component \(v\). The pixel residuals between the current encoded block and the reference block are calculated as follows:
\[D^{mv(h,v)}_{m\times n}=B_{m\times n}-T^{mv(h,v)}_{m\times n}. \tag{1}\]
Then these residual values are transformed by the Discrete Cosine Transform (DCT), quantized, and entropy coded into the output video stream.
In addition, due to the strong correlation between adjacent encoding blocks, the MV of the current encoding block can be predicted from the MVs of the encoded adjacent blocks. Fig. 1 shows the MVP sketch map of the H.264/AVC standard. E refers to the current encoding macroblock or sub-block. Blocks A, B, and C are the three neighborhood blocks to the left, top, and top right of E, respectively. If there is more than one block on the left side of E, the uppermost one is A; if there is more than one block on the top, the leftmost one is B. Fig. 1(a) shows the case where all of a macroblock's partitions have the same size, and Fig. 1(b) shows the case where the partition sizes differ. The MVP of the encoding block E is determined by the median of the MVs of the encoded blocks A, B, and C [31]. The motion vector difference (MVD), whose value is the difference between the MV and the MVP, is output to the stream after entropy coding (Exponential-Golomb coding):
\[MVD=MV-MVP. \tag{2}\]
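The median prediction and the MVD of formula (2) can be sketched as follows. This is a simplified illustration of the component-wise median rule: it ignores the special cases of the full standard (unavailable neighbours, 16\(\times\)8/8\(\times\)16 partitions, single-candidate fallbacks).

```python
def mvp_median(mv_a, mv_b, mv_c):
    """Median prediction of block E's MVP from the MVs of neighbours
    A (left), B (top), and C (top-right); each MV is a tuple (h, v)."""
    def median3(x, y, z):
        # Median of three values via the sum-minus-extremes identity.
        return x + y + z - min(x, y, z) - max(x, y, z)
    return (median3(mv_a[0], mv_b[0], mv_c[0]),
            median3(mv_a[1], mv_b[1], mv_c[1]))

def mvd(mv, mvp):
    """Motion vector difference of formula (2), entropy-coded in the stream."""
    return (mv[0] - mvp[0], mv[1] - mvp[1])
```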
Both the pixel residuals and the MVD are output to the stream for a P block in a P-frame. However, there are special P-Skip macroblocks in P-frames. In the H.264/AVC standard, a macroblock is encoded as a P-Skip macroblock when [35]: 1) the optimal motion compensation block size is 16\(\times\)16; 2) the reference frame is the previous frame; 3) the MV and the MVP are the same; and 4) the macroblock's residuals are all zero after transformation and quantization. For a P-Skip macroblock, no information about the macroblock needs to be transmitted at the encoding side except for the very few bits that identify it as skipped. On the decoding side, the decoder first derives the MVP of this macroblock according to the prediction algorithm, and since the MVD is 0, the MV obtained on the decoding side equals the MVP. The decoder can then recover the pixel values of the macroblock from the reconstructed pixels of the corresponding block located by the MV in the reference frame. In fact, P-frames contain a large number of P-Skip macroblocks. We used an x264 encoder [36] to compress _foreman.yuv_, a standard test sequence [37] with a resolution of 352\(\times\)288. The percentage of P-Skip macroblocks in the first P-frame with QPs of 25 and 35 is 33.8% and 59.1%, respectively. The proportion of P-Skip macroblocks in a normal video stream is thus very high, and the encoder significantly improves compression efficiency because of the small number of bits required for P-Skip macroblock coding.
### MV-based Video Steganography and Its Effect on Skipped Macroblocks
In MV-based video steganography, the MV \(mv(h,v)\) of a P block in a P-frame is used as the original cover, and the cover is modified by the embedding algorithm \(E\) to:
\[mv(h^{\prime},v^{\prime})=E(mv(h,v))=mv(h\pm\Delta h,v\pm\Delta v), \tag{3}\]
where \(\Delta h\) and \(\Delta v\) are 0 or positive integers representing the modification amplitude, usually no more than 1. Obviously, after changing \(mv(h,v)\) to \(mv(h^{\prime},v^{\prime})\), the reference block \(T^{mv(h,v)}_{m,n}\) in formula (1) changes to \(T^{mv(h^{\prime},v^{\prime})}_{m,n}\), which affects the whole subsequent encoding process. Therefore, the embedding process inevitably affects the original statistical properties of the MVs. Current MV-based video steganalysis algorithms all extract features based on these statistical differences of MVs. In addition to the direct effect on the MVs of P blocks, steganographic embedding indirectly affects the MVPs of P-Skip macroblocks. Taking Fig. 1(a) as an example, if the current encoding block E is a P-Skip macroblock, there is no direct MV for this block, and thus the steganography algorithm has no direct effect on it. However, since the MVP of E is determined by the median of the MVs of blocks A, B, and C, if the MVs of blocks A, B, and C are modified during embedding, the MVP of E may also be perturbed, so the reference block of E becomes non-optimal. Table I shows two examples in which the MVs of A, B, and C are perturbed by steganography. For case 1, the optimal MVP of P-Skip macroblock E changes from (1, 2) to (2, 3). For case 2, the optimal MVP changes from (14, 7) to (13, 6). The optimal matching blocks of these coding blocks have thus been deviated, providing an opening for steganalysis.
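Reusing the mvp_median helper sketched above, a tiny demonstration of this indirect effect is given below. The neighbour MVs are hypothetical values chosen for illustration (they are not the actual Table I inputs), but they reproduce the kind of shift reported for case 1: a plus-or-minus-one change on a single neighbour moves the median MVP of the P-Skip block.

```python
# Hypothetical neighbour MVs (not the actual Table I inputs).
A, B, C = (1, 2), (3, 4), (0, 1)
print(mvp_median(A, B, C))        # (1, 2): MVP of P-Skip block E before embedding
A_stego = (A[0] + 1, A[1] + 1)    # +/-1 steganographic perturbation of neighbour A
print(mvp_median(A_stego, B, C))  # (2, 3): E's reference block is no longer optimal
```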
Fig. 1: The MVP sketch map in H.264/AVC; the numbers are the sizes of the blocks. (a) Current block E and adjacent blocks of the same size. (b) Current block E and adjacent blocks of different sizes.
To illustrate steganography's effect on the P-Skip macroblocks, we embedded a message into the _foreman.264_ standard sequence using an MV-based steganography algorithm named PCAMV (Principles on Cost Assignment for Motion Vector) [19]. Table II shows the number of modified MVPs and their proportions in the first P-frame for different QPs and embedding capacities (bits per frame, bpf). From the data in the table, we can see that although the steganographic operation does not directly modify the P-Skip macroblocks' MVPs, some of the MVPs are still modified. For example, the MVPs of an average of 23.6% of the P-Skip macroblocks are perturbed at an embedding capacity of 200 bpf and a QP of 25.
### Recompression Calibration
The idea of recompression calibration [38] comes from JPEG image steganalysis, where the coding parameters of JPEG images can be restored to their original state after recompression. For a stego video, it is likewise possible to recover the original MVs through calibration, which provides evidence for determining whether the video contains a secret message. Recompression, MV correlation, and MV coding optimality are the primary calibration methods used in video steganalysis. Cao et al. [24] found that the MVs of a stego video calibrated with the same parameters exhibit a reversion to their original values. They therefore designed the 15-dimensional Motion Vector Reversion Based (MVRB) feature based on the differences between MVs and prediction errors before and after calibration. Deng et al. [39] proposed a calibration-based steganalysis feature from the perspective of neighboring blocks, but it suffers from poor applicability. To solve the problem of coding parameter mismatch, Wang et al. [40] first collect various types of invariant coding parameters (e.g., size, frame rate, etc.) and then find the best ME algorithm by searching. This method improves performance but has high computational complexity. To construct steganalysis features applicable to various coding standards, Zhai et al. [25] proposed a 124-dimensional joint calibration feature covering three aspects: the neighborhood optimality of MVs, the MV residual distribution, and MV calibration.
From the above literature, existing calibration-based video steganalysis algorithms for the MV domain aim to recover the original MVs of P blocks but ignore that the MVPs of the P-Skip macroblocks can also be predicted by calibration. In this paper, we use the calibration method to recover various statistical properties of P-Skip macroblocks.
Based on the above analysis, the following conclusions can be drawn, which are the main motivations for designing steganalysis features from the perspective of skipped macroblocks.
1. Skipped macroblocks are common in normal video coding, and their number increases with the increase of QP.
2. Although skipped macroblocks do not have MVs available for steganographic embedding, the steganographic operation may impact their MVPs, thus affecting the optimality of the skipped macroblocks.
3. Recompression calibration is an effective method to recover coding parameters. We use recompression calibration to recover various statistical properties of P-Skip macroblocks for constructing steganalysis features.
## 3 The Proposed Steganalysis Feature Set Exploiting Skipped Macroblocks
This section constructs the steganalysis feature set based on skipped macroblocks. Firstly, through recompression calibration, a 5-dimensional sub-feature set based on the MVP reversion of P-Skip macroblocks is constructed. Then, a 6-dimensional partition state transfer probability sub-feature set is designed by comparing P-Skip macroblocks before and after recompression calibration. Finally, we merge these two sub-feature sets into an 11-dimensional feature set for MV-based steganalysis. Although skipped macroblocks also exist in B-frames, for simplicity this paper mainly discusses P-Skip coding blocks in P-frames.
### The MVP's Reversion Feature of P-Skip Macroblocks by Recompression Calibration
#### 3.1.1 The effect of steganography on the MVPs of P-Skip macroblocks
Section _2.2_ described that after message embedding on the MVs of the P blocks, a portion of the MVPs of the P-Skip macroblocks is also perturbed. For a P-Skip encoding block in a P-frame, let its MVPs before and after calibration be \(mvp(h,v)\) and \(mvp(h^{\prime},v^{\prime})\), respectively. We define a differential operator on the MVP of the P-Skip macroblock in formula (4), which describes the difference between the MVPs before and after calibration:
\[diff=|h-h^{\prime}|+|v-v^{\prime}|. \tag{4}\]
Fig. 2 shows the statistical distribution of _diff_ in the cover and stego videos. Only macroblocks that are P-Skip both before and after calibration are counted. The stego video is embedded using the PCAMV algorithm [19] with an embedding capacity of 100 bpf. The horizontal coordinate in Fig. 2 is the value of _diff_, and the vertical coordinate is the empirical probability of its occurrence among all P-Skip macroblocks. As can be seen from the figure, firstly, the difference between MVPs in the cover video before and after calibration is mainly 0 (85% and 83.1% for QPs of 25 and 35, respectively), while the number of P-Skip macroblocks with a difference exceeding 2 is very small. This indicates that the recompression calibration operation accurately recreates the MVPs of the P-Skip macroblocks. Secondly, regardless of whether the QP is 25 or 35, the probability of P-Skip macroblocks with a difference of zero decreases significantly in the stego video compared to the cover video, while the probability of P-Skip macroblocks with a difference of one increases markedly. The probability of P-Skip macroblocks with a difference greater than one also increases to some extent. This is because steganography mainly performs plus-or-minus-one operations on the MVs, which changes the MVPs of the P-Skip macroblocks in the stego video: the most direct effect is that the probability of blocks with a difference of zero decreases while the probability of blocks with a difference of one increases. These data illustrate that statistical features based on the MVP difference can distinguish cover videos from stego videos.
#### 3.1.2 Feature construction based on MVP reversion of P-Skip macroblocks
During feature extraction, the video sequence is first calibrated by recompression. Then, consecutive windows of N P-frames are used for feature extraction. Suppose a total of \(n\) macroblocks retain the P-Skip partition both before and after calibration, and denote these P-Skip macroblocks as \(\{B_{i}\}_{i=1}^{n}\). Based on the above discussion, we can design the first class of feature set based on MVP reversion, formally expressed as follows.
\[f_{1}(k)=\Pr(diff\!=\!k)=\frac{\sum\limits_{i=1}^{n}\delta(diff_{B_{i}},k)}{n}, \quad(k=0,1,2,3), \tag{5}\]
\[f_{1}(4)=\Pr(diff\geq 4)=\frac{\sum\limits_{i=1}^{n}\sum\limits_{l\geq 4}\delta(diff_{B_{i}},l)}{n}, \tag{6}\]
where \(\delta(x,y)\!=\!1\) when \(x\) is equal to \(y\), and \(\delta(x,y)\!=\!0\) otherwise. The above feature set represents the distribution of the MVP difference before and after calibration.
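A minimal NumPy sketch of formulas (4)-(6) is given below, assuming the MVPs of the \(n\) macroblocks that stay P-Skip through calibration have already been collected into two (n, 2) arrays.

```python
import numpy as np

def f1_features(mvp_before, mvp_after):
    """5-D MVP-reversion feature of formulas (5)-(6).

    mvp_before / mvp_after : (n, 2) arrays holding the MVPs of the n
    macroblocks that are P-Skip both before and after calibration.
    """
    d = np.abs(mvp_before - mvp_after).sum(axis=1)   # diff of formula (4)
    n = len(d)
    feats = [(d == k).sum() / n for k in range(4)]   # Pr(diff = 0, 1, 2, 3)
    feats.append((d >= 4).sum() / n)                 # Pr(diff >= 4)
    return np.array(feats)
```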
### The Partition State Transfer Probability Feature of P-Skip Macroblocks by Recompression Calibration
#### 3.2.1 The effect of steganography on the partition state transfer of P-Skip macroblocks
Recompression calibration can restore the state of the first compression with considerable probability, but not perfectly, for several reasons. Firstly, video encoding is a lossy compression process, and recompression after decoding causes some distortion. Secondly, the parameters used in the second compression cannot be kept exactly the same as those of the first, so mismatches exist between them. In the recompression calibration process, a coding block partitioned as P-Skip in the first compression will most likely remain P-Skip in the recompression, but it may also become another partition (such as a P block of 16\(\times\)16, 16\(\times\)8, 8\(\times\)8, etc.). Conversely, coding blocks not originally partitioned as P-Skip may also transfer to P-Skip. On the one hand, we found experimentally that more than 95% and 93% of the P-Skip macroblocks of the original compression keep the P-Skip partition in the second compression for the cover and stego videos, respectively, and the statistical difference between cover and stego videos is insignificant. This indicates that this property cannot provide a direct basis for steganalysis.
On the other hand, Fig. 3 shows the original partition distribution (in the first compression) of the blocks whose partition is P-Skip in the second compression. For example, in Fig. 3(a) with a QP of 25, for the cover video the partitions of these blocks in the first compression are mainly P-Skip and 16\(\times\)16, with proportions of 68.4% and 21.4%, respectively. For the stego video, the partitions are also mainly P-Skip and 16\(\times\)16, but their proportions change significantly, to 42.7% and 43.3%, respectively. The result indicates that in the stego video, fewer P-Skip macroblocks and more 16\(\times\)16 blocks are encoded as P-Skip macroblocks after calibration. The reason is that steganographic embedding modifies the MVs of blocks, which changes the optimal partitioning of the blocks. The situation for a QP of 35 (shown in Fig. 3(b)) is similar to that for a QP of 25. Therefore, steganography indirectly affects the partition distribution of blocks in P-frames, which can be used as a steganalysis feature.
#### 3.2.2 Feature construction based on partition state transfer of P-Skip macroblocks
Fig. 2: The MVP's difference distribution of P-Skip macroblocks by recompression calibration. (a) QP=25. (b) QP=35.

The video sequence is calibrated by recompression, and then consecutive windows of N P-frames are used for feature extraction. Let there be a total of \(m\) P-Skip macroblocks in the second compression, and denote these blocks as \(\{C_{j}\}_{j=1}^{m}\). Let the possible partitions of these blocks at the original compression be the set _partition_:
\[partition=\{\text{P-Skip},16\times 16,16\times 8,8\times 16,8\times 8,\text{else}\}. \tag{7}\]
Based on the above discussion, we can design a feature set using the probability of partition state transfer of the P-Skip macroblocks, formally expressed as follows.
\[f_{2}(k)=\Pr(partition(k-5))=\frac{\sum\limits_{j=1}^{m}\phi(C_{j},partition(k-5))}{m},\quad(k=5,6,\ldots,10), \tag{8}\]
where \(\phi(C,y)=1\) when block \(C\)'s partition is \(y\), and \(\phi(C,y)=0\) otherwise. For example, when \(k=5\), \(f_{2}(5)=\Pr(\text{P-Skip})\): the feature \(f_{2}(5)\) represents the probability that the original partition was P-Skip among the \(m\) P-Skip macroblocks of the second compression. The feature set thus has six dimensions, corresponding to the probabilities that a block (whose partition is P-Skip in the second compression) was P-Skip, 16\(\times\)16, 16\(\times\)8, 8\(\times\)16, 8\(\times\)8, or something else in the first compression.
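The corresponding sketch for formula (8) follows, assuming the first-compression partition labels of the \(m\) blocks (those coded as P-Skip in the calibration compression) have been collected as strings; labels outside the set are folded into "else".

```python
import numpy as np

PARTITIONS = ["P-Skip", "16x16", "16x8", "8x16", "8x8", "else"]

def f2_features(orig_partitions):
    """6-D partition state-transfer feature of formula (8).

    orig_partitions : first-compression partition labels of the m blocks
    that are coded as P-Skip in the second (calibration) compression.
    """
    m = len(orig_partitions)
    counts = {p: 0 for p in PARTITIONS}
    for p in orig_partitions:
        counts[p if p in counts else "else"] += 1
    return np.array([counts[p] / m for p in PARTITIONS])
```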
### Feature Merging
We perform recompression calibration of the video to investigate the changes in the various statistical features of P-Skip macroblocks before and after message embedding. The flow chart is shown in Fig. 4. In step 1, the compressed H.264/AVC stream is decompressed to obtain the decoded YUV file. The relevant coding parameters are extracted during the decoding process, including the frame number, resolution, group of pictures (GOP) structure, QP, bit rate, macroblock partitions, and MVs. In step 2, the spatial YUV file is encoded a second time using the parameters of the first encoding, yielding the recompressed video stream. In step 3, the recompressed video stream is decoded to obtain the macroblock partitions and MVs. Finally, in step 4, the statistical information in the two streams is analyzed to extract the feature set that can effectively distinguish cover from stego videos. The corresponding procedure is shown in Algorithm 1.
```
1: Input: Original video stream \(\mathbf{S}\).
2: Output: An 11-dimensional SMCF steganalysis feature set \(f\).
3: Steps:
4: Decode the original video stream \(\mathbf{S}\) to obtain the spatial video sequence and parameter set \(P_{\mathbf{S}}\);
5: Recode the spatial sequence with \(P_{\mathbf{S}}\) to obtain the recompressed video stream \(\mathbf{S^{\prime}}\);
6: Take non-overlapping consecutive N P-frames in \(\mathbf{S}\) and \(\mathbf{S^{\prime}}\) as one feature extraction unit. For each unit:
7: for \(i=1,\ldots,n\) do
8:    For the P-Skip macroblock \(B_{i}\) in \(\mathbf{S}\), find the corresponding block in \(\mathbf{S^{\prime}}\), and calculate the feature \(f_{1}\) according to formulas (5) and (6);
9: end for
10: for \(j=1,\ldots,m\) do
11:    For the P-Skip macroblock \(C_{j}\) in \(\mathbf{S^{\prime}}\), find the corresponding block in \(\mathbf{S}\), and calculate the feature \(f_{2}\) according to formula (8);
12: end for
13: Merge features \(f_{1}\) and \(f_{2}\) to obtain the 11-dimensional feature set \(f\).
```
**Algorithm 1** The extraction process of the proposed SMCF.
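Steps 1 and 2 of Fig. 4 can be scripted around the same FFmpeg and x264 tools used elsewhere in this paper. The sketch below is a simplified illustration: the flag values shown (resolution, frame rate, QP, GOP size) are placeholders that in practice must be filled with the parameter set \(P_{\mathbf{S}}\) recovered during the first decoding.

```python
import subprocess

def recompress_calibrate(in_stream, yuv="tmp.yuv", out_stream="recomp.264",
                         res="352x288", fps="30", qp="25", gop="6"):
    """Steps 1-2 of Fig. 4: decode to raw YUV, then re-encode with the
    parameters recovered from the first compression (placeholder values here)."""
    # Step 1: decompress the H.264/AVC stream to raw planar YUV 4:2:0.
    subprocess.run(["ffmpeg", "-y", "-i", in_stream,
                    "-f", "rawvideo", "-pix_fmt", "yuv420p", yuv], check=True)
    # Step 2: recompress with matched QP and GOP structure, no B-frames.
    subprocess.run(["x264", "--qp", qp, "--keyint", gop, "--bframes", "0",
                    "--input-res", res, "--fps", fps,
                    "-o", out_stream, yuv], check=True)
    return out_stream
```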
Based on recompression calibration, the proposed 11-dimensional steganalysis feature set consists of two sub-feature sets: a 5-dimensional MVP reversion feature and a 6-dimensional partition state transfer feature, both based on P-Skip macroblocks. Since the feature is implemented mainly through recompression calibration and P-Skip macroblocks, we name it SMCF (Skipped Macroblocks based Calibrated Feature).
## 4 Experiments and Analysis
In this section, the experimental settings are first introduced. Then, to evaluate the performance of the proposed scheme, we present experiments and analyses with different setups.
Fig. 3: The original partition distribution (in the first compression) of blocks (whose partition is P-Skip in the second compression). (a) QP=25. (b) QP=35.
### Experiments Setup
#### 4.1.1 Video database, H.264/AVC encoder, and decoder
As shown in Table III, we use two video databases in our experiments. DB1 contains 34 well-known standard test video sequences [37] at CIF resolution (352\(\times\)288); each video sequence is cut to a fixed length by selecting its first 240 frames (giving a total of 8160 frames for the experiments). DB2 contains 80 video sequences with different resolutions (from 416\(\times\)240 to 2560\(\times\)1600) downloaded from the Internet; each sequence is cut to a fixed length by selecting its first 100 frames (for a total of 8000 frames). All the video sequences in DB1 and DB2 are stored in uncompressed file format with YUV 4:2:0 color space.
The H.264/AVC encoder used in our experiments for video compression, recompression, and message embedding is the high-performance encoder x264 [36] with the main profile. Unless otherwise specified, the GOP type of x264 is set to IPPPPPP with a fixed size of six, leaving the other settings at their default values. All P-frames can be used for information embedding and feature extraction, and the sub-blocks in P-frames can be of variable size. The ME algorithm used in this paper is Hexagon-based Search (HEX) [41] with a search range of 16 pixels and quarter-pixel ME resolution. Three different QPs (15, 25, 35) are considered.
All decoding and steganalysis feature extraction are implemented on top of the well-known decoder FFmpeg [42] and run on a desktop computer with a 1.8 GHz Intel Core i7 CPU and 16 GB RAM. We use a sliding window of 5 P-frames as the basic feature extraction unit.
#### 4.1.2 Steganography methods
To evaluate the detection performance of video steganalysis in the MV domain, three state-of-the-art MV-based steganography methods are used for message embedding. The first is Yao et al.'s method [14] (denoted Tar1), which designs a distortion function based on the covariance matrix of MV residuals and inter-frame prediction errors. The second is Zhang et al.'s method [17] (denoted Tar2), which designs the distortion function from the perspective of the MV's local optimality. The third is Li et al.'s method [19] (denoted Tar3), based on cost assignment for MVs. We also used the traditional method of Aly [7] (denoted Tar4) to test applicability in B-frames. The embedding capacity bpnsmv (bits per non-skip motion vector) is set to 0.05, 0.1, 0.2, 0.3, and 0.4 in our experiments. All the steganography methods are likewise implemented on the x264 encoder.
#### 4.1.3 Competitor steganalysis methods
We compare our proposed method with several state-of-the-art steganalytic methods, including the AoSO (Adding or Subtracting One) feature set proposed by Wang et al. [26], the NPELO (Near-Perfect Estimation for Local Optimality) feature set proposed by Zhang et al. [27] from the perspective of local optimality, and the multi-domain MVC (Motion Vector Consistency) feature set proposed by Zhai et al. [29].

Fig. 4: The flow chart of recompression calibration for feature set extraction.
#### 4.1.4 Training and classification
We implement training and classification using a Gaussian-kernel SVM (support vector machine) [43], whose penalty factor \(c\) and kernel factor \(\gamma\) are determined by five-fold cross-validation. The detection performance is measured by the accuracy rate, defined as the ratio of correctly classified samples to the total number of samples. The final accuracy rate is averaged over ten random splits of the database; in each iteration, 60% of the cover-stego video pairs are randomly chosen for training and 40% for testing.
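A minimal sketch of this evaluation protocol, assuming scikit-learn as the SVM implementation (the paper cites [43] but does not name a toolkit) and random placeholder features standing in for the extracted SMCF vectors:

```python
import numpy as np
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVC

# Placeholder data: rows are 11-dimensional SMCF feature vectors,
# labels are 0 (cover) / 1 (stego).
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 11))
y = rng.integers(0, 2, size=400)

accuracies = []
for seed in range(10):  # ten random splits of the database
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, train_size=0.6, random_state=seed)
    # Five-fold cross-validation over the penalty factor C and the
    # Gaussian-kernel factor gamma.
    grid = GridSearchCV(
        SVC(kernel="rbf"),
        param_grid={"C": [2.0**k for k in range(-3, 6)],
                    "gamma": [2.0**k for k in range(-7, 2)]},
        cv=5)
    grid.fit(X_tr, y_tr)
    accuracies.append(grid.score(X_te, y_te))

print("mean accuracy over 10 splits:", np.mean(accuracies))
```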
### Steganalysis Performances
The detection accuracy against steganography is the most critical metric of steganalysis algorithms. Table IV shows the detection accuracy of the proposed algorithm SMCF against three steganography algorithms with different embedding capacities and QPs using database DB1.
Firstly, the detection accuracy of the proposed algorithm against the three steganographic algorithms is 72.83%, 74.88%, and 69.48% on average at the very low embedding capacity (0.05 bpnsmv). These are relatively high detection rates, showing the proposed algorithm's strong performance at low embedding rates. As the embedding capacity increases, the detection accuracy improves, but only slightly. The reason is that P-Skip macroblocks occur in clusters in P-frames, and once the MVP of one macroblock is perturbed, the neighboring macroblocks will also be perturbed with high probability; as the embedding capacity increases, the probability of the P-Skip macroblocks being perturbed therefore does not increase significantly, despite the increase in the number of perturbed MVs. Secondly, the detection accuracy of the proposed SMCF does not differ significantly across steganography algorithms. This is mainly because these steganography algorithms embed messages in P blocks, whereas the proposed method does not extract features directly from those MV-carrying blocks but from the MVPs of the indirectly affected P-Skip macroblocks, so it is not sensitive to the particular steganography algorithm. This also indicates that the features proposed in this paper are more adaptable. Finally, across QPs, the detection accuracy of the proposed feature is higher at a QP of 25 than at QPs of 15 and 35. At the smaller QP of 15, the macroblocks are divided more finely and fewer macroblocks are partitioned as P-Skip, so fewer P-Skip macroblocks are available for the proposed feature. At the larger QP of 35, the compression rate is higher: a large portion of macroblocks in P-frames are P-Skip macroblocks, the number of ordinary P blocks is small, and thus the number of MVs used for steganography is also small. At the same relative embedding capacity (bpnsmv), the MVs are then subject to less steganographic perturbation, so the detection performance is reduced compared to that at a QP of 25.
Fig. 5 shows the experimental comparison of SMCF, AoSO, NPELO, and MVC against the three steganography methods on database DB1. Overall, the proposed SMCF is significantly better than AoSO and NPELO under all conditions, indicating better adaptability and detection performance. As the embedding capacity increases, however, the detection accuracy of the three competing methods (AoSO, NPELO, and MVC) improves faster than that of SMCF. This is because all three methods extract features directly in the MV domain: as the embedding capacity increases, the steganographic perturbation of the MVs also increases, making the embedding easier to detect at large capacities.
The detection performance of the proposed SMCF is better than that of MVC at low embedding capacities (bpnsmv of 0.05, 0.1, and 0.2), but equal to or lower than that of MVC at high embedding capacities (bpnsmv of 0.3 and 0.4). Taking Fig. 5(b) as an example, the detection accuracy of SMCF exceeds that of MVC at embedding capacities below 0.3 bpnsmv, whereas MVC outperforms SMCF at 0.3 and 0.4 bpnsmv. This is because, when the embedding capacity is large, MVC better detects the perturbations introduced by the Tar2 [17] algorithm through the consistency of the MVs within macroblocks.
The detection accuracy of all four steganalysis methods is slightly lower at a QP of 35 than at a QP of 25, but the proposed SMCF feature is reduced the least. At a large QP (35), the macroblocks are mostly partitioned as P-Skip macroblocks and the number of ordinary P blocks is small, so the number of MVs available for steganography is also small; the MVs are therefore subject to less perturbation at the same relative embedding capacity. Consequently, the detection accuracy at high QP drops for features such as AoSO, NPELO, and MVC, which extract information directly in the MV domain. In contrast, the proposed SMCF is less affected by the change of QP because its features are not extracted directly from the MVs.
### Sub-feature Component Analysis
The proposed SMCF feature set consists of two components: the sub-feature based on the MVP reversion of P-Skip macroblocks (denoted as SMCF-part1) and the sub-feature based on the partition state transfer of P-Skip macroblocks (denoted as SMCF-part2). To analyze the impact of the two sub-features on the detection performance of the overall feature set, we use Tar3 as the embedding method and DB1 as the database; the experimental results are shown in Table V. The detection performance of SMCF is better than that of its subsets SMCF-part1 and SMCF-part2, which indicates that there is no obvious conflict between the two sub-features and that their combination effectively improves the overall detection ability. At a QP of 25, the detection accuracy of SMCF-part1 and SMCF-part2 is on average 15% and 5% lower than that of SMCF, respectively, indicating that SMCF-part2 plays the more important role. This implies that the steganographic operation has a greater impact on the partition state of the P-Skip macroblocks than on their MVPs.
The effectiveness of SMCF-part2 in this experiment also confirms, from another angle, that it is reasonable to extract features from Skip macroblocks: the current mainstream steganalysis methods, such as AoSO, NPELO, and MVC, extract MV information directly from non-skip macroblocks and therefore do not capture the partition state transfer of the recompressed Skip blocks (SMCF-part2).
### Steganalysis Performance in the Case of Recompression QP Mismatch
The steganalysis feature designed in this paper is implemented based on recompression calibration. The traditional MV-calibration-based video steganalysis [24, 40] does not apply to coding standards with variable-size macroblocks such as H.264/AVC, because the sub-block partition changes greatly after recompression and the MVs cannot be accurately located for steganalysis. The proposed scheme only considers the P-Skip macroblock partition and does not extract features from sub-blocks, so it is less affected by the recoding parameters. In addition, the coding parameters of the video can be estimated by parameter searching [40], which enables recovering the main parameters (size, frame rate, QP, etc.). Nevertheless, different compression parameters (mainly the QP) still impact the distribution of the P-Skip macroblocks.
Fig. 5: Accuracy rate of the proposed steganalysis against three steganography methods with different embedding capacities (in bpnsmv) and QPs using database DB1. (a) Tar1 [14], QP=25. (b) Tar2 [17], QP=25. (c) Tar3 [19], QP=25. (d) Tar1 [14], QP=35. (e) Tar2 [17], QP=35. (f) Tar3 [19], QP=35.
Fig. 6 shows the experimental results of recompression calibration using a QP different from the original one; the video database is DB1 and the embedding algorithm is Tar3. Taking Fig. 6(b) as an example, the QP used for the original cover and the stego video is 25. Besides 25, the recompression calibration employs QPs of 22 and 28, which are close to 25 (i.e., the error in estimating the QP is not too large). As the figure shows, the proposed feature achieves its highest detection performance when the recompression QP equals the QP of the first compression (QP=25). Although the detection performance decreases when the recompression QP (22 or 28) differs from the first-compression QP, the decrease is slight. This is because the proposed features mainly consist of the MVP reversion feature and the partition state transfer feature of the P-Skip macroblocks: although different QPs affect the number of P-Skip macroblocks (the bigger the QP, the larger the number of P-Skip macroblocks), they have little impact on their distribution. Therefore, the proposed features remain applicable even under a certain degree of QP mismatch.
### The Complexity Analysis of the Proposed Feature
This subsection compares the time required to extract the four steganalysis feature sets at different QPs. Table VI shows the dimensionality of each feature set and the average time needed to process one CIF video sequence (352\(\times\)288, 240 frames). As the table shows, the MVC feature is extracted the fastest, because it only collects the MVs within macroblocks or sub-blocks and computes their correlation, whereas both AoSO and NPELO must additionally traverse the neighborhood values of the MVs to evaluate their local optimality. The SMCF has the highest time complexity, because it is implemented based on recompression calibration and its running time is dominated by two video decoding operations and one video recompression operation. In addition, the AoSO, NPELO, and MVC features are extracted faster as the QP increases, because a larger QP means fewer MVs in the video stream and hence less data to process. The extraction time of the SMCF feature, however, varies little across QPs, because it does not extract features directly from the MVs of macroblocks or sub-blocks. In general, since the proposed scheme extracts features based on recompression, its computational complexity is high.
### Applicability to different video databases
To test the applicability of the proposed algorithm to videos with different resolutions and sources, this experiment is conducted on database DB2. Table VII shows the detection results of AoSO, NPELO, MVC, and the proposed SMCF against the steganographic algorithm Tar3 [19] with different QPs and embedding capacities. The detection performance of the proposed algorithm still outperforms the other three competing methods overall, showing that the proposed scheme applies well to different video databases. Comparing the data in Table VII with those in Fig. 5(c) and (f), the detection performance of all four steganalysis features does not differ significantly between DB1 and DB2, which indicates that the video resolution has only a small impact on the performance of steganalysis. The reason is that when steganography is performed at a fixed relative embedding capacity (bpnsmv), the distribution of motion vectors is sparser for high-resolution videos, but the absolute number of modified motion vectors is higher, so the steganalysis features are still able to capture these steganographic perturbations.
Fig. 6: Accuracy rate of the proposed method against Tar3 [19] when using a recompression QP different from the original one. (a) Original QP=15. (b) Original QP=25. (c) Original QP=35.
### Applicability to B-frames
To test the applicability of the proposed algorithm to B-frames, this experiment verifies its detection performance on the DB1 database. The GOP type of x264 is set to IPBBBPPBBBIPBBB..., i.e., a 9-frame GOP with one I-frame, two P-frames, and three B-frames between consecutive reference frames. As the steganography methods Tar1, Tar2, and Tar3 do not detail how to embed messages in B-frames, we use Tar4 as the embedding method. Since a B-Skip block in B-frames has two MVPs, the results of Eqs. (5) and (6) are averaged over the two reference lists when calculating the features. The data in Table VIII compare the performance of the steganalysis under different GOP structures. The experimental data show that the frame type has little effect on the performance of the proposed scheme, so we conclude that the proposed method can be applied to B-frames.
## 5 Conclusion
This paper analyzes skipped macroblocks in inter-frame coding using recompression calibration. First, the MVP of a skipped macroblock tends to return to its original value after recompression calibration, and the steganographic operation perturbs this tendency. Second, the steganographic operation also perturbs the distribution of the partition state transfer probability of skipped macroblocks before and after recompression calibration. Based on the above analysis, an 11-dimensional MV-domain video steganalysis method based on skipped macroblocks is proposed. The experimental results show that the proposed features achieve high detection accuracy against state-of-the-art MV-based steganography algorithms, especially at low embedding capacities. In addition, since the proposed scheme only considers P-Skip macroblocks and does not extract features from sub-blocks, it is less affected by the recoding parameters and remains effective in the case of QP mismatch. However, the computational complexity is high, since the proposed features are implemented based on recompression calibration (decoding and recompression operations are required).
Inter-coded frames are mainly composed of P blocks and skipped macroblocks. Due to space limitations, this paper only extracts features from the P-Skip macroblocks, without considering the perturbation of the MVs of the P blocks themselves. Therefore, to further improve the detection performance and applicability of the steganalysis algorithm, we will next study how to extract steganalysis features from both the skipped macroblocks and the P blocks.
|
2303.00776 | Positive intermediate Ricci curvature with maximal symmetry rank | Generalizing the foundational work of Grove and Searle, the second author
proved upper bounds on the ranks of isometry groups of closed Riemannian
manifolds with positive intermediate Ricci curvature and established some
topological rigidity results in the case of maximal symmetry rank and positive
second intermediate Ricci curvature. Here, we recover even stronger topological
rigidity, including results for higher intermediate Ricci curvatures and for
manifolds with nontrivial fundamental groups. | Lee Kennard, Lawrence Mouillé | 2023-03-01T19:05:13Z | http://arxiv.org/abs/2303.00776v2 | # Positive intermediate Ricci curvature with maximal symmetry rank
###### Abstract.
Generalizing the foundational work of Grove and Searle, the second author proved upper bounds on the ranks of isometry groups of closed Riemannian manifolds with positive intermediate Ricci curvature and established some topological rigidity results in the case of maximal symmetry rank and positive second intermediate Ricci curvature. Here, we recover even stronger topological rigidity, including results for higher intermediate Ricci curvatures and for manifolds with nontrivial fundamental groups.
2020 Mathematics Subject Classification: 53C20 (Primary), 57S15 (Secondary)
## 1. Introduction
The study of manifolds with lower curvature bounds goes back to the origins of Riemannian geometry. In dimensions at least \(3\), there are many notions of lower curvature bounds, two of the most common being positive sectional curvature and positive Ricci curvature. For sectional curvature, the classification of positively curved manifolds is a wide open problem. However, apart from the compact rank one symmetric spaces \(S^{n}\), \(\mathbb{C}\mathrm{P}^{n}\), \(\mathbb{H}\mathrm{P}^{n}\), \(\mathbb{O}\mathrm{P}^{2}\), the only dimensions that are known to admit other closed, simply connected manifolds with positive sectional curvature are \(6\), \(7\), \(12\), \(13\), and \(24\); see [21] and the references therein. In contrast, for the weaker condition of positive Ricci curvature, there are far more known examples, including sequences of examples in any fixed dimension \(\geq 4\) with unbounded total Betti numbers [10]; see also [20, 21, 22, 19] and references therein.
For positive sectional curvature, the Grove Symmetry Program has resulted in major classification results [13, 14, 15, 16, 17, 18], constructions of new examples of manifolds with lower curvature bounds [1, 1, 16, 17, 18], and discoveries of unexpected and fundamental connections between curvature and topology [23, 24]. The overarching goal of this program, initiated by Karsten Grove in the 1990s, is to classify positively curved spaces with large isometry groups. A foundational result in the program is Grove and Searle's maximal symmetry rank theorem: Any closed, connected, \(n\)-dimensional manifold with positive sectional curvature has symmetry rank (i.e. rank of the isometry group) bounded above by \(\lfloor\frac{n+1}{2}\rfloor\), and in the case of equality (i.e. maximal symmetry rank), the manifold must be diffeomorphic to \(S^{n}\), \(\mathbb{R}\mathrm{P}^{n}\), \(\mathbb{C}\mathrm{P}^{n/2}\), or a lens space [1]. Galaz-Garcia later strengthened this conclusion to an equivariant diffeomorphism classification [1].
Because of the success of the Grove Symmetry Program, it is natural to ask which obstructions to positive sectional curvature generalize to weaker curvature conditions (e.g.
positive Ricci curvature, non-negative sectional curvature, quasipositive curvature, almost positive curvature). However, it is typical that tools from the positive sectional curvature setting do not carry over to weaker curvature conditions. One exception was established by the second author for positive intermediate Ricci curvature.
**Definition**.: Given an \(n\)-dimensional Riemannian manifold \(M\) and \(k\in\{1,\dots,n-1\}\), we say \(M\) has _positive \(k^{th}\)-intermediate Ricci curvature (\(Ric_{k}>0\))_ if, for every set of orthonormal vectors \(x,y_{1},\dots,y_{k}\) tangent to \(M\), the sum of sectional curvatures \(\sum_{i}\mathrm{sec}(x,y_{i})\) is positive.1
Footnote 1: This notion of positive intermediate Ricci curvature should not be confused with \(k\)-positive Ricci curvature as defined in [11]; see also [10, 12].
Note \(M\) has \(\mathrm{Ric}_{k}>0\) if and only if, for every unit vector \(x\) tangent to \(M\), the sum of any \(k+1\) eigenvalues of the Jacobi (directional curvature) operator \(y\mapsto R(y,x)x\) is positive; see [12, Lemma 1.2]. Thus \(\mathrm{Ric}_{1}>0\) is equivalent to positive sectional curvature, \(\mathrm{Ric}_{n-1}>0\) is equivalent to positive Ricci curvature, and if \(\mathrm{Ric}_{k}>0\) for some \(k\), then \(\mathrm{Ric}_{\ell}>0\) for all \(\ell\geq k\). Because the condition \(\mathrm{Ric}_{k}>0\) is vacuous for dimensions \(n\leq k\), we use the convention that an assumption of \(\mathrm{Ric}_{k}>0\) implies \(n\geq k+1\).
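To illustrate this eigenvalue characterization numerically, here is a minimal sketch (not from the paper; the constant-curvature Jacobi operator below is only an illustrative model). It checks whether the sum of the \(k+1\) smallest eigenvalues of \(y\mapsto R(y,x)x\), restricted to the orthogonal complement of a unit vector \(x\), is positive:

```python
import numpy as np

def jacobi_operator(x):
    """Jacobi (directional curvature) operator y -> R(y, x)x, here in the
    model case of constant sectional curvature 1, where
    R(y, x)x = <x, x> y - <y, x> x.  (Illustrative choice only.)"""
    return np.dot(x, x) * np.eye(len(x)) - np.outer(x, x)

def ric_k_positive_at(x, k):
    """True if the sum of the k+1 smallest eigenvalues of the Jacobi
    operator, restricted to the orthogonal complement of x, is positive
    (the Ric_k > 0 condition at this point and direction)."""
    n = len(x)
    # Orthonormal basis whose first vector is parallel to x; the remaining
    # n-1 columns then span the orthogonal complement of x.
    q, _ = np.linalg.qr(np.column_stack([x, np.eye(n)]))
    basis = q[:, 1:]
    J = basis.T @ jacobi_operator(x) @ basis
    eigvals = np.sort(np.linalg.eigvalsh(J))
    return eigvals[: k + 1].sum() > 0

x = np.zeros(6); x[0] = 1.0          # unit tangent direction, n = 6
print(ric_k_positive_at(x, k=2))     # True: the round S^6 has Ric_2 > 0
```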
Several results from the setting of manifolds with sectional curvature lower bounds have been extended to intermediate Ricci curvature lower bounds. These include generalizations of the Synge theorem and Weinstein fixed point theorem [13], the Gromoll-Meyer theorem and Cheeger-Gromoll Soul theorem [14, 15], the quarter-pinched sphere theorem [14, 15, 16], and the Heintze-Karcher inequality [1]. Also, many comparison results have been established by Guijarro and Wilhelm [15, 16, 17], and examples and constructions can be found in [1, 2, 18, 19]. For a collection of publications and preprints concerning intermediate Ricci curvature, see [11].
In [11, 12], the second author shows that closed, connected, \(n\)-dimensional manifolds with \(\mathrm{Ric}_{2}>0\) have symmetry rank bounded above by \(\lfloor\frac{n+1}{2}\rfloor\), the same bound Grove and Searle established for positive sectional curvature. Furthermore, the author showed in [11] that odd-dimensional, closed, simply connected manifolds with \(\mathrm{Ric}_{2}>0\) and maximal symmetry rank are diffeomorphic to spheres.
In this article, we establish a classification of even-dimensional closed, simply connected manifolds with \(\mathrm{Ric}_{2}>0\) and maximal symmetry rank:
**Theorem A**.: _Let \(M\) be a closed, simply connected Riemannian manifold of dimension \(2n\) with \(\mathrm{Ric}_{2}>0\). If \(\mathsf{T}^{n}\) acts effectively by isometries on \(M\), then one of the following holds:_
1. \(M\) _is at least six-dimensional and is either diffeomorphic to_ \(S^{2n}\) _or homeomorphic to_ \(\mathbb{C}\mathrm{P}^{n}\)_,_
2. \(M\) _is six-dimensional and_ \(\chi(M)=0\)_, or_
3. \(M\) _is four-dimensional and is equivariantly diffeomorphic to a connected sum_ \(\#_{i=1}^{b}\mathbb{C}\mathrm{P}^{2}\) _for some_ \(b\geq 0\)
Since \(S^{2n}\) and \(\mathbb{C}\mathrm{P}^{n}\) admit metrics with positive sectional curvature and \(\mathsf{T}^{n}\) symmetry, Theorem A and the second author's result in odd dimensions may be viewed as providing a complete homeomorphism classification in dimensions at least seven of manifolds admitting metrics with \(\mathrm{Ric}_{2}>0\) and maximal symmetry rank. As our methods differ from those in Grove-Searle [10], we do not know whether it is possible to upgrade the rigidity to equivariant diffeomorphism as in the case of positive sectional curvature.
In dimension six, there is a metric on \(S^{3}\times S^{3}\) with \(\mathrm{Ric}_{2}>0\) and maximal symmetry rank; see [14, Example 2.3]. We remark that, if additionally \(M^{6}\) is \(2\)-connected, then the conclusion \(\chi(M^{6})=0\) in (2) is sufficient to imply that \(M^{6}\) is diffeomorphic to \(S^{3}\times S^{3}\) (see Remark 4.3 below). We do not know if other closed, simply connected \(6\)-manifolds with \(\chi(M)=0\) admit metrics with \(\mathrm{Ric}_{2}>0\). We remark that if the \(\mathsf{T}^{3}\)-action on \(M^{6}\) has a fixed point, then we prove in Proposition 4.1 below that \(M^{6}\) is diffeomorphic to either \(S^{6}\) or \(\mathbb{C}\mathrm{P}^{3}\).
In dimension four, Orlik and Raymond showed that a smooth, simply connected, closed four-manifold with \(\mathsf{T}^{2}\) symmetry is equivariantly diffeomorphic to a connected sum of copies of \(S^{2}\times S^{2}\), \(\mathbb{C}\mathrm{P}^{2}\), and \(\overline{\mathbb{C}\mathrm{P}^{2}}\) (i.e. \(\mathbb{C}\mathrm{P}^{2}\) with the opposite orientation). Theorem A shows that if additionally \(M^{4}\) admits a \(\mathsf{T}^{2}\)-invariant metric with \(\mathrm{Ric}_{2}>0\), then \(S^{2}\times S^{2}\) summands do not appear and moreover that all summands of \(\mathbb{C}\mathrm{P}^{2}\) come with the same orientation. However, we currently cannot rule out the cases \(b\geq 2\), and for these values, it is unknown whether any such manifold admits \(\mathrm{Ric}_{2}>0\), much less whether such a metric can be invariant under a \(\mathsf{T}^{2}\)-action. Finally, we note that though \(S^{2}\times S^{2}\) cannot admit a metric with \(\mathrm{Ric}_{2}>0\) and \(\mathsf{T}^{2}\)-symmetry, it does admit one with \(\mathsf{S}^{1}\)-symmetry; see [14, Example 2.3]. In fact, this \(\mathrm{Ric}_{2}>0\) metric on \(S^{2}\times S^{2}\) is invariant under a cohomogeneity one action by \(\mathsf{SO}(3)\).
We remark now on other known generalizations of Grove and Searle's maximal symmetry rank theorem. First, in an unpublished manuscript, Wilking proved that connected \(n\)-dimensional manifolds that have a point at which all sectional curvatures are positive must have symmetry rank bounded above by \(\lfloor\frac{n+1}{2}\rfloor\)[15]; for the proof, see [10, Theorem 1.3]. Galaz-Garcia then extended the Grove and Searle classification for maximal symmetry rank to manifolds with quasipositive curvature (sectional curvature non-negative everywhere and positive at a point) [11]. The second author established a generalized version of Wilking's symmetry rank bound for manifolds which have a point at which all intermediate Ricci curvatures are positive [14].
Second, for the case of positive weighted sectional curvature, the first author and Wylie proved the symmetry rank bound is the same as for positive sectional curvature, and they recover rigidity in the equality case up to homeomorphism; see [13].
Third, for non-negative sectional curvature, Galaz-Garcia and Searle conjectured a generalization of the maximal symmetry rank theorem [10], which was later reformulated and sharpened by Escher and Searle [16]. The present conjecture is that simply connected, closed \(n\)-dimensional manifolds with non-negative sectional curvature must have symmetry rank bounded above by \(\lfloor\frac{2n}{3}\rfloor\), and in the case of equality, must be equivariantly diffeomorphic to a product of spheres or a quotient thereof by a linear torus action. Galaz-Garcia and Searle proved this conjecture in dimensions up to \(6\)[10], and Escher and Searle showed that the conjectured symmetry rank bound holds up to dimension \(12\), and the classification up to dimension \(9\)[16]. With the added assumption that the maximal
torus action is isotropy-maximal, the above conjecture was shown by Escher and Searle to hold in all dimensions [10]. If instead the assumption of non-negative sectional curvature is replaced with rational ellipticity, which is expected to follow from non-negative sectional curvature by the Bott-Grove-Halperin ellipticity conjecture (see [1, 1, 11]), then the conjecture was established up to rational homotopy equivalence by Galaz-Garcia, Kerin, and Radeschi [1].
Finally, for the situation where the torus is replaced by an elementary \(p\)-group for some prime \(p\), Fang and Rong proved the optimal upper bound and obtained homeomorphism rigidity in the equality case for \(p\) larger than a constant depending only on the manifold dimension [10]. There are two reasonable analogues of this result for \(p=2\) (see [10] and [14, Theorems A and B]).
Our second main result is a rigidity statement for Riemannian manifolds with \(\operatorname{Ric}_{k}>0\) for larger values of \(k\). More precisely, given a closed connected \(n\)-dimensional Riemannian manifold \(M\), the second author proved that the symmetry rank of \(M^{n}\) is at most \(\left\lfloor\frac{n+k}{2}\right\rfloor-1\) if \(M^{n}\) has \(\operatorname{Ric}_{k}>0\) with \(k\geq 3\). This bound agrees with the classical bound of \(\left\lfloor\frac{n+1}{2}\right\rfloor\) for \(k=3\) and for \(k=4\) when \(n\) is odd. Since the condition \(\operatorname{Ric}_{k}>0\) grows weaker as \(k\) increases, the available tools also grow weaker for manifolds with \(\operatorname{Ric}_{k}>0\), and one should not expect to be able to prove an analogue of Theorem A in this setting without stronger hypotheses. We prove two results along these lines. The first is based on the model spaces of spheres and \(S^{3}\times S^{3}\).
**Theorem B**.: _Fix \(k\geq 3\), and assume \(M^{n}\) is a \((k-1)\)-connected, closed Riemannian manifold with \(n\neq 7\) if \(k=3\). If \(M^{n}\) has \(\operatorname{Ric}_{k}>0\) and admits an isometric \(\mathsf{T}^{r}\)-action with \(r=\left\lfloor\frac{n+k}{2}\right\rfloor-1\), then one of the following occurs:_
1. \(M\) _is diffeomorphic to_ \(S^{n}\) _and_ \(k\leq 4\)_, with equality only if_ \(n\) _is odd._
2. \(M\) _is diffeomorphic to_ \(S^{3}\times S^{3}\) _and_ \(k=3\)._
The conclusions in (1) and (2) are optimal in the sense that \(S^{n}\) and \(S^{3}\times S^{3}\) admit metrics with \(\operatorname{Ric}_{k}>0\) and maximal symmetry rank for all values of \(k\) shown. The second result for large values of \(k\) is modeled on complex projective space:
**Theorem C**.: _Fix \(k\geq 3\), and let \(M^{n}\) be a simply connected, closed Riemannian manifold. Assume further that \(M^{n}\) is an integral cohomology \(\mathbb{C}\mathrm{P}\) up to degree \(k+2\). If \(M\) has \(\operatorname{Ric}_{k}>0\) and admits an effective, isometric \(\mathsf{T}^{r}\)-action with \(r=\left\lfloor\frac{n+k}{2}\right\rfloor-1\), then \(n\) is even, \(M^{n}\) is homeomorphic to \(\mathbb{C}\mathrm{P}^{\frac{n}{2}}\), and \(k=3\)._
By the assumption on the cohomology of \(M\), we mean that \(H^{1}(M;\mathbb{Z})\cong 0\), \(H^{2}(M;\mathbb{Z})\cong\mathbb{Z}\), and the map \(H^{i}(M;\mathbb{Z})\to H^{i+2}(M;\mathbb{Z})\) induced by multiplication by a generator \(x\in H^{2}(M;\mathbb{Z})\) is surjective for \(0\leq i<k\) and injective for \(0<i\leq k\). As with Theorem A, we cannot obtain rigidity up to diffeomorphism, and we do not know whether any exotic \(\mathbb{C}\mathrm{P}^{\frac{n}{2}}\) admits \(\operatorname{Ric}_{3}>0\) and maximal symmetry rank.
Though Theorems B and C partially generalize Theorem A under stronger topological assumptions, we note that there are potentially many more examples of manifolds that satisfy \(\operatorname{Ric}_{k}>0\) with \(k\geq 3\) than those listed in the conclusions of these results. For
example, under the respective product metrics, \(S^{2}\times S^{2}\) has \(\mathrm{Ric}_{3}>0\) with \(\mathsf{T}^{2}\)-symmetry, \(S^{3}\times S^{2}\) has \(\mathrm{Ric}_{4}>0\) with \(\mathsf{T}^{3}\)-symmetry, and \(S^{3}\times S^{3}\) has \(\mathrm{Ric}_{4}>0\) and \(\mathsf{T}^{4}\)-symmetry.
Finally, we analyze the case of non-trivial fundamental group:
**Theorem D**.: _Let \(M^{n}\) be a closed, connected Riemannian manifold with \(\mathrm{Ric}_{2}>0\) and \(\mathsf{T}^{r}\) symmetry with \(r=\left\lfloor\frac{n+1}{2}\right\rfloor\). If \(\pi_{1}(M)\) is non-trivial, then one of the following occurs:_
1. \(M\) _is homotopy equivalent to_ \(\mathbb{R}\mathrm{P}^{n}\) _or a lens space, or_
2. \(M\) _has dimension six and, if additionally the universal cover is_ \(S^{3}\times S^{3}\)_, then_ \(\pi_{1}(M)\cong\mathbb{Z}_{\ell}\times\mathbb{Z}_{m}\) _for some_ \(\ell,m\geq 1\)_._
The standard models in (1) can already be realized with positive sectional curvature, and in the case of maximal symmetry rank and positive sectional curvature, Grove and Searle's result recovers rigidity up to diffeomorphism instead of just homotopy, and Galaz-Garcia later strengthened this rigidity to equivariant diffeomorphism. Also note that our result rules out the possibility that the universal cover is homeomorphic to \(\mathbb{C}\mathrm{P}^{\frac{n}{2}}\) when \(M\) is not simply connected. In addition, we remark that the fundamental groups as in (2) can be realized by the known metric on \(S^{3}\times S^{3}\) with \(\mathsf{T}^{3}\) symmetry and \(\mathrm{Ric}_{2}>0\). Indeed, the product of the Hopf actions on the \(S^{3}\) factors gives rise to a \(\mathsf{T}^{2}\)-subaction of the \(\mathsf{T}^{3}\)-action. Since this action is free, we obtain products of lens spaces \(S^{3}/\mathbb{Z}_{\ell}\times S^{3}/\mathbb{Z}_{m}\) as examples, as well as quotients \((S^{3}\times S^{3})/\mathbb{Z}_{\ell}\) by possibly diagonal actions, for example, \((S^{3}\times S^{3})/\{\pm(1,1)\}=\mathsf{SO}(4)\). We also remark that many other finite groups act freely on \(S^{3}\times S^{3}\), including all finite subgroups of \(\mathsf{Spin}(4)\cong S^{3}\times S^{3}\) and, more surprisingly, the two-fold product \(S_{3}\times S_{3}\) of the symmetric group on three letters (see Davis [11] and [12]). Finally, we note that Dominguez-Vazquez, Gonzalez-Alvaro, and Rodriguez-Vazquez have determined that the Wallach flag manifold \(W^{6}=\mathsf{SU}(3)/\mathsf{T}^{2}\) with the normal homogeneous metric has \(\mathrm{Ric}_{2}>0\)[DGR]. This metric has a free, isometric \(S_{3}\)-action and has \(\mathsf{T}^{2}\)-symmetry, which is not maximal for \(\mathrm{Ric}_{2}>0\) in dimension \(6\).
Regarding the non-simply connected case of Theorems B and C, the conclusions are the same as in Part (1) of Theorem D under the modification that the cohomology of the _universal cover_ of \(M\) satisfies the topological assumptions stated in Theorems B and C. The proof is a straightforward modification of the proof we give for the case of \(k=2\), so it is omitted.
The key tool for establishing Theorem A is the Connectedness Lemma (Theorem 2.4). In dimension \(4\), we use this tool in conjunction with topological results by Orlik and Raymond [10]; see Section 3. In dimension \(6\), we break the argument into cases according to whether the torus has a fixed point; see Section 4. If the torus has a fixed point, we employ an argument involving Euler characteristics to rule out connected sums of complex projective spaces. Curiously, this argument only eliminates such connected sums in dimensions strictly larger than four. If the torus has no fixed points, then it follows immediately that \(\chi(M)=\chi(M^{\mathsf{T}^{3}})=0\). For dimensions \(8\) or greater, we prove that the torus has a fixed point, and the result then follows by induction using the Connectedness Lemma, noting in dimension eight that the induced torus action on the \(6\)-dimensional submanifold involved in the proof has a fixed point; see Section 5.
To prove Theorems B and C, we show in most cases that such manifolds must have a circle action whose fixed-point set contains a component of codimension \(2\), and then we apply the Connectedness Lemma (Theorem 2.4); see Section 6. The demand for the additional topological assumptions is a consequence of the fact that the Connectedness Lemma provides less information about the topology of \(M\) as \(k\) increases.
Theorem D is proved in Section 7. It borrows standard and elementary results from group cohomology that have been used previously in the positive sectional curvature case together with special arguments in the cases where the universal cover \(\tilde{M}\) is diffeomorphic to \(S^{3}\times S^{3}\) or \(\mathbb{CP}^{2}\#\ldots\#\mathbb{CP}^{2}\).
### Acknowledgements
The authors would like to thank William Wylie, Jason DeVito, Marco Radeschi, and Fernando Galaz-Garcia for helpful discussions, along with Jason DeVito and Philipp Reiser for suggestions on a previous version of this article. The first author was funded by NSF Grant DMS-2005280, and the second author was funded by NSF Award DMS-2202826.
## 2. Preliminaries
We begin with a discussion of fixed-point sets. Given an isometric action of a Lie group \(\mathsf{G}\) on a Riemannian manifold \(M\), we let \(M^{\mathsf{G}}\) denote the fixed-point set of the \(\mathsf{G}\)-action on \(M\). Given a point \(p\in M^{\mathsf{G}}\), we denote the component of \(M^{\mathsf{G}}\) that contains \(p\) by \(M^{\mathsf{G}}_{p}\), and refer to it as a fixed-point component. The following is a foundational structure result for isometric torus actions.
**Lemma 2.1**.: _Let \(M\) be a closed Riemannian manifold. Assume a torus \(\mathsf{T}^{r}\) acts isometrically on \(M\), and let \(\mathsf{H}\) be a closed subgroup of \(\mathsf{T}^{r}\) whose fixed-point set \(M^{\mathsf{H}}\) is non-empty. Then every component of \(M^{\mathsf{H}}\) is an embedded, totally geodesic submanifold of even codimension in \(M\) that is invariant under the action of \(\mathsf{T}^{r}/\mathsf{H}\). Furthermore, given any fixed-point component \(M^{\mathsf{H}}_{p}\), the following hold:_
1. _If_ \(\mathsf{H}\) _is a torus and_ \(M\) _is orientable, then_ \(M^{\mathsf{H}}_{p}\) _is also orientable._
2. _If_ \(\dim\mathsf{H}\geq 2\)_, then there exists a circle subgroup_ \(\mathsf{S}^{1}\subset\mathsf{H}\) _whose fixed-point component_ \(M^{\mathsf{S}^{1}}_{p}\) _strictly contains_ \(M^{\mathsf{H}}_{p}\)_._
3. _If_ \(\mathsf{H}\) _is disconnected and is the isotropy group at_ \(p\)_, and if_ \(\dim\mathsf{H}\geq 1\)_, then there exists a non-trivial, finite isotropy group_ \(\Gamma\subseteq\mathsf{H}\) _whose fixed-point component_ \(M^{\Gamma}_{p}\) _strictly contains_ \(M^{\mathsf{H}}_{p}\)_._
For justification for Part (2), see, for example, [11, Proposition 8.3.8], and for Part (3), see [17, Lemma 1.10].
The next lemma, which was established by Conner [10], will be especially useful in establishing Theorem D.
**Lemma 2.2** (Betti Number Lemma).: _If a torus \(\mathsf{T}\) acts smoothly on a closed, smooth manifold \(M\), then \(\chi(M)=\chi(M^{\mathsf{T}})\), \(\sum b_{2i+1}(M^{\mathsf{T}})\leq\sum b_{2i+1}(M)\), and \(\sum b_{2i}(M^{\mathsf{T}})\leq\sum b_{2i}(M)\)._
Our positive curvature assumptions play a role in two main ways, via the following two results. The first is a generalization of the Berger-Sugahara fixed-point theorem, stated in the next lemma. Part (1) was established by Berger [1], the \(k=1\) case of Part (2) by Sugahara [10], and the \(k\geq 2\) case by the second author [14].
**Lemma 2.3** (Isotropy Rank Lemma).: _Let \(M\) be a closed Riemannian manifold with \(\operatorname{Ric}_{k}>0\) and an isometric action by a torus \(\mathsf{T}^{r}\)._
1. _If_ \(k=1\) _and_ \(n\) _is even, then_ \(\mathsf{T}^{r}\) _has a fixed point._
2. _For any_ \(k\geq 1\) _and_ \(r\geq k\)_, there exists a subtorus_ \(\mathsf{T}^{r-k}\) _that has a fixed point._
The second main tool from positive curvature we use is Wilking's Connectedness Lemma (see [13]), which is the \(k=1\) statement of the following theorem. The generalization of the first part of (1) and of (2) to the case where \(k\geq 2\) is stated in [13, Remark 2.4]. For the second part of (1), the generalization to \(k\geq 2\) was proved by the second author (see [14]).
**Theorem 2.4** (Connectedness Lemma).: _Let \(M^{n}\) be a closed, connected Riemannian manifold with \(\operatorname{Ric}_{k}>0\)._
1. _If_ \(N^{n-d}\) _is an embedded totally geodesic submanifold of_ \(M^{n}\)_, then the inclusion_ \[N^{n-d}\hookrightarrow M^{n}\text{ is }(n-2d+2-k)\text{-connected}.\] _Furthermore, if there is a Lie group_ \(\mathsf{G}\) _acting on_ \(M^{n}\) _by isometries and fixing_ \(N^{n-d}\) _pointwise, then the inclusion is_ \((n-2d+2-k+\delta(\mathsf{G}))\)_-connected, where_ \(\delta(\mathsf{G})\) _is the dimension of the principal orbits of the_ \(\mathsf{G}\)_-action on_ \(M^{n}\)_._
2. _If_ \(N_{1}^{n-d_{1}}\) _and_ \(N_{2}^{n-d_{2}}\) _are embedded totally geodesic submanifolds with_ \(d_{1}\leq d_{2}\)_, then the intersection_ \(N_{1}^{n-d_{1}}\cap N_{2}^{n-d_{2}}\) _is also totally geodesic, and the inclusion_ \[N_{1}^{n-d_{1}}\cap N_{2}^{n-d_{2}}\hookrightarrow N_{2}^{n-d_{2}}\text{ is }(n-d_{1}-d_{2}+1-k)\text{-connected}.\]
The Connectedness Lemma forces restrictions at the level of cohomology when combined with the following lemma, which is a topological result about highly connected inclusions of Poincare duality spaces that was proved by Wilking [13]:
**Lemma 2.5** (Periodicity Lemma).: _Suppose \(N^{n-d}\hookrightarrow M^{n}\) is a \((n-d-\ell)\)-connected inclusion of connected, closed, orientable manifolds. If \(e\in H^{d}(M^{n};\mathbb{Z})\) denotes the Poincare dual of the image in \(H_{n-d}(M^{n};\mathbb{Z})\) of the fundamental class of \(N\), then the homomorphisms \(\cup e:H^{i}(M;\mathbb{Z})\to H^{i+d}(M;\mathbb{Z})\) given by \(x\mapsto x\cup e\) are surjective for \(\ell\leq i<n-d-\ell\) and injective for \(\ell<i\leq n-d-\ell\)._
Of particular importance to us is the case in the Periodicity Lemma when \(\ell=1\) and \(d=2\). Based on whether \(e\) is zero or non-zero, if \(M^{n}\) is simply connected, we find that \(M^{n}\) has the cohomology of \(S^{n}\), \(\mathbb{C}\mathbb{P}^{\frac{n}{2}}\), or more generally a finite connected sum \(\mathbb{C}\mathbb{P}^{\frac{n}{2}}\#\ldots\#\mathbb{C}\mathbb{P}^{\frac{n}{2}}\) with at least two summands. For the first two of these manifolds, it is well known that homotopy rigidity is automatic. One has some level of rigidity in the non-simply connected case as well, according to the following (for a proof, see [16, Theorem 3.4]):
**Theorem 2.6** (Cohomology-to-homotopy Lemma).: _Let \(M^{n}\) be a closed, smooth manifold. The following hold:_
1. _If_ \(\pi_{1}(M)\) _is cyclic and the universal cover_ \(\tilde{M}\) _is a cohomology sphere, then_ \(M\) _is homotopy equivalent to_ \(S^{n}\)_,_ \(\mathbb{R}\mathrm{P}^{n}\)_, or a lens space._
2. _If_ \(\pi_{1}(M)\) _is trivial and_ \(M\) _is a cohomology_ \(\mathbb{C}\mathrm{P}^{\frac{n}{2}}\)_, then_ \(M\) _is homotopy equivalent to_ \(\mathbb{C}\mathrm{P}^{\frac{n}{2}}\)_._
Finally, to upgrade further from homotopy rigidity to homeomorphism or diffeomorphism rigidity, we use the following two results. The first is for spheres and was proved by Montgomery and Yang [10], and the second is for complex projective spaces and was proved by Fang and Rong [11].
**Theorem 2.7** (Diffeomorphism rigidity for spheres).: _Suppose \(M\) is a homotopy sphere, and assume the circle \(\mathsf{S}^{1}\) acts smoothly on \(M\) such that the fixed-point set \(N\) is simply connected and of codimension \(2\). Then \(M\) is diffeomorphic to the standard sphere \(S^{n}\) such that the \(\mathsf{S}^{1}\)-action on \(M\) is smoothly equivalent to a linear circle action on \(S^{n}\)._
**Theorem 2.8** (Homeomorphism rigidity for complex projective spaces).: _Suppose \(M\) is a homotopy \(\mathbb{C}\mathrm{P}^{n}\), and assume a submanifold \(N\) of codimension \(2\) is homeomorphic to \(\mathbb{C}\mathrm{P}^{n-1}\). If the inclusion map \(N\hookrightarrow M\) is at least \(3\)-connected, then \(M\) is homeomorphic to \(\mathbb{C}\mathrm{P}^{n}\)._
## 3. Maximal symmetry rank for \(\mathbf{Ric_{2}>0}\) in dimension 4
In this section, we establish the four dimensional case of Theorem A. First, we survey the topology of the spaces in question without curvature considerations.
Throughout this section, let \(M\) be a closed, \(4\)-dimensional, simply connected, \(\mathsf{T}^{2}\)-manifold, and let \(M^{*}\) denote the orbit space \(M/\mathsf{T}^{2}\). The orbit structure of such spaces was studied by Orlik and Raymond in [12], which we summarize here. For the manifolds \(M\) under consideration, the isotropy groups are connected, meaning possible isotropy groups are either trivial or isomorphic to \(\mathsf{S}^{1}\) or \(\mathsf{T}^{2}\); see Lemma 5.2 in [12]. The orbit space \(M^{*}\) is homeomorphic to a closed \(2\)-dimensional disk, and the boundary \(\partial M^{*}\) consists of a cycle (graph) with the number of vertices equal to the Euler characteristic of \(M\). We will assume an orientation on \(M^{*}\), and hence on \(\partial M^{*}\), and we will accordingly fix an enumeration for the vertices of \(\partial M^{*}\): \(f_{0}^{*},f_{1}^{*},\ldots,f_{t-1}^{*}\), where \(t=\chi(M)\). Each vertex \(f_{i}^{*}\) in \(\partial M^{*}\) corresponds to an isolated fixed point \(f_{i}\in M\) of the \(\mathsf{T}^{2}\)-action. Let \(\Sigma_{i}^{*}\) denote the edge from \(f_{i}^{*}\) to \(f_{i+1}^{*}\) (counting mod \(t\)). Points along \(\Sigma_{i}^{*}\) correspond to \(1\)-dimensional orbits in \(M\), all of which have the same isotropy group, which is isomorphic to \(\mathsf{S}^{1}\). In other words, \(\Sigma_{i}^{*}\) corresponds to a \(2\)-dimensional sphere \(\Sigma_{i}\) in \(M\) that is fixed by a circle subgroup of \(\mathsf{T}^{2}\). Fixing a parametrization \((z_{1},z_{2})\) of \(\mathsf{T}^{2}=\mathbb{R}^{2}/\mathbb{Z}^{2}\), each \(\mathsf{S}^{1}\) isotropy is equal to \(\{(z_{1},z_{2}):mz_{1}+nz_{2}=0\}\) for some relatively prime integers \(m\) and \(n\). For a given \(\mathsf{S}^{1}\) isotropy, the associated vector \((m,n)\in\mathbb{Z}^{2}\) is unique up to sign. Given an edge \(\Sigma_{i}^{*}\) of \(\partial M^{*}\), we will call the vector \((m_{i},n_{i})\) that corresponds to the \(\mathsf{S}^{1}\) isotropy of the edge the _weight_ of \(\Sigma_{i}^{*}\); see Figure 1.
Given two adjacent edges \(\Sigma_{i-1}^{*}\) and \(\Sigma_{i}^{*}\), the common vertex \(f_{i}^{*}\) corresponds to a point of intersection \(f_{i}\) between the two spheres \(\Sigma_{i-1}\) and \(\Sigma_{i}\) in \(M\). Define the determinant of the
weights of these edges
\[\varepsilon_{i}\coloneqq\det\begin{bmatrix}m_{i-1}&m_{i}\\ n_{i-1}&n_{i}\end{bmatrix}.\]
Because the \(\mathsf{S}^{1}\) isotropies of these spheres must generate the homology of \(\mathsf{T}^{2}\), it follows that the determinant of the weights must satisfy \(\varepsilon_{i}=\pm 1\).
More generally, given (not necessarily adjacent) edges \(\Sigma_{i}^{*}\) and \(\Sigma_{j}^{*}\) in \(\partial M^{*}\), we will denote the determinant of their weights by
\[r_{i,j}\coloneqq\det\begin{bmatrix}m_{i}&m_{j}\\ n_{i}&n_{j}\end{bmatrix}.\]
Notice that \(r_{i-1,i}=\varepsilon_{i}=\pm 1\) for all \(i\) (counting mod \(t\)), and if \(r_{i,j}=0\) for some \(i\) and \(j\), then the corresponding spheres \(\Sigma_{i}\) and \(\Sigma_{j}\) in \(M\) are fixed by the same circle subgroup of \(\mathsf{T}^{2}\).
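These quantities are elementary to compute. The sketch below (the weight cycle is hypothetical, chosen only so that adjacent determinants are \(\pm 1\)) evaluates the \(\varepsilon_{i}\) and the full matrix of \(r_{i,j}\):

```python
import numpy as np

# Hypothetical cyclic list of edge weights (m_i, n_i) on the boundary of
# the orbit space; indices are taken mod t (Python's negative indexing
# handles the wrap-around for i = 0).
weights = [(1, 0), (0, 1), (1, 2), (1, 1)]
t = len(weights)

def det2(w1, w2):
    """Determinant of two weight vectors."""
    (m1, n1), (m2, n2) = w1, w2
    return m1 * n2 - m2 * n1

# Legality check: adjacent weights must satisfy eps_i = det = +/- 1.
eps = [det2(weights[i - 1], weights[i]) for i in range(t)]
assert all(abs(e) == 1 for e in eps), "adjacent isotropies must generate T^2"

# r_{i,j} for all pairs; r_{i,j} = 0 means the spheres Sigma_i and Sigma_j
# are fixed by the same circle subgroup of T^2.
r = np.array([[det2(weights[i], weights[j]) for j in range(t)]
              for i in range(t)])
print(eps)
print(r)
```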
For the next few remarks, the isomorphisms mentioned are in the category of equivariant diffeomorphisms. If the number of fixed points \(t=2\), then \(M\cong S^{4}\). If \(t=3\) and \(-\varepsilon_{0}\varepsilon_{1}\varepsilon_{2}=1\) (resp. \(-1\)), then \(M\cong\mathbb{C}\mathrm{P}^{2}\) (resp. \(\overline{\mathbb{C}\mathrm{P}^{2}}\)). Note, when a parametrization of \(\mathsf{T}^{2}\) is specified, an orientation of \(M^{*}\) determines an orientation of \(M\), and vice versa. For the case \(t=4\), \(M\cong\mathbb{C}\mathrm{P}^{2}\#\mathbb{C}\mathrm{P}^{2}\), \(\overline{\mathbb{C}\mathrm{P}^{2}}\#\overline{\mathbb{C}\mathrm{P}^{2}}\), \(S^{2}\times S^{2}\), or \(\mathbb{C}\mathrm{P}^{2}\#\overline{\mathbb{C}\mathrm{P}^{2}}\) depending on the values of \(\varepsilon_{0},\varepsilon_{1},\varepsilon_{2},\varepsilon_{3},r_{0,2}\), and \(r_{1,3}\). The conditions on \(\varepsilon_{i}\) and \(r_{i,j}\) that determine \(M\) for \(t\leq 4\) are given in Table 3.1 (see [10, page 552]):
Figure 1. Weighted orbit space of closed simply connected \(4\)-dimensional \(\mathsf{T}^{2}\)-manifold.
For the cases when \(t\geq 5\), \(M\) is equivariantly diffeomorphic to a connected sum of finitely many copies of \(S^{2}\times S^{2}\), \(\mathbb{C}\mathrm{P}^{2}\), and \(\overline{\mathbb{C}\mathrm{P}^{2}}\). In particular, for every pair of adjacent edges \(\Sigma_{i-1}^{*}\) and \(\Sigma_{i}^{*}\) in \(\partial M^{*}\), there exists a third distinct edge \(\Sigma_{j}^{*}\) such that
1. \(r_{i,j}=\pm 1\) and \(\Sigma_{j}^{*}\) is not adjacent to \(\Sigma_{i}^{*}\), or
2. \(r_{i-1,j}=\pm 1\) and \(\Sigma_{j}^{*}\) is not adjacent to \(\Sigma_{i-1}^{*}\).
In the case of (1), one can connect an interior point of edge \(\Sigma_{i}^{*}\) to an interior point of edge \(\Sigma_{j}^{*}\) using a simple curve \(L^{*}\) through the interior of \(M^{*}\). This curve \(L^{*}\) separates \(M^{*}\) into two disjoint regions, the closures of which we will denote by \(X_{1}^{*}\) and \(X_{2}^{*}\). For \(k=1\) or \(2\), consider \(N_{k}^{*}\coloneqq X_{k}^{*}/\{L^{*}\sim\mathrm{pt}.\}\), i.e. the disk obtained from \(X_{k}^{*}\) by identifying the portion of its boundary containing \(L^{*}\) to a point. The edges of \(\partial N_{k}^{*}\) inherit weights from corresponding edges on \(\partial M^{*}\); see Figure 2. Then \(N_{k}^{*}\) corresponds to the orbit space \(N_{k}/\mathsf{T}^{2}\) of some closed, simply connected, \(4\)-dimensional \(\mathsf{T}^{2}\)-manifold with \(3\leq\chi(N_{k})\leq\chi(M)-1\). The curve \(L^{*}\) in \(M^{*}\) corresponds to an invariant \(3\)-sphere \(L\) in \(M\). In particular, \(M\) is equivariantly diffeomorphic to the connected sum \(N_{1}\#N_{2}\), where the gluing occurs along \(L\). Case (2) above leads similarly to a decomposition of \(M\) into a connected sum \(N_{1}^{\prime}\#N_{2}^{\prime}\) for some \(N_{k}^{\prime}\) with \(3\leq\chi(N_{k}^{\prime})\leq\chi(M)-1\).
Notice that in \(N_{1}^{*}\), the value of \(\varepsilon\) for the edges that meet at the point corresponding to \(L^{*}\) (namely \(\Sigma_{i}^{*}\) and \(\Sigma_{j}^{*}\)) will be opposite to the value of \(\varepsilon\) at \(L^{*}\) in \(N_{2}^{*}\). Note that this type of decomposition of \(M\) can also be carried out if \(M\cong\mathbb{C}\mathrm{P}^{2}\#\mathbb{C}\mathrm{P}^{2}\) or \(\overline{\mathbb{C}\mathrm{P}^{2}}\#\overline{\mathbb{C}\mathrm{P}^{2}}\), but not for \(S^{2}\times S^{2}\) or \(\mathbb{C}\mathrm{P}^{2}\#\overline{\mathbb{C}\mathrm{P}^{2}}\); cf. Remark 5.10 in [1]. Therefore, if \(\chi(M)\geq 5\), then by repeating the procedure outlined above, \(M^{*}\) can always be partitioned into finitely many pieces, each corresponding to \(\mathbb{C}\mathrm{P}^{2}\), \(\overline{\mathbb{C}\mathrm{P}^{2}}\), or \(S^{2}\times S^{2}\). Furthermore, in such a decomposition \(M\cong N_{1}\#\dots\#N_{m}\), given a pair \(N_{i},N_{i+1}\) for \(1\leq i\leq m-1\), the decomposition is done in such a way that the vertices on \(\partial N_{i}^{*}\) and \(\partial N_{i+1}^{*}\) at which the gluing \(N_{i}\#N_{i+1}\) occurs have opposite signs for \(\varepsilon\).
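Purely as combinatorial bookkeeping, the repeated splitting of the boundary weight cycle can be modeled as follows (a sketch with hypothetical helper names; the genuine decomposition also tracks the collapsed curve \(L^{*}\), orientations, and the signs of \(\varepsilon\) described above):

```python
def split_cycle(weights, i, j):
    """Split the cyclic boundary weight list along a curve joining interior
    points of edges i < j; the two split edges reappear on the boundary of
    each piece (cf. Figure 2 below)."""
    return weights[i:j + 1], weights[j:] + weights[:i + 1]

def decompose(weights):
    """Repeatedly split until every piece has at most 4 boundary edges,
    mirroring the procedure described above."""
    t = len(weights)
    if t <= 4:
        return [weights]
    det2 = lambda a, b: a[0] * b[1] - b[0] * a[1]
    for i in range(t):
        for j in range(i + 2, t):  # j >= i + 2 ensures non-adjacency ...
            if (i, j) != (0, t - 1) and abs(det2(weights[i], weights[j])) == 1:
                piece1, piece2 = split_cycle(weights, i, j)  # ... except the wrap-around pair
                return decompose(piece1) + decompose(piece2)
    raise ValueError("no admissible pair of non-adjacent edges found")
```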
Now we establish curvature obstructions for the above spaces. In particular, our key observation is the following:
**Lemma 3.1**.: _Let \(M\) be a compact, connected, \(4\)-dimensional Riemannian manifold with \(\mathrm{Ric}_{2}>0\). If \(\mathsf{S}^{1}\) acts effectively and by isometries on \(M\) and fixes a \(2\)-dimensional submanifold \(N\) pointwise, then \(N\) must be connected._
| \(t\) | \(M\) | Condition |
| --- | --- | --- |
| \(2\) | \(S^{4}\) | |
| \(3\) | \(\mathbb{C}\mathrm{P}^{2}\) | \(-\varepsilon_{0}\varepsilon_{1}\varepsilon_{2}=+1\) |
| \(3\) | \(\overline{\mathbb{C}\mathrm{P}^{2}}\) | \(-\varepsilon_{0}\varepsilon_{1}\varepsilon_{2}=-1\) |
| \(4\) | \(\mathbb{C}\mathrm{P}^{2}\#\mathbb{C}\mathrm{P}^{2}\) | \(-\varepsilon_{0}\varepsilon_{1}\varepsilon_{2}\varepsilon_{3}=+1\), and \(r_{1,3}\in\{\varepsilon_{2}\varepsilon_{3},2\varepsilon_{2}\varepsilon_{3}\}\) |
| \(4\) | \(\overline{\mathbb{C}\mathrm{P}^{2}}\#\overline{\mathbb{C}\mathrm{P}^{2}}\) | \(-\varepsilon_{0}\varepsilon_{1}\varepsilon_{2}\varepsilon_{3}=+1\), and \(r_{1,3}\in\{-\varepsilon_{2}\varepsilon_{3},-2\varepsilon_{2}\varepsilon_{3}\}\) |
| \(4\) | \(S^{2}\times S^{2}\) | \(-\varepsilon_{0}\varepsilon_{1}\varepsilon_{2}\varepsilon_{3}=-1\), and both \(r_{0,2}\) and \(r_{1,3}\) are even (at least one is \(0\)) |
| \(4\) | \(\mathbb{C}\mathrm{P}^{2}\#\overline{\mathbb{C}\mathrm{P}^{2}}\) | \(-\varepsilon_{0}\varepsilon_{1}\varepsilon_{2}\varepsilon_{3}=-1\), and either \(r_{0,2}\) or \(r_{1,3}\) is odd (the other one is \(0\)) |

Table 3.1. Equivariant diffeomorphism classes of simply connected \(4\)-dimensional \(\mathsf{T}^{2}\)-manifolds with Euler characteristic \(t\leq 4\).
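For concreteness, the case analysis of Table 3.1 for \(t\leq 4\) can be read off mechanically; the helper below is a hypothetical sketch in which string labels stand in for the diffeomorphism types:

```python
def classify_small_t(eps, r02=None, r13=None):
    """Diffeomorphism type for t = chi(M) <= 4, following Table 3.1.
    eps is the list (eps_0, ..., eps_{t-1}); r02 and r13 are the
    determinants r_{0,2} and r_{1,3}, needed only when t = 4."""
    t = len(eps)
    if t == 2:
        return "S^4"
    if t == 3:
        return "CP^2" if -eps[0] * eps[1] * eps[2] == 1 else "CP^2-bar"
    if t == 4:
        sign = -eps[0] * eps[1] * eps[2] * eps[3]
        if sign == 1:
            if r13 in (eps[2] * eps[3], 2 * eps[2] * eps[3]):
                return "CP^2 # CP^2"
            return "CP^2-bar # CP^2-bar"
        if r02 % 2 == 0 and r13 % 2 == 0:   # at least one of them is 0
            return "S^2 x S^2"
        return "CP^2 # CP^2-bar"
    raise ValueError("use the connected-sum decomposition for t >= 5")

# Example with the hypothetical weight cycle from the earlier sketch:
print(classify_small_t([-1, 1, -1, -1], r02=2, r13=-1))  # CP^2-bar # CP^2-bar
```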
Proof.: Since \(\mathsf{S}^{1}\) acts effectively on \(M\), the principal orbits are \(1\)-dimensional. By Theorem 2.4, the inclusion \(N\hookrightarrow M\) is \(1\)-connected. In particular, \(N\) is connected since \(M\) is.
**Corollary 3.2**.: _Let \(M\) be a compact, simply connected, \(4\)-dimensional Riemannian manifold with \(\operatorname{Ric}_{2}>0\). If \(\mathsf{T}^{2}\) acts effectively and by isometries on \(M\), and if \((m_{i},n_{i})\) and \((m_{j},n_{j})\) are weights for non-adjacent edges of \(\partial M^{*}\), then_
\[r_{i,j}=\det\begin{bmatrix}m_{i}&m_{j}\\ n_{i}&n_{j}\end{bmatrix}\neq 0.\]
_In particular, neither \(S^{2}\times S^{2}\) nor \(\mathbb{C}\mathrm{P}^{2}\#\overline{\mathbb{C}\mathrm{P}^{2}}\) admit a metric with \(\operatorname{Ric}_{2}>0\) that is invariant under a \(\mathsf{T}^{2}\)-action._
Proof.: The first statement follows from Lemma 3.1 and the definition of weights. From Table 3.1, the orbit structures of \(S^{2}\times S^{2}\) and \(\mathbb{C}\mathrm{P}^{2}\#\overline{\mathbb{C}\mathrm{P}^{2}}\) require that \(r_{i,j}=0\) for \((i,j)=(0,2)\) or \((1,3)\). Thus, these manifolds cannot support a metric with \(\operatorname{Ric}_{2}>0\) that is invariant under a \(\mathsf{T}^{2}\)-action.
We can now establish the four-dimensional case of Theorem A.
**Theorem 3.3**.: _Let \(M\) be a closed, simply connected, \(4\)-dimensional Riemannian manifold with \(\operatorname{Ric}_{2}>0\). If \(\mathsf{T}^{2}\) acts effectively and by isometries on \(M\), then \(M\) is equivariantly diffeomorphic to \(\#_{i=1}^{b}\mathbb{C}\mathrm{P}^{2}\) for some \(b\geq 0\)._
Proof.: Fix a parametrization of \(\mathsf{T}^{2}\) and an orientation of \(M\), which then fixes an orientation of \(M^{*}\). If \(\chi(M)\leq 4\), then the only candidates are \(S^{4}\), \(\mathbb{C}\mathrm{P}^{2}\), \(S^{2}\times S^{2}\), \(\mathbb{C}\mathrm{P}^{2}\#\overline{\mathbb{C}\mathrm{P}^{2}}\), or \(\mathbb{C}\mathrm{P}^{2}\#\mathbb{C}\mathrm{P}^{2}\), up to a change in orientation. However, by Corollary 3.2, neither \(S^{2}\times S^{2}\) nor \(\mathbb{C}\mathrm{P}^{2}\#\overline{\mathbb{C}\mathrm{P}^{2}}\) can admit an invariant metric with \(\operatorname{Ric}_{2}>0\).
If \(\chi(M)\geq 5\), then following the procedure outlined in the beginning of this section, there exist non-adjacent edges of the boundary \(\partial M^{*}\) of the orbit space whose respective weights have determinant \(\pm 1\). The orbit space \(M^{*}\) can then be separated along a curve joining these two edges, and accordingly, \(M\) decomposes as \(N_{1}\#N_{2}\) for some closed, simply connected, \(4\)-dimensional \(\mathsf{T}^{2}\)-manifolds \(N_{k}\) with \(3\leq\chi(N_{k})\leq\chi(M)-1\), for \(k=1,2\). The weights of the edges of the boundaries \(\partial N_{k}^{*}\) are inherited from the corresponding edges in \(\partial M^{*}\), along with the orientations of their boundaries. This process can be repeated until \(M\) is written as a connected sum \(N_{1}\#\ldots\#N_{m}\) such that each \(N_{k}\) is equivariantly diffeomorphic to \(\mathbb{C}\mathrm{P}^{2}\), \(\overline{\mathbb{C}\mathrm{P}^{2}}\), or \(S^{2}\times S^{2}\).
Figure 2. Decomposing the orbit space of a simply connected \(4\)-dimensional \(\mathsf{T}^{2}\)-manifold with Euler characteristic \(\geq 5\).
Because the weights of the edges of the boundaries \(\partial N_{k}^{*}\) are inherited from edges of \(\partial M^{*}\), if \(\partial N_{k}^{*}\) has non-adjacent edges whose weights have determinant zero for some \(k\), then so does \(\partial M^{*}\). Thus, by Corollary 3.2, each space \(N_{k}\) must be a complex projective space, and furthermore, they all have the same orientation.
## 4. Maximal symmetry rank for \(\operatorname{\mathbf{Ric}}_{2}>\mathbf{0}\) in dimension 6
In this section, we establish the six-dimensional case of Theorem A. In contrast to the case of positive sectional curvature (\(\operatorname{\mathrm{Ric}}_{1}>0\)), the \(\mathsf{T}^{3}\)-action on \(M^{6}\) need not have a fixed point. Additionally, it does not follow immediately from the Connectedness and Periodicity Lemmas that the second Betti number satisfies \(b_{2}(M)\leq 1\) as it does in the positive sectional curvature case. The following is the case in which we can argue that \(b_{2}(M)\leq 1\):
**Proposition 4.1**.: _Let \(M\) be a \(6\)-dimensional, closed, simply connected Riemannian manifold with \(\operatorname{\mathrm{Ric}}_{2}>0\). Suppose \(\mathsf{T}^{3}\) acts effectively and by isometries on \(M\). If the \(\mathsf{T}^{3}\)-action has a fixed point, then \(M\) is diffeomorphic to \(S^{6}\) or \(\mathbb{C}\mathrm{P}^{3}\)._
Proof.: Suppose \(p\) is a fixed point for the \(\mathsf{T}^{3}\)-action on \(M\). Then at \(p\), the isotropy representation \(\mathsf{T}^{3}\to\mathsf{U}(3)\subset\mathsf{O}(6)\) is of the form \((z_{1},z_{2},z_{3})\mapsto\operatorname{\mathrm{diag}}(z_{1},z_{2},z_{3}) \in\mathsf{U}(3)\) for some choice of parametrization of \(\mathsf{T}^{3}\) and basis in \(T_{p}M\). For each \(i\in\{1,2,3\}\), define \(\mathsf{S}^{1}_{i}\subset\mathsf{T}^{3}\) to be the circle subgroup parametrized by \(z_{i}\) in this representation, and let \(N_{i}\) denote the four-dimensional fixed-point component \(M_{p}^{\mathsf{S}^{1}_{i}}\).
By Theorem 2.4, the inclusions \(N_{i}^{4}\hookrightarrow M^{6}\) are \(3\)-connected. This implies that \(N_{i}\) is simply connected since \(M\) is, that \(H^{2}(N_{i};\mathbb{Z})\cong H^{2}(M;\mathbb{Z})\) for all \(i\), and that \(H^{3}(M^{6};\mathbb{Z})=0\) since it injects into \(H^{3}(N_{i}^{4};\mathbb{Z})\), which is zero by Poincare duality. In particular, defining \(b=b_{2}(M^{6})\), we have \(\chi(M^{6})=2+2b\) and \(\chi(N_{i}^{4})=2+b\) for all \(i\).
Furthermore, by Theorem 2.4, the inclusions \(N_{i}\cap N_{j}\hookrightarrow N_{j}\) are \(1\)-connected, and in particular, \(N_{i}\cap N_{j}\) is connected for all \(i\neq j\). Also, each \(2\)-dimensional intersection \(N_{i}\cap N_{j}\) is orientable by Lemma 2.1 and has an effective \(\mathsf{S}^{1}\) action with non-empty fixed-point set that contains \(p\). Thus, each \(N_{i}\cap N_{j}\) is a \(2\)-sphere, and \(\chi(N_{i}\cap N_{j})=2\) for all \(i\neq j\).
Because \((N_{1}\cup N_{2}\cup N_{3})^{\mathsf{T}^{3}}\subseteq M^{\mathsf{T}^{3}}\), by Lemma 2.2 we have
\[\chi(M) \geq\chi(N_{1}\cup N_{2}\cup N_{3})\] \[=\sum_{i}\chi(N_{i})-\sum_{i<j}\chi(N_{i}\cap N_{j})+\chi(N_{1} \cap N_{2}\cap N_{3})\] \[=3(2+b)-3(2)+\chi(N_{1}\cap N_{2}\cap N_{3}).\]
Since \(\chi(M)=2+2b\) and \(N_{1}\cap N_{2}\cap N_{3}\) is a non-empty collection of isolated fixed points for the \(\mathsf{T}^{3}\)-action on \(M\), we have
\[b_{2}(M)=b\leq 2-\chi(N_{1}\cap N_{2}\cap N_{3})\leq 1.\]
It follows that \(M\) has the homology groups of \(S^{6}\) or \(\mathbb{C}\mathrm{P}^{3}\), and moreover by Lemma 2.5, \(M\) has the cohomology of one of these spaces. Finally, Theorems 2.6, 2.7, and 2.8 along with the classification of closed, simply connected \(6\)-manifolds imply that \(M\) is diffeomorphic to \(S^{6}\) or \(\mathbb{C}\mathrm{P}^{3}\).
To finish the proof of Theorem A in dimension six, it suffices to consider the case where the torus action does not have a fixed point. We seek to show that \(\chi(M^{6})=0\) and, moreover, that \(M^{6}\) is diffeomorphic to \(S^{3}\times S^{3}\) if the second Betti number of \(M\) vanishes. In the interest of potentially proving \(M\) is diffeomorphic to \(S^{3}\times S^{3}\) without the assumption that \(b_{2}(M)\) vanishes, we present the following partial progress:
**Proposition 4.2**.: _Let \(M\) be a \(6\)-dimensional, closed, simply connected Riemannian manifold with \(\mathrm{Ric}_{2}>0\). Suppose \(\mathsf{T}^{3}\) acts effectively and by isometries on \(M\). If the \(\mathsf{T}^{3}\)-action has no fixed points, then \(\chi(M)=0\)._
_Moreover, the \(\mathsf{T}^{3}\)-action is not free, all non-trivial isotropy groups are isomorphic to \(\mathsf{S}^{1}\), and the singular \(\mathsf{T}^{3}\)-orbits are isolated and diffeomorphic to \(\mathsf{T}^{2}\). In particular, the orbit space \(M^{*}=M/\mathsf{T}^{3}\) is homeomorphic to \(S^{3}\)._
Proof.: The conclusion \(\chi(M)=0\) follows immediately from the equality \(\chi(M)=\chi(M^{\mathsf{T}^{3}})\) from Lemma 2.2. First, we claim that all isotropy groups must be connected. Otherwise, suppose \(\Gamma\subset\mathsf{T}^{3}\) is a finite isotropy group at a point \(p\in M\). Then the torus \(\mathsf{T}^{3}/\Gamma\cong\mathsf{T}^{3}\) acts effectively and by isometries on the totally geodesic fixed-point component \(N=M_{p}^{\Gamma}\). Because the action is effective, \(3\leq\dim(N)\leq 5\). Since \(N\) is totally geodesic in \(M\), \(N\) has \(\mathrm{Ric}_{2}>0\). Then by the symmetry rank bound for \(\mathrm{Ric}_{2}>0\), we have \(3\leq\lfloor\frac{\dim(N)+1}{2}\rfloor\), which implies that \(\dim(N)=5\). Then by Theorem 2.4, the inclusion \(N\hookrightarrow M\) is \(4\)-connected. Because \(M\) is simply connected, so is \(N\), and hence \(N\) is orientable. Thus by Lemma 2.5, \(H^{3}(M;\mathbb{Z})\cong H^{2}(M;\mathbb{Z})\cong H^{1}(M;\mathbb{Z})\cong 0\). Hence, it follows from Poincare duality and the Universal Coefficients theorem that \(\chi(M)>0\), which contradicts the hypothesis that the \(\mathsf{T}^{3}\)-action on \(M\) has no fixed points. Therefore, all isotropy groups must be connected.
Next we claim that the components of the fixed-point set of any non-trivial isotropy group must be \(2\)-dimensional. Otherwise, there exists a connected isotropy group \(\mathsf{T}^{3}_{p}\) that fixes a connected submanifold \(F\) of dimension \(0\) or \(4\). If \(\dim(F)=0\), then the induced action of \(\mathsf{T}^{3}\) on \(F\) is trivial, and hence the \(\mathsf{T}^{3}\)-action on \(M\) has a fixed point, which is again a contradiction. If \(\dim(F)=4\), then because \(F\) is fixed by \(\mathsf{T}^{3}_{p}\), which is isomorphic to \(\mathsf{S}^{1}\) or \(\mathsf{T}^{2}\), the inclusion \(F\hookrightarrow M\) is at least \(3\)-connected. Thus, \(F\) is simply connected, and by Poincare duality, has \(\chi(F)>0\). Hence, \(\mathsf{T}^{3}\) has a fixed point in \(F\subset M\), which again is a contradiction. Therefore, the components of the fixed-point set of any non-trivial isotropy group must indeed be \(2\)-dimensional.
It then follows from Part (2) of Lemma 2.1 that each non-trivial isotropy group is isomorphic to \(\mathsf{S}^{1}\). Thus, each singular orbit of the \(\mathsf{T}^{3}\)-action is diffeomorphic to \(\mathsf{T}^{2}\) and coincides with a component of the fixed point set of some \(\mathsf{S}^{1}\) isotropy group. Furthermore, given a
point \(p\) on a singular orbit, because circles are the only possible isotropy groups and components of their fixed point sets are only \(2\)-dimensional, \(\mathsf{T}^{2}:=\mathsf{T}^{3}/\mathsf{S}^{1}\) must act freely on the normal space to the singular orbit at \(p\). In particular, the singular orbits are isolated.
Since \(M\) is simply connected and all \(\mathsf{T}^{3}\)-orbits are connected, the orbit space \(M^{*}=M/\mathsf{T}^{3}\) is a simply connected \(3\)-manifold (see [1, Corollary IV.4.7]). Because the \(\mathsf{T}^{3}\)-action on \(M\) only has \(\mathsf{S}^{1}\) isotropy groups whose fixed-point components are \(2\)-dimensional and isolated, it follows that \(M^{*}\) has no boundary, and by the resolution to the Poincare conjecture [1, 13, 14], we have that \(M^{*}\) is homeomorphic to \(S^{3}\).
We remark that Galaz-Garcia and Searle show if \(M^{n}\) is closed, simply connected, and has \(\mathsf{T}^{n-3}\)-symmetry, if the orbit space \(M^{*}=M^{n}/\mathsf{T}^{n-3}\) is homeomorphic to \(S^{3}\), and if all non-trivial isotropy groups are isomorphic to \(\mathsf{S}^{1}\), then \(\pi_{2}(M^{n})\cong\mathbb{Z}^{s-n+2}\), where \(s\) is the number of isolated singular orbits [1, Proposition 4.5]. It then follows from [1, Lemma 2-6] that if \(r=3\) and \(M^{6}\) has non-negative sectional curvature, then \(s\leq 4\), and hence \(\pi_{2}(M^{6})\cong 0\) (see [1, Proposition 4.12]). Escher and Searle then use these observations to prove such a manifold \(M^{6}\) must be diffeomorphic to \(S^{3}\times S^{3}\)[1, Proposition 4.13]. Their conclusion relies on the fact that \(M^{*}\) has non-negative curvature in the sense of Alexandrov geometry. Since our condition of \(\operatorname{Ric}_{2}>0\) allows for some negative sectional curvatures, we do not know whether it is possible to establish an upper bound on \(s\) in our case. This leaves us with the following:
_Remark 4.3_.: If \(M\) is a manifold as in Proposition 4.2, then by [1, Proposition 4.5], \(\pi_{2}(M)\cong\mathbb{Z}^{s-4}\), where \(s\) is the number of isolated singular orbits. If one could show that \(s=4\), then it would follow as in the proof of [1, Proposition 4.13] that \(M\) is diffeomorphic to \(S^{3}\times S^{3}\). Such a result, along with those established here, would imply that the only closed, simply connected \(6\)-manifolds with \(\operatorname{Ric}_{2}>0\) and \(\mathsf{T}^{3}\)-symmetry are \(S^{6}\), \(\mathbb{C}\mathrm{P}^{3}\), and \(S^{3}\times S^{3}\).
Finally, we include the following observation, as it is used in the proof of Theorem D on the non-simply connected case.
**Corollary 4.4**.: _If \(S^{3}\times S^{3}\) is equipped with a metric having \(\operatorname{Ric}_{2}>0\) that is invariant under an effective \(\mathsf{T}^{3}\)-action, then every non-trivial isotropy group is isomorphic to \(\mathsf{S}^{1}\), and the fixed-point set of any such \(\mathsf{S}^{1}\) is connected and diffeomorphic to \(\mathsf{T}^{2}\)._
Proof.: In Proposition 4.2, we established that every non-trivial isotropy group is isomorphic to \(\mathsf{S}^{1}\), and the components of the fixed-point sets of these \(\mathsf{S}^{1}\) isotropies must be isolated and diffeomorphic to \(\mathsf{T}^{2}\). Now given an arbitrary \(\mathsf{S}^{1}\) isotropy group, by Lemma 2.2, we have
\[\sum b_{i}\left((S^{3}\times S^{3})^{\mathsf{S}^{1}}\right)\leq\sum b_{i}(S^{3 }\times S^{3})=4.\]
Therefore, \((S^{3}\times S^{3})^{\mathsf{S}^{1}}\) must consist of a single torus \(\mathsf{T}^{2}\).
## 5. Maximal symmetry rank for \(\operatorname{Ric}_{2}>0\) in dimensions \(\mathbf{2n\geq 8}\)
In this section, we finish the proof of Theorem A by induction. The result in dimension six is used to prove dimension eight, and this result is then used as our base for higher dimensions.
**Theorem 5.1**.: _Let \(M\) be a closed, simply connected Riemannian manifold of even dimension \(2n\geq 8\) with \(\operatorname{Ric}_{2}>0\). If \(\mathsf{T}^{n}\) acts effectively and by isometries on \(M\), then \(M\) is either diffeomorphic to \(S^{2n}\) or homeomorphic to \(\mathbb{C}\mathrm{P}^{n}\)._
Proof.: Because \(n\geq 4\), by Lemma 2.3, there exist circle subgroups of \(\mathsf{T}^{n}\) whose fixed-point sets are non-empty. Among all the circle subgroups and all components of their fixed-point sets, choose a subgroup \(\mathsf{S}^{1}\) and a component \(N\) of its fixed-point set such that \(N\) has maximal dimension. By Lemma 2.1, \(N\) is invariant under the action of \(\mathsf{T}^{n-1}=\mathsf{T}^{n}/\mathsf{S}^{1}\), and because \(N\) was chosen to be maximal, the \(\mathsf{T}^{n-1}\)-action on \(N\) must be almost effective. Thus \(\dim N\geq n-1\geq 3\). Because \(N\) is totally geodesic, \(N\) has \(\operatorname{Ric}_{2}>0\). Thus the symmetry rank of \(N\) is at least \(n-1\) and at most \(\left\lfloor\frac{\dim N+1}{2}\right\rfloor\). Because \(\dim M=2n\) and \(N\) has even codimension in \(M\) (Lemma 2.1), it follows that \(\dim N=2n-2\). Thus by Theorem 2.4, the inclusion \(N\hookrightarrow M\) is \((2n-3)\)-connected, and thus \(N\) is simply connected. Note moreover that the odd Betti numbers of \(M\) vanish by the Periodicity Lemma. In particular, \(\chi(M)>0\) and hence the torus action has a fixed point.
We will now prove Theorem 5.1 by induction. For the base case, assume \(\dim M=8\). Then \(N\) is a closed, \(6\)-dimensional, simply connected manifold with \(\operatorname{Ric}_{2}>0\) and maximal symmetry rank. Moreover, the induced torus action on \(N\) has a fixed point, so the proof in dimension six implies that \(N^{6}\) is homeomorphic to \(S^{6}\) or \(\mathbb{C}\mathrm{P}^{3}\). The Connectedness Lemma now implies that \(M\) has the cohomology of \(S^{8}\) or \(\mathbb{C}\mathrm{P}^{4}\) up to degree \(5\), and it follows from Poincare duality that \(M\) is a cohomology \(S^{8}\) or \(\mathbb{C}\mathrm{P}^{4}\) (see [12, Lemma 4.8.(1)]). Because \(M\) is simply connected, it then follows that \(M\) is either a homotopy \(S^{8}\) or \(\mathbb{C}\mathrm{P}^{4}\), and it follows from Theorems 2.7 and 2.8 that \(M\) is either homeomorphic to \(S^{8}\) or diffeomorphic to \(\mathbb{C}\mathrm{P}^{4}\).
For dimensions \(\geq 10\), the result follows by induction by essentially the same argument since \(N\) has codimension \(2\) in \(M\), \(N\) must be either diffeomorphic to \(S^{2n-2}\) or homeomorphic to \(\mathbb{C}\mathrm{P}^{n-1}\) by the induction hypothesis, and the inclusion \(N\hookrightarrow M\) is \((2n-3)\)-connected.
## 6. Maximal symmetry rank for \(\operatorname{Ric}_{k}>0\) with \(\mathbf{k\geq 3}\)
In this section, we prove Theorems B and C. First, we establish general results for manifolds with \(\operatorname{Ric}_{k}>0\) for \(k\geq 3\) that have maximal symmetry rank. The second author shows in [11] that any closed, connected, \(n\)-dimensional Riemannian manifold with \(\operatorname{Ric}_{k}>0\) for some \(k\in\{3,\ldots,n-1\}\) has symmetry rank bounded above by \(\left\lfloor\frac{n+k}{2}\right\rfloor-1\). Notice if \(k=3\), or if \(k=4\) and \(n\) is odd, then this upper bound is equal to \(\left\lfloor\frac{n+1}{2}\right\rfloor\), which is the same bound as for positive sectional curvature or \(\operatorname{Ric}_{2}>0\). Our first lemma applies the Isotropy Rank Lemma to show that many manifolds with \(\operatorname{Ric}_{k}>0\) and maximal symmetry rank have circle actions with codimension \(2\) fixed-point components.
**Lemma 6.1**.: _Let \(M\) be a closed, \(n\)-dimensional Riemannian manifold with \(\operatorname{Ric}_{k}>0\), and assume \(3\leq k\leq n-5\). If a torus \(\mathsf{T}^{r}\) of rank \(r=\left\lfloor\frac{n+k}{2}\right\rfloor-1\) acts effectively and by isometries
_on \(M\), then there exists a circle \(\mathsf{S}^{1}\subset\mathsf{T}^{r}\) whose fixed point set contains a component of codimension \(2\) in \(M\)._
Proof.: Because \(n\geq k+5\), it follows from the equation \(r=\lfloor\frac{n+k}{2}\rfloor-1\) that \(r\geq k+1\). Thus by Lemma 2.3, there exists a circle subgroup of \(\mathsf{T}^{r}\) with non-empty fixed point set in \(M\). Now among all the components of fixed point sets for all circle subgroups of \(\mathsf{T}^{r}\), choose a component \(F^{f}\) whose dimension \(f\) is maximal, and let \(\mathsf{S}^{1}\) denote a circle that fixes \(F\). Because the \(\mathsf{T}^{r}\) action on \(M\) is effective, and since the codimension of \(F\) is even (Lemma 2.1), we must have \(f\leq n-2\). By Lemma 2.1, since the dimension of \(F\) is maximal, the action of \(\mathsf{T}^{r-1}:=\mathsf{T}^{r}/\mathsf{S}^{1}\) on \(F\) must be almost effective. Thus, \(f\geq r-1=\lfloor\frac{n+k}{2}\rfloor-2\). In particular, since \(n\geq k+5\), we have \(f\geq\lfloor\frac{n+k}{2}\rfloor-2\geq\lfloor\frac{2k+5}{2}\rfloor-2=k\). Now if \(f=k\), then the constraints \(\lfloor\frac{n+k}{2}\rfloor=\lfloor\frac{2k+5}{2}\rfloor\) and \(n\geq k+5\) imply that \(n=k+5=f+5\), which contradicts the fact that \(F\) has even codimension in \(M\). Hence, we must have \(f\geq k+1\), and because \(F\) is totally geodesic in \(M\), \(F\) has \(\operatorname{Ric}_{k}>0\). Then by the Isotropy Rank Lemma, \(r-1=\lfloor\frac{n+k}{2}\rfloor-2\leq\lfloor\frac{f+k}{2}\rfloor-1\). Therefore, because \(n+k\equiv f+k\mod 2\) and \(f\leq n-2\), it follows that \(f=n-2\).
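The final step can be displayed explicitly: since \(F\) has even codimension in \(M\), we have \(n\equiv f\pmod{2}\), so

\[\Big\lfloor\frac{n+k}{2}\Big\rfloor-\Big\lfloor\frac{f+k}{2}\Big\rfloor=\frac{n-f}{2}\leq 1,\]

and together with \(f\leq n-2\) this forces \(f=n-2\).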
Next, we establish a rigidity result for highly connected manifolds with \(\operatorname{Ric}_{k}>0\) that have a codimension-two circle fixed-point component.
**Proposition 6.2**.: _Let \(M\) be a closed, \(n\)-dimensional Riemannian manifold with \(\operatorname{Ric}_{k}>0\). Assume \(3\leq k\leq\frac{n-1}{2}\) and \(M\) is \((k-1)\)-connected. If \(\mathsf{S}^{1}\) acts effectively and by isometries on \(M\) and its fixed-point set contains a component \(N\) of codimension \(2\) in \(M\), then \(M\) is diffeomorphic to \(S^{n}\)._
Proof.: By the Connectedness Lemma, the inclusion \(N\hookrightarrow M\) is \((n-k-1)\)-connected. In particular, since \(3\leq k\leq\frac{n-1}{2}\) implies that \(k\leq n-3\), we have that \(N\) is simply connected.
We claim that \(M^{n}\) is a cohomology sphere. Given the claim, Theorems 2.6 and 2.7 imply that \(M\) is diffeomorphic to \(S^{n}\). To prove the claim, we apply the Periodicity Lemma to the inclusion \(N\hookrightarrow M\), which has codimension two and is \((n-k-1)\)-connected. We then have \(e\in H^{2}(M)\) that induces periodicity from degree \(k-1\) to degree \(n-(k-1)\). That is, the map \(H^{i}(M)\to H^{i+2}(M)\) induced by multiplication by \(e\) is surjective for \(k-1\leq i<n-(k-1)-2\) and injective for \(k-1<i\leq n-(k-1)-2\). Because \(M\) is \((k-1)\)-connected and \(k\geq 3\), we have \(e=0\), so combining with the injectivity property implies that \(H^{i}(M)=0\) for all \(0<i\leq n-(k-1)-2\). Since \(n-(k-1)-2\geq\frac{n-1}{2}\), Poincare duality implies that \(M\) is a cohomology sphere, as claimed.
Now we prove our second main result:
**Theorem 6.3** (Theorem B).: _Let \(M^{n}\) be a \((k-1)\)-connected, closed Riemannian manifold with \(\operatorname{Ric}_{k}>0\) for some \(k\geq 3\). If \(n\neq 7\) and if \(M^{n}\) admits an effective, isometric \(\mathsf{T}^{r}\)-action with \(r=\left\lfloor\frac{n+k}{2}\right\rfloor-1\), then one of the following occurs:_
1. \(M\) _is diffeomorphic to_ \(S^{n}\) _and_ \(k\leq 4\)_, with equality only if_ \(n\) _is odd._
2. \(M\) _is diffeomorphic to_ \(S^{3}\times S^{3}\) _and_ \(k=3\)_._
Proof.: The claim on \(k\) in (1) holds from the equation \(r=\left\lfloor\frac{n+k}{2}\right\rfloor-1\) and the classical upper bound on the rank of a smooth torus action on a homology sphere, i.e., the bound \(r\leq\left\lfloor\frac{n+1}{2}\right\rfloor\). The claim on \(k\) in (2) follows from the assumption that \(M\) is \((k-1)\)-connected. It suffices to prove the diffeomorphism claims.
First, assume \(k>\frac{n}{2}\). By the assumption that \(M\) is \((k-1)\)-connected and Poincare duality, we see that \(M\) is a homology sphere. It follows for topological reasons that \(\mathsf{T}^{r}\) has a fixed point in even dimensions and that a codimension one subtorus \(\mathsf{T}^{r-1}\) has a fixed point in odd dimensions. Indeed, this follows immediately in even dimensions since \(\chi(M)=2\), and it follows by a straightforward induction argument in odd dimensions using the classical fact proved by Smith that \(\mathsf{T}^{2}\) cannot act freely on a homology sphere; see [1, Chapter III Theorem 8.1]. In particular, \(r\leq\frac{n+1}{2}\), and there exists a fixed-point component \(N\) of a circle in \(\mathsf{T}^{r}\) with codimension two. Smith's results also imply that \(N\) is a homology sphere, and the Connectedness Lemma implies that \(N\) is simply connected. Therefore \(M\) is diffeomorphic to \(S^{n}\) by Theorems 2.6 and 2.7.
Second, assume \(k\leq\frac{n-1}{2}\). Since \(n\geq 2k+1\geq 7\) and \(n\neq 7\), we have \(k\leq n-5\). Hence Lemma 6.1 implies the existence of a circle \(\mathsf{S}^{1}\) with fixed point set of codimension two, and Proposition 6.2 implies \(M\) is diffeomorphic to \(S^{n}\).
Third, assume \(k=\frac{n}{2}\) and \(k=3\) (so that \(n=6\) and \(r=3\)). If \(\mathsf{T}^{3}\) has no fixed point in \(M^{6}\), then since \(M^{6}\) is \(2\)-connected, we have \(0=\chi(M)=2-b_{3}(M)\). Hence \(b_{3}(M)=2\), \(M\) is a homology \(S^{3}\times S^{3}\), and hence \(M\) is diffeomorphic to \(S^{3}\times S^{3}\) by [1, Corollary 2.6]. Assume instead that \(\mathsf{T}^{3}\) has a fixed point in \(M^{6}\). As in the proof of Proposition 4.1, there exists a circle \(\mathsf{S}^{1}\) with a fixed-point component \(N^{4}\) of codimension two. By the Connectedness Lemma, \(N\hookrightarrow M\) is \(2\)-connected, and hence \(N\) is simply connected. Thus by Poincare duality, \(b_{1}(N)=0=b_{3}(N)\), and hence \(\chi(N)=2+b_{2}(N)\). Because \(M\) is \(2\)-connected, it follows from the estimate on the sum of even Betti numbers (Lemma 2.2) that \(N\) is the only connected component of \(M^{\mathsf{S}^{1}}\), and \(b_{2}(N)=0\). Thus, \(2=\chi(N)=\chi(M^{\mathsf{S}^{1}})=\chi(M)=2-b_{3}(M)\), and hence \(b_{3}(M)=0\). Therefore, \(M\) and \(N\) are both cohomology spheres, and by Theorems 2.6 and 2.7, \(M\) is diffeomorphic to \(S^{6}\).
Finally, assume \(k=\frac{n}{2}\) and \(k\geq 4\). Since \(n\) is even and \(k\geq 4\), the equation \(r=\left\lfloor\frac{n+k}{2}\right\rfloor-1\) implies that \(r>\left\lfloor\frac{n+1}{2}\right\rfloor\). Now if \(\mathsf{T}^{r}\) had a fixed-point component \(F^{f}\), then for any \(p\in F^{f}\), because the isotropy representation of \(\mathsf{T}^{r}\) on \((T_{p}F^{f})^{\perp}\) would be faithful, it would follow that \(r\leq\frac{n-f}{2}\leq\left\lfloor\frac{n+1}{2}\right\rfloor\), a contradiction. Hence \(\mathsf{T}^{r}\) does not have a fixed point, and \(\chi(M)=0\). On the other hand, \(\chi(M)=2+(-1)^{k}b_{k}(M)\) since \(M^{2k}\) is \((k-1)\)-connected. So it follows that \(k\) is odd, \(k\geq 5\), and \(b_{k}(M)=2\). Now Lemma 6.1 applies since \(n=2k\geq k+5\), and we get a circle \(\mathsf{S}^{1}\) whose fixed-point set has a component \(N^{2k-2}\) of codimension two. Because the \(\mathsf{T}^{r}\)-action on \(M\) does not have a fixed point, neither does the induced \(\mathsf{T}^{r}\)-action on \(N\). By the Connectedness Lemma, the inclusion \(N\hookrightarrow M\) is \((k-1)\)-connected, and hence the submanifold \(N^{2k-2}\) is \((k-2)\)-connected. But now since \(k\) is odd, we have \(\chi(N^{2k-2})=2+b_{k-1}(N)>0\). Hence the induced \(\mathsf{T}^{r}\)-action on \(N\) has a fixed point, a contradiction, so this case does not occur.
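For reference, the Euler characteristic identity used above follows from \((k-1)\)-connectedness and Poincare duality: \(b_{i}(M)=0\) for \(0<i<k\) and \(b_{2k-i}(M)=b_{i}(M)\), so

\[\chi(M^{2k})=\sum_{i=0}^{2k}(-1)^{i}b_{i}(M)=b_{0}+(-1)^{k}b_{k}+b_{2k}=2+(-1)^{k}b_{k}(M).\]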
Next we apply Lemma 6.1 to prove our third main result. Recall that \(M\) being an integral cohomology \(\mathbb{C}\mathrm{P}\) up to degree \(k+2\) means that \(H^{1}(M;\mathbb{Z})\cong 0\), \(H^{2}(M;\mathbb{Z})\cong\mathbb{Z}\), and the
map \(H^{i}(M;\mathbb{Z})\to H^{i+2}(M;\mathbb{Z})\) induced by multiplication by a generator \(x\in H^{2}(M;\mathbb{Z})\) is surjective for \(0\leq i<k\) and injective for \(0<i\leq k\).
**Theorem 6.4** (Theorem C).: _Fix \(k\geq 3\), and let \(M^{n}\) be a simply connected, closed Riemannian manifold. Assume further that \(M^{n}\) is an integral cohomology \(\mathbb{C}\mathrm{P}\) up to degree \(k+2\). If \(M\) has \(\mathrm{Ric}_{k}>0\) and admits an effective, isometric \(\mathsf{T}^{r}\)-action with \(r=\left\lfloor\frac{n+k}{2}\right\rfloor-1\), then \(n\) is even, \(M^{n}\) is homeomorphic to \(\mathbb{C}\mathrm{P}^{\frac{n}{2}}\), and \(k=3\)._
Proof.: First, we claim it suffices to prove that \(M\) has the cohomology of \(\mathbb{C}\mathrm{P}^{\frac{n}{2}}\) in all degrees by Theorems 2.6 and 2.8. Indeed, if \(M\) is a cohomology \(\mathbb{C}\mathrm{P}^{\frac{n}{2}}\), it follows that \(\chi(M)>0\), that \(\mathsf{T}^{r}\) has a fixed point, and hence that \(r\leq\frac{n}{2}\). From the equation \(r=\left\lfloor\frac{n+k}{2}\right\rfloor-1\), since \(n\) is even and \(k\geq 3\), we have \(k=3\) and \(r=\frac{n}{2}\). As in the proof of Proposition 4.1, the isotropy representation \(\mathsf{T}^{r}\to\mathsf{U}(r)\) at a fixed point of the \(\mathsf{T}^{r}\)-action is of the form \((z_{1},\ldots,z_{r})\mapsto\mathrm{diag}(z_{1},\ldots,z_{r})\) for a certain choice of coordinates. For each \(j\in\{1,\ldots,r\}\), let \(\mathsf{S}^{1}_{j}\subset\mathsf{T}^{r}\) denote the circle subgroup parametrized by \(z_{j}\), and let \(N_{j}\) denote the fixed point set of \(\mathsf{S}^{1}_{j}\), which is \((n-2)\)-dimensional. Defining \(F^{2i}=\bigcap_{j=1}^{r-i}N_{j}\) for each \(i\in\{2,\ldots,r-1\}\), we have a chain of inclusions
\[F^{4}\subset F^{6}\subset\ldots\subset F^{n-2}\subset M^{n}.\]
Because each space \(F^{2i}\) is fixed by a circle action on the subsequent space in the chain, it follows from [10] that each \(F^{2i}\) is a cohomology \(\mathbb{C}\mathrm{P}^{i}\), and the generator of \(H^{2}(F^{2i};\mathbb{Z})\) restricts to a generator of \(H^{2}(F^{2i-2};\mathbb{Z})\) for all \(i\). In particular, each inclusion induces isomorphisms on cohomology in all degrees less than the dimension of the submanifold. By the universal coefficients theorem and Hurewicz' theorem, it follows that each inclusion is \(3\)-connected. Now \(F^{4}\) is then homeomorphic to \(\mathbb{C}\mathrm{P}^{2}\) by Freedman's classification in dimension four, so we can apply Theorems 2.6 and 2.8 to conclude that \(M^{n}\) is homeomorphic to \(\mathbb{C}\mathrm{P}^{\frac{n}{2}}\).
We now proceed to the proof that \(M\) has the cohomology of \(\mathbb{C}\mathrm{P}^{\frac{n}{2}}\) in all degrees. Since we already know the integral cohomology is correct in degrees up to \(k+2\), the rest follows by Poincare duality if \(k+2\geq\frac{n+3}{2}\) (for proof of a similar fact in rational cohomology, see [11, Lemma 4.8.(1)]). We may therefore assume \(k\leq\frac{n-2}{2}\). In particular, we have \(n\geq 2k+2\geq 8\), and hence \(k\leq\frac{n-2}{2}<n-4\). Lemma 6.1 therefore implies the existence of a circle \(\mathsf{S}^{1}\) containing a fixed-point component \(N^{n-2}\) of codimension two. By Theorem 2.4, the inclusion \(N\hookrightarrow M\) is \((n-k-1)\)-connected, and by Lemma 2.5, there exists \(e\in H^{2}(M)\) such that the homomorphism \(\cup e:H^{i}(M)\to H^{i+2}(M)\) is surjective for \(k-1\leq i<n-k-1\) and injective for \(k-1<i\leq n-k-1\). Because \(M\) is an integral cohomology \(\mathbb{C}\mathrm{P}\) up to degree \(k+2\geq 5\), if \(x\in H^{2}(M)\cong\mathbb{Z}\) denotes a generator, then \(e=\lambda x\) for some \(\lambda\in\mathbb{Z}\). We will show that \(\lambda=\pm 1\).
Define \(\ell:=\lfloor\frac{k}{2}\rfloor\). Then taking \(i=2\ell\) above, we have that \(\cup(\lambda x):H^{2\ell}(M)\to H^{2\ell+2}(M)\) is an isomorphism if \(k\) is even and an epimorphism if \(k\) is odd. However, because \(M\) is an integral cohomology \(\mathbb{C}\mathrm{P}\) up to degree \(k+2\), it follows that this map is in fact an isomorphism \(\mathbb{Z}\to\mathbb{Z}\) in either case. Thus, since \(x^{\ell}\) and \(x^{\ell+1}\) are generators of \(H^{2\ell}(M)\) and \(H^{2\ell+2}(M)\), respectively, it follows that \(\lambda x^{\ell+1}=\pm x^{\ell+1}\), and hence \(\lambda=\pm 1\). Therefore, \(M\) has the cohomology of \(\mathbb{C}\mathrm{P}^{n/2}\), and the result follows.
## 7. Fundamental groups for maximal symmetry rank
In this section, we prove our last main result, which deals with fundamental groups of manifolds with \(\operatorname{Ric}_{2}>0\) and maximal symmetry rank:
**Theorem 7.1** (Theorem D).: _Let \(M^{n}\) be a closed, connected Riemannian manifold with \(\operatorname{Ric}_{2}>0\) and \(\mathsf{T}^{r}\) symmetry with \(r=\left\lfloor\frac{n+1}{2}\right\rfloor\). If \(\pi_{1}(M)\) is non-trivial, then one of the following occurs:_
1. \(M\) _is homotopy equivalent to_ \(\mathbb{R}\mathrm{P}^{n}\) _or a lens space, or_
2. \(M\) _has dimension six and, if additionally the universal cover is_ \(S^{3}\times S^{3}\)_, then_ \(\pi_{1}(M)\cong\mathbb{Z}_{\ell}\times\mathbb{Z}_{m}\) _for some_ \(\ell,m\geq 1\)_._
Proof.: We may assume \(n\geq 3\), since otherwise the condition \(\operatorname{Ric}_{2}>0\) is vacuous. We set \(\Gamma=\pi_{1}(M)\) and pull back the metric and the torus action to the universal cover \(\tilde{M}\). We obtain a \(\mathsf{T}^{r}\)-action on \(\tilde{M}\) that commutes with the free action of \(\Gamma\) on \(\tilde{M}\) by deck transformations. Note that \(\Gamma\) is finite by Myers' theorem.
Following the proof in the simply connected case, we arrive at one of the following situations:
1. There exists \(\mathsf{S}^{1}\subseteq\mathsf{T}^{r}\) such that the fixed-point set \(\tilde{M}^{\mathsf{S}^{1}}\) has a unique component \(N^{n-2}\) with codimension two. Moreover, \(\tilde{M}\) is a cohomology \(S^{n}\) or \(\mathbb{C}\mathrm{P}^{\frac{n}{2}}\).
2. The universal cover is \(S^{3}\times S^{3}\), and the torus \(\mathsf{T}^{3}\) contains a circle \(\mathsf{S}^{1}\) whose fixed-point set is connected and diffeomorphic to \(\mathsf{T}^{2}\).
3. The universal cover is \(\mathbb{C}\mathrm{P}^{2}\#\ldots\#\mathbb{C}\mathrm{P}^{2}\), and the torus \(\mathsf{T}^{2}\) contains \(t=\chi(M)\) distinct isotropy groups \(\mathsf{S}^{1}_{1},\ldots,\mathsf{S}^{1}_{t}\) whose fixed-point sets have a unique \(S^{2}\) component.
Indeed, if the \(\mathsf{T}^{r}\) action has no fixed point, then (ii) holds by Corollary 4.4. If instead the \(\mathsf{T}^{r}\) action has a fixed point, then the existence statements of (i) and (iii) were established in the proofs of the simply connected cases and the uniqueness statements follow from the generalizations of Frankel's theorem provided by Part 2 of Theorem 2.4 and Lemma 3.1, respectively.
Suppose first that we are in Case (i) and that \(\tilde{M}\) is a cohomology \(S^{n}\). Then \(N\) is a cohomology \(S^{n-2}\) by the sum of Betti numbers estimate (Lemma 2.2). Since the actions by \(\Gamma\) and \(\mathsf{T}^{r}\) commute, \(\Gamma\) acts freely on both \(S^{n}\) and \(S^{n-2}\). It follows that \(\Gamma\) is cyclic (see [13, Lemma 1.8]) in general and moreover \(\mathbb{Z}_{2}\) if \(n\) is even. By [13, Theorem 3.4], it follows that \(M\) is homotopy equivalent to real projective space or a lens space, as required.
Next suppose we are in Case (i) and that \(\tilde{M}\) is a cohomology \(\mathbb{C}\mathrm{P}^{\frac{n}{2}}\). By [13, Theorem 7.2], \(N\) is a cohomology \(\mathbb{C}\mathrm{P}^{\frac{n}{2}-1}\). Once again, \(\Gamma\) acts freely on both of these manifolds. In particular, the order of \(\Gamma\) divides both of their Euler characteristics. Since these differ by one, \(\Gamma\) is trivial, a contradiction.
Next suppose we are in Case (ii). Let \(\mathsf{S}^{1}\) be a non-trivial isotropy group in \(\mathsf{T}^{3}\) whose fixed-point set \(F\coloneqq M^{\mathsf{S}^{1}}\) consists of a unique \(2\)-dimensional torus. Fix any \(x\in F\), and consider the diffeomorphism \(\mathsf{T}^{2}\coloneqq\mathsf{T}^{3}/\mathsf{S}^{1}\to F\) given by \(g\mapsto g\cdot x\). Using the inverse of this map, we obtain another map \(\Gamma\to\mathsf{T}^{2}\), denoted by \(\gamma\mapsto g_{\gamma}\) and determined by the
property that \(\gamma\cdot x=g_{\gamma}\cdot x\). We claim that \(\Gamma\to\mathsf{T}^{2}\) is a group homomorphism. Given \(\alpha,\beta\in\Gamma\), we find that
\[g_{\alpha\beta}\cdot x=(\alpha\beta)\cdot x=\alpha\cdot(\beta\cdot x)=\alpha \cdot(g_{\beta}\cdot x)=g_{\beta}\cdot(\alpha\cdot x)=g_{\beta}\cdot(g_{\alpha }\cdot x)=(g_{\alpha}g_{\beta})\cdot x,\]
where we have used that \(\mathsf{T}^{2}\) is abelian and that the \(\mathsf{T}^{2}\)- and \(\Gamma\)-actions commute. By the injectivity of the map \(\mathsf{T}^{2}\to F\), we find that \(g_{\alpha\beta}=g_{\alpha}g_{\beta}\), and hence the map \(\Gamma\to\mathsf{T}^{2}\) is indeed a group homomorphism. Since \(\Gamma\) acts freely, this map is an injection. Hence \(\Gamma\) may be regarded as a subgroup of \(\mathsf{T}^{2}\), and it follows then that \(\Gamma\) is either cyclic or a two-fold product of cyclic groups.
Finally, suppose we are in Case (iii). It suffices to prove that \(\chi(M)=2\), since then \(M=S^{4}\) and we are in the situation of (i). We suppose then that \(\chi(M)\geq 3\) and seek a contradiction. After possibly relabeling, we may assume that the first two circles, \(\mathsf{S}^{1}_{1}\) and \(\mathsf{S}^{1}_{2}\), have the property that their respective \(S^{2}\) fixed-point components contain \(\{f_{0},f_{1}\}\) and \(\{f_{1},f_{2}\}\), respectively, where \(f_{0}\), \(f_{1}\), and \(f_{2}\) are distinct isolated fixed points of the \(\mathsf{T}^{2}\)-action (see Section 3). Since the free \(\Gamma\)-action commutes with the \(\mathsf{T}^{2}\)-action, \(\Gamma\) acts freely on both of the sets \(\{f_{0},f_{1}\}\) and \(\{f_{1},f_{2}\}\), which contradicts the assumption that \(\Gamma\) is non-trivial.
|
2302.03156 | Novel Building Detection and Location Intelligence Collection in Aerial
Satellite Imagery | Detecting building structures and extracting information about them in aerial
images is an important capability for city planning and management and for land use
analysis. It can be the centerpiece to answer important questions such as
planning evacuation routes in case of an earthquake, flood management, etc.
These applications rely on being able to accurately retrieve up-to-date
information. Being able to accurately detect buildings in a bounding box
centered on a specific latitude-longitude value can help greatly. The key
challenge is to be able to detect buildings which can be commercial,
industrial, hut settlements, or skyscrapers. Once we are able to detect such
buildings, our goal will be to cluster and categorize similar types of
buildings together. | Sandeep Singh, Christian Wiles, Ahmed Bilal | 2023-02-06T23:30:51Z | http://arxiv.org/abs/2302.03156v1 | # Novel Building Detection and Location Intelligence in Aerial Satellite Imagery
###### Abstract
Detecting building structures and extracting information about them in aerial images is an important capability for city planning and management and for land use analysis. It can be the centerpiece to answer important questions such as planning evacuation routes in case of an earthquake, flood management, etc. These applications rely on being able to accurately retrieve up-to-date information. Being able to accurately detect buildings in a bounding box centered on a specific latitude-longitude value can help greatly. The key challenge is to detect buildings that can be commercial, industrial, hut settlements, or skyscrapers. Once we are able to detect such buildings, our goal is to cluster and categorize similar types of buildings together.
## 1 Introduction
We plan on reproducing semantic segmentation CNN models based on the U-Net Ronneberger et al. (2015) and Res-U-Net Diakogiannis et al. (2020) architectures, trained by transfer learning using the ImageNet dataset. In addition, we will optimize these models with focal loss, dice loss, cross entropy loss, hierarchical loss and differently weighted intersection-over-union (IoU) loss to overcome issues of scale difference [3] in building detection. To clearly delineate each member's contribution, we organize the paper by team member.
## 2 Methodology
We have decided to work on three different models in parallel, which differ in the following respects:
1. Architecture of the models: We have decided to use different variations of the U-Net architecture, which might help each model focus on different patterns.
2. Pre-training: We have employed both training from scratch and pre-trained weights for the encoder layers across our 3 models.
3. Loss functions: We have used different loss functions in each of the models.
4. Image sampling and augmentations: We have used entirely different techniques to sample images from the available datasets, introducing randomness in the data distribution across models. While models 1 and 2 used random resized 224x224 crops from the large 5000x5000 images and masks, model 3 resorted to a fixed-size split of the 5000x5000 images into 512x512 tiles. The two sampling schemes also use entirely different sets of transforms with different parameters.
While details of all the models are provided in Sections 3, 4 and 5, here is a quick overview of the ensemble technique we exploit:
1. Get a prediction from all 3 models. All predictions must have the same size as the input image. Each model outputs its mask as a tensor of softmax probabilities.
2. Choose the most confident pixel among the 3 models' masks at every pixel location, i.e., take the per-location maximum of the softmax probabilities. This yields a single merged mask of the same size as the input.
3. From the merged mask, drop every pixel that is not at least 0.75 confident. The rationale is that the most confident pixel locations have already been chosen from the 3 models' masks, so all softmax values at this stage should be fairly confident in at least one of the models.
As seen in Fig. 1, the final mask then contains no low-confidence pixels.
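The merging step itself is a few lines of tensor arithmetic. A minimal sketch in PyTorch, assuming each model returns a per-pixel softmax probability for the building class (names and shapes are illustrative):

```python
import torch

def merge_masks(probs, threshold=0.75):
    """Adaptive selection: for every pixel, take the most confident of
    the models' building probabilities, then keep only pixels where
    that best probability reaches the threshold."""
    stacked = torch.stack(probs)              # (num_models, H, W)
    most_confident, _ = stacked.max(dim=0)    # per-pixel max over models
    return (most_confident >= threshold).to(torch.uint8)

# Illustrative usage with three dummy model outputs.
p1, p2, p3 = (torch.rand(224, 224) for _ in range(3))
final_mask = merge_masks([p1, p2, p3])
```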
## 3 Standard U-Net Architecture
As all three models are derived from U-Net, we begin with a discussion of the canonical U-Net architecture. U-Net is a modular model largely consisting of encoder blocks and decoder blocks. A graphical depiction of a U-Net model can be found in Figure 5.
### Encoder blocks
The encoder layers are primarily responsible for detecting the 'what' elements of the images. The goal is to extract features from the image at different scales and different levels of abstraction. As such, at every step of the encoder, two 2D convolutional blocks are used to extract information from the image and double the size of the feature space. At each encoder layer, we used a max-pool layer with a 2x2 kernel and a stride of 2 for spatial down-sampling. This allowed us to increase the number of filters at each encoder layer without being extremely computationally expensive, and to increase the receptive field of the filters in deeper layers, allowing for segment detection at multiple scales.
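A minimal sketch of one such encoder block (batch normalization is included here because the concrete models below use it; channel sizes are illustrative):

```python
import torch.nn as nn

class EncoderBlock(nn.Module):
    """Two 3x3 convolutions that expand the feature space, followed by
    2x2/stride-2 max pooling for spatial down-sampling."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.convs = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1, bias=False),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1, bias=False),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )
        self.pool = nn.MaxPool2d(2, stride=2)

    def forward(self, x):
        features = self.convs(x)          # kept for the skip connection
        return features, self.pool(features)
```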
### Decoder blocks
The decoder layers are the up-sampling layers of the model. Their primary purpose is to localize the features extracted by the encoder blocks. This information is essential in semantic segmentation in order to output a mask in which the detected buildings appear in the right places. For up-sampling, we used transposed convolution layers. This ultimately allows us to assign a class label to each pixel of the image as part of our semantic segmentation.
At each decoder layer, we also make use of the skipped connection provided by the corresponding encoder layer. The skipped connections cross from the same-sized part of the encoder to the decoder. They help to overcome the vanishing gradient problem, increase dimensionality, and regain the initial spatial information that was lost along the encoding path.
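A matching decoder block, sketched under the same conventions (the concatenation carries the skipped connection):

```python
import torch
import torch.nn as nn

class DecoderBlock(nn.Module):
    """Transposed convolution for up-sampling, concatenation with the
    same-sized encoder features, then two 3x3 convolutions."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.up = nn.ConvTranspose2d(in_ch, out_ch, 2, stride=2)
        self.convs = nn.Sequential(
            nn.Conv2d(out_ch * 2, out_ch, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1),
            nn.ReLU(inplace=True),
        )

    def forward(self, x, skip):
        x = self.up(x)
        x = torch.cat([skip, x], dim=1)   # skipped connection
        return self.convs(x)
```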
### Integration
A full U-Net model is composed of N encoder blocks and N-1 decoder blocks. The feature space of the first encoder block is a hyperparameter, often set to 64 or 128. Save for the last encoder block, the output of the final convolutional layer in each encoder block is cropped and concatenated with the output of the transposed convolutional layer of the corresponding decoder block. At the end of the model, a 1x1 convolutional layer is used to create a classifier head with the same feature space as the number of classes. This can be passed to a softmax layer to produce class probabilities.
## 4 Model 1 approach (Christian)
### Data Pipeline and Exploration
For this project, we used the Inria Aerial Image Labelling Dataset for training [1]. The dataset consists of 360 (180 train and 180 test) 5000x5000 pixel full-color images with corresponding masks indicating the presence of building or non-building pixels. Only the training set had ground-truth segmentation masks available. Images were taken from a variety of settings, including rural and urban cities from different continents. Two problems had to be solved to enable training with this dataset: generating augmented data and providing quick random access.
#### 4.1.1 Data pre-processing and augmentation
To generate more data for training, data augmentation was undertaken. Pytorch's standard transformation library does not make allowances for maintaining consistent transformations between an image and a segmentation mask, so the functional transforms library was used, which allows the randomness to be provided by external variables that can be held static between the ground truth and full-color images. For each of the 180 input training images, 350 224x224x3 image patches were created. This was intended to allow transfer learning for networks trained on ImageNet. A patch was taken from the image with random width (between 100 and 500 px), height (within +/- 10% of the random width), and image origin. This patch was then randomly flipped horizontally and/or vertically and normalized by the ImageNet standard deviation and mean. The intention was to teach the model scale- and orientation-invariant features. Once the incoming data was processed, it was split 80/20 into train and validation sets. Data from all 5 cities in the dataset was randomly selected for both the validation and training sets, as the test set consists of different cities and would be usable for testing how well the model can generalize. A manual seed was set such that this split was repeatable if restarting training from a checkpoint.
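A sketch of the paired transformation using torchvision's functional API, where the random parameters are drawn once and applied to both image and mask (PIL inputs and ImageNet statistics assumed):

```python
import random
import torchvision.transforms.functional as TF

def paired_patch(image, mask):
    """One random 224x224 patch with identical crop/flips for image and mask."""
    width = random.randint(100, 500)
    height = random.randint(int(0.9 * width), int(1.1 * width))
    top = random.randint(0, image.height - height)
    left = random.randint(0, image.width - width)

    image = TF.resized_crop(image, top, left, height, width, [224, 224])
    mask = TF.resized_crop(mask, top, left, height, width, [224, 224],
                           interpolation=TF.InterpolationMode.NEAREST)
    if random.random() < 0.5:
        image, mask = TF.hflip(image), TF.hflip(mask)
    if random.random() < 0.5:
        image, mask = TF.vflip(image), TF.vflip(mask)

    image = TF.normalize(TF.to_tensor(image),
                         mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225])
    return image, TF.to_tensor(mask)
```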
#### 4.1.2 Caching
It was found that performing the loading of the full-color images and performing transformations on the fly was too computationally intensive. To alleviate this problem, after the first time the transformations were done, the resulting input and target tensors were saved to disk. This reduced the time to train significantly, as a 70 MB image did not have to be loaded and manipulated thousands of times per epoch, but instead a 600 KB tensor could be used.
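A minimal sketch of this caching scheme (paths and the patch-generation callback are illustrative):

```python
import os
import torch

def load_patch(cache_dir, index, make_patch):
    """Return the (input, target) tensor pair for one patch, computing
    it once and reusing the small cached tensors afterwards."""
    path = os.path.join(cache_dir, f"patch_{index}.pt")
    if os.path.exists(path):
        return torch.load(path)
    sample = make_patch(index)   # expensive: load 5000x5000 image, crop, ...
    torch.save(sample, path)
    return sample
```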
#### 4.1.3 Other Considerations
It was also deemed important to add support for visualization of training-related metrics. Tensorboard support was added to the project to track training and validation set loss and accuracy, precision-recall curves for the validation set, and visualization of the forward pass of the model on the validation set. One last consideration was the use of a seed when splitting the training and validation sets. It was noticed that when resuming training from a checkpoint, the validation set was not the same as before the checkpoint. By maintaining a constant seed, a barrier was maintained between the two sets.
#### 4.1.4 Dataset Statistics
As is often the case with segmentation tasks, the dataset was not balanced between building pixels and non-building pixels. The training dataset was analyzed to determine the prevalence of each class.
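A minimal sketch of how this prevalence can be computed from the ground-truth masks (file layout illustrative):

```python
import glob
import numpy as np
from PIL import Image

def class_prevalence(mask_dir):
    """Fraction of building vs. non-building pixels across all masks,
    assuming binary masks with nonzero values marking buildings."""
    building = total = 0
    for path in glob.glob(f"{mask_dir}/*.tif"):
        mask = np.array(Image.open(path)) > 0
        building += int(mask.sum())
        total += mask.size
    return building / total, 1 - building / total
```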
### U-net from scratch in Pytorch
To test the hypothesis that building detection was a sufficiently specific domain to merit training from scratch, a u-net was created with random Xavier-initialized weights. 4 encoder and 3 decoder blocks were used, with the first encoder block having a hidden dimension of 64 features. Both the original U-net paper (Ronneberger et al., 2015) and Johannes Schmidt's blog posts (Schmidt, 2021) were consulted in the creation of the model.
Two major deviations were attempted from the models mentioned above. Both u-nets resulted in a cropped image with every convolution due to the use of valid padding. By using same padding on convolutions and transposed convolutions, we can return an image that is of equal size to the input. This may result in slightly worse accuracy at the extremities of the image due to the extrapolation employed by same padding, but it simplifies some aspects of the analysis, as every mask pixel has a corresponding prediction.
Secondly, as the output of a 2-class softmax classifier only has 1 degree of freedom, it was attempted to perform classification as a single-class regression, with the output of the regression put through a sigmoid function. This one-channel output can then be interpreted as p(building). This approach was eventually discarded, as it had a very small impact on model size (affecting only the final 1x1 convolutional layer), and adding a second classifier dimension increased model performance by a few percentage points.
Finally, batch normalization was added between the convolutional and activation layers in encoder and decoder blocks. These recenter the distribution of the output of the convolutional layers and add to stability in training, as seen in Santurkar et al. (2019).
Figure 1: Ensemble method being employed. Output softmax probabilities are compared in order to select the most confident pixels across the 3 models, then thresholded to be at least 75% confident to be considered for the final mask.
To discourage overfitting, dropout was added between the output of the final decoder layer and the 1x1 convolution.
#### 4.2.1 Loss function
3 different loss functions were attempted with this model. With the regression-based approach, weighted mean square error was used due to the class imbalance. The effective number of building and non-building pixels was calculated on a per-batch basis, similar to the methodology in Cui et al. (2019). This weighting was used to weigh the loss on the minority class (buildings) more heavily.
Once the model had progressed to a two-class method, two losses were pursued: weighted cross-entropy loss and dice loss. Dice loss was pursued due to its background in segmentation tasks and invariance with respect to class imbalance (as dice loss is related to the size of the true positive region). Binary cross-entropy weighted by the inverse of effective number was also explored. This provides a more convex loss function that should be easier and more stable to train.
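Minimal sketches of the two losses, with the effective-number weighting computed per batch (`beta` is a hyperparameter of the Cui et al. scheme; shapes assume logits of size (N, 2, H, W) and binary targets of size (N, H, W)):

```python
import torch
import torch.nn.functional as F

def dice_loss(logits, target, eps=1.0):
    """Soft dice loss on the building-class probability."""
    prob = torch.softmax(logits, dim=1)[:, 1]        # p(building)
    inter = (prob * target).sum()
    return 1 - (2 * inter + eps) / (prob.sum() + target.sum() + eps)

def effective_number_weights(target, beta=0.999):
    """Per-batch class weights from the effective number of samples."""
    counts = torch.stack([(target == 0).sum(),
                          (target == 1).sum()]).float().clamp(min=1)
    weights = (1 - beta) / (1 - beta ** counts)
    return weights / weights.sum()

def weighted_ce(logits, target):
    """Cross entropy with the minority (building) class up-weighted."""
    return F.cross_entropy(logits, target.long(),
                           weight=effective_number_weights(target))
```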
#### 4.2.2 Training
Training was performed over 20 epochs with the Adam optimizer. No learning rate scheduler was used to govern learning rate as epochs progressed, as Adam should manage its own learning rates on a per-parameter basis Kingma and Ba (2017). A batch size of 20 was used to fit in GPU memory. The model state was saved to disk whenever validation accuracy exceeded the previous maximum to allow for training to be resumed later.
#### 4.2.3 Results
Weighted cross-entropy resulted in marginally better training efficiency and overall metrics, but by an almost negligible amount. For both approaches, overfitting does not appear to be a concern, as the validation and training accuracy are almost identical.
Though this model performed worse than Model 2, it is hard to say whether this is due to differences in pre-training or to the squeeze-and-excitation attention layers. It is likely, however, that due to learning features from scratch, it may provide diversity in the ensemble that can help overall accuracy.
## 5 Model 2 approach (Ahmed)
### Model Specifications
#### 5.1.1 Double Convolution Blocks
The U-Net build consisted of double convolution blocks, where each convolution layer has a kernel size of 3 and stride and padding of 1. We set the bias to false in order to add a BatchNorm layer, which is then followed by a ReLU activation layer. We settled on a small 3x3 receptive field in our convolution layers in order to be able to detect very small edges and shapes in the aerial images. Doing so is especially relevant for our aerial images, as there is a lot of noise in the images and the model needs to be able to use small edges and shapes to detect buildings, since buildings appear in many different sizes in the input images.
| Loss Fn | Accuracy | IOU Score | F1-Score |
| --- | --- | --- | --- |
| Dice | 94.9% | 0.717 | 0.836 |
| BCE | 95.0% | 0.726 | 0.841 |

Table 1: Model 1 Validation Set results after 20 Epochs
Figure 2: Training accuracy over 20 epochs for model 1. Weighted CE in orange, Dice Loss in red.
Figure 3: Validation accuracy over 20 epochs for model 1. Weighted CE in orange, Dice Loss in red.
#### 5.1.2 Encoder Layers
The authors of _U-Net: Convolutional Networks for Biomedical Image Segmentation_ [14] recommend encoding layers with output channels 64, 128, 256, 512 and 1024. However, we found more success with output channels 16, 32, 64, 128 and 256. We believe this is because the lower output channels of 16 and 32 at the start allow us to detect really small building segments with a small receptive field. In addition, the 512 and 1024 channel layers did not lead to any significant performance gains in our testing.
#### 5.1.3 Decoder Layers
The decoder layers mirror the standard U-Net decoder described in Section 3: each layer up-samples and concatenates the skipped connection from the same-sized encoder layer, overcoming vanishing gradients and regaining the spatial information lost along the encoding path.
### Pre-trained ResNet34 Encoder Specifications
We now add ResNet34 encoder layers to the model. As such, we now perform the double convolution blocks 3, 4, 6 and 3 times at the successive encoder levels, use skipped connections between encoder layers, and use a higher stride to down-sample instead of max pooling. These encoder layers are also pre-trained on the ImageNet dataset.
#### 5.2.1 Encoder Modifications
After finding success in our U-Net built with 16, 32 and 64 output-channel initial encoder layers, we replace the initial ResNet convolution, ReLU and max-pool layers with our U-Net 16, 32 and 64 output-channel encoder layers with skipped connections, in order to preserve much of the small-shape and edge information in the images.
#### 5.2.2 Attention Mechanism
For the loss function, we will be using the Dice loss to create cleaner mask segments representing the buildings. To help the model reduce the Dice loss, we also include an attention mechanism using spatial and channel 'squeeze & excitation' blocks. This aids the encoder layers in spatial encoding for more accurate mask prediction and better network flow. The authors of _Recalibrating Fully Convolutional Networks with Spatial and Channel 'Squeeze & Excitation' Blocks_ [17] found a reduction of 4-9% in the Dice loss. We see similar results in our testing.
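A sketch of the concurrent spatial and channel squeeze & excitation block (the reduction ratio is illustrative):

```python
import torch.nn as nn

class SCSEBlock(nn.Module):
    """Recalibrates a feature map with a channel-wise gate (squeeze
    spatially, excite channels) plus a spatial gate (1x1 convolution)."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.channel_se = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )
        self.spatial_se = nn.Sequential(
            nn.Conv2d(channels, 1, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.channel_se(x) + x * self.spatial_se(x)
```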
#### 5.2.3 Results
In our testing, we saw the pre-trained ImageNet backbone significantly increase model performance. After 15 epochs, we saw the following results.
Detailed Model 2 results and mask outputs are available in Appendix Section F.
## 6 Model 3 Distinctive Approach (Sandeep)
### Data Sampling Strategy
For this model, we have used "Progressive Resizing," a training technique in which we purposefully change the content of the inputs by resizing image regions that cover more area. Instead of randomized 224x224 crops, we created 512x512 non-overlapping, contiguous tiles from the large 5000x5000 images and masks, and only then resized them to 224x224 input images. On average, each tile contains almost 4 times more buildings than the crops used for models 1 and 2. Hence, the model more easily learns smaller buildings in more crowded areas [1].
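A sketch of the tiling step (tile and target sizes as described above):

```python
from PIL import Image

def tile_image(path, tile=512, out=224):
    """Cut a large image into non-overlapping, contiguous tiles and
    resize each tile to the network input size."""
    img = Image.open(path)
    for top in range(0, img.height - tile + 1, tile):
        for left in range(0, img.width - tile + 1, tile):
            patch = img.crop((left, top, left + tile, top + tile))
            yield patch.resize((out, out), Image.BILINEAR)
```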
### Architectural Considerations
For model 3, we evaluated 3 types of encoders (ResNet18/34/50) and chose ResNet34: ResNet18 struggled to encode features of smaller buildings successfully, while ResNet50 detected buildings very similarly to ResNet34 but more slowly, without any marginal increase in performance. One more significant improvement in model 3 was the use of Pixel Shuffle up-sampling in the decoder blocks, as provided by the shuffle-block implementation [1] (a sketch follows after the table below).
| Set | Acc | Loss | IOU Score | F1-Score |
| --- | --- | --- | --- | --- |
| Train | 0.965 | 0.116 | 0.749 | 0.856 |
| Val | 0.961 | 0.129 | 0.702 | 0.824 |

Table 2: Validation Set results after 15 Epochs
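Returning to the architectural choice above, a minimal sketch of a Pixel Shuffle up-sampling block (channel sizes illustrative):

```python
import torch.nn as nn

class PixelShuffleUp(nn.Module):
    """Up-sample by projecting to scale**2 times the output channels
    and rearranging those channels into space (sub-pixel convolution)."""
    def __init__(self, in_ch, out_ch, scale=2):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch * scale ** 2, 1)
        self.shuffle = nn.PixelShuffle(scale)

    def forward(self, x):
        return self.shuffle(self.conv(x))
```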
### Training and Validation Split
For model 3, we split the data on the basis of geography instead of a random ratio split. The training data covers 5 cities: Austin, Chicago, Kitsap, Tyrol and Vienna. Different cities are included in each of the subsets: e.g., images over Chicago are included in the training set but not in the test set, while images over San Francisco are included in the test set but not in the training set. At the same time, we have tried to include all types of building structures in the training data, e.g., low-rise vs. high-rise vs. community-living apartment complexes.
### Custom Loss Function Design
Model 3 did not use cross-entropy loss. Instead, we wrote our own custom loss function, which helped us predict foreground pixels with higher softmax confidence: a combined loss of Dice loss and focal loss with equal weights. Focal loss penalizes confident wrong predictions more heavily; we used a gamma value of 2. The Dice term provides feedback that strives to keep both precision and recall as high as possible. For model 3, we also used the Dice score metric.
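A sketch of the combined loss (gamma = 2 as stated; equal weighting between the two terms; shapes as in the loss sketches for model 1):

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, target, gamma=2.0):
    """Cross entropy scaled by (1 - p_t)**gamma, so confident wrong
    predictions are penalized more heavily."""
    ce = F.cross_entropy(logits, target.long(), reduction="none")
    p_t = torch.exp(-ce)               # probability of the true class
    return ((1 - p_t) ** gamma * ce).mean()

def combined_loss(logits, target):
    """Equal-weight sum of dice loss and focal loss."""
    prob = torch.softmax(logits, dim=1)[:, 1]
    inter = (prob * target).sum()
    dice = 1 - (2 * inter + 1) / (prob.sum() + target.sum() + 1)
    return 0.5 * dice + 0.5 * focal_loss(logits, target)
```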
### Training Convergence
For model 3, we used an LR finder before starting to fine-tune the pre-trained weights of the ResNet34 encoder. This helped us find the most appropriate maximum LR value. The second technique we employed was the "fit one cycle" schedule (Smith, 2018) to achieve super-convergence: the LR is increased to its maximum value over the initial batches before being annealed (Sylvain Gugger, 2018). Please refer to the figure for both the LR finder and fit one cycle. This technique is taken from Leslie Smith's iconic super-convergence paper. We trained for a total of 40 epochs and saved the model only when a better validation score was seen without over-fitting, with the Dice score approaching 92%.
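The one-cycle schedule is available directly in PyTorch; a minimal sketch with a stand-in model (the maximum learning rate would come from the LR finder):

```python
import torch
import torch.nn as nn

model = nn.Conv2d(3, 2, 1)                       # stand-in for the U-Net
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

steps_per_epoch, epochs = 100, 40
scheduler = torch.optim.lr_scheduler.OneCycleLR(
    optimizer, max_lr=1e-3,                      # value from the LR finder
    steps_per_epoch=steps_per_epoch, epochs=epochs)

for _ in range(epochs * steps_per_epoch):
    optimizer.zero_grad()
    loss = model(torch.randn(1, 3, 8, 8)).mean()
    loss.backward()
    optimizer.step()
    scheduler.step()                             # ramp up, then anneal
```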
### Post Processing Enhancements
As can be seen in some of our validation images, the masks created by this model are not always crisp and polygonal. Please refer to the appendix figure for model 3's traditional computer-vision segmentation techniques and our attempt to fuse their results with the U-Net-predicted mask. We experimented with Otsu's threshold, watershed segmentation and the SLIC superpixel algorithm for segmentation. We selected all mask segments from the superpixel algorithm's output that coincide with the U-Net mask and tried to shape correct building polygons with sharp edges (see fig. 8).
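A sketch of the superpixel fusion, keeping a SLIC segment whenever it mostly coincides with the U-Net mask (thresholds illustrative):

```python
import numpy as np
from skimage.segmentation import slic

def refine_with_superpixels(image, unet_mask, n_segments=2000):
    """Snap the predicted mask to superpixel boundaries: a segment is
    kept iff the majority of its pixels are predicted as building."""
    segments = slic(image, n_segments=n_segments, compactness=10)
    refined = np.zeros_like(unet_mask)
    for label in np.unique(segments):
        region = segments == label
        if unet_mask[region].mean() > 0.5:
            refined[region] = 1
    return refined
```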
## 7 Experiences and Challenges
### Challenges
Satellite images are very noisy and affected by many factors such as weather, zoom level, resolution, trees, and the cost to obtain them (Wikipedia, 2018). After evaluating several sources, we used the Google Static Maps API for pulling additional data because of its quality and ease of use.
### Project Success Criterion
We were able to build models with a Dice score of more than 90 percent on the test set, with model 2 detecting more than 96 percent of ground-truth pixels on the validation set. With the help of softmax-based adaptive selection and ensembling, we achieved detection of more than 93 percent of ground-truth pixels on the test set.
### Conclusion and Future Aspirations
We have explored and confirmed that deep-learning-based segmentation is very effective in segmenting buildings. These models could be extended to clustering similar buildings, classifying residential vs. commercial buildings, predicting future construction, or detecting illegal construction or activities. Potential technical improvements include learned polygonization, elastic transformations during training, and model compression for cheaper inference.
Figure 4: Model 3 Training Curves.
| Train_Loss | Valid_Loss | Dice_Score |
| --- | --- | --- |
| 0.102414 | 0.115400 | 0.920870 |

Table 3: Model 3 Losses and Metrics Values |
2309.01433 | Lifting the Reasoning Level in Generic Weak Memory Verification
(Extended Version) | Weak memory models specify the semantics of concurrent programs on multi-core
architectures. Reasoning techniques for weak memory models are often
specialized to one fixed model and verification results are hence not
transferable to other memory models. A recent proposal of a generic
verification technique based on axioms on program behaviour expressed via
weakest preconditions aims at overcoming this specialization to dedicated
models. Due to the usage of weakest preconditions, reasoning however takes
place on a very low level requiring the application of numerous axioms for
deriving program properties, even for a single statement. In this paper, we
lift reasoning in this generic verification approach to a more abstract level.
Based on a view-based assertion language, we provide a number of novel proof
rules for directly reasoning on the level of program constructs. We prove
soundness of our proof rules and exemplify them on the write-to-read causality
(WRC) litmus test. A comparison to the axiom-based low-level proof reveals a
significant reduction in the number of required proof steps. | Lara Bargmann, Heike Wehrheim | 2023-09-04T08:30:00Z | http://arxiv.org/abs/2309.01433v1 | # Lifting the Reasoning Level in Generic Weak Memory Verification (Extended Version)+
###### Abstract
Weak memory models specify the semantics of concurrent programs on multi-core architectures. Reasoning techniques for weak memory models are often specialized to one fixed model and verification results are hence not transferable to other memory models. A recent proposal of a _generic_ verification technique based on _axioms_ on program behaviour expressed via weakest preconditions aims at overcoming this specialization to dedicated models. Due to the usage of weakest preconditions, reasoning however takes place on a very low level requiring the application of numerous axioms for deriving program properties, even for a single statement.
In this paper, we lift reasoning in this generic verification approach to a more abstract level. Based on a view-based assertion language, we provide a number of novel _proof rules_ for directly reasoning on the level of program constructs. We prove soundness of our proof rules and exemplify them on the write-to-read causality (WRC) litmus test. A comparison to the axiom-based low-level proof reveals a significant reduction in the number of required proof steps.
Keywords: Axiomatic Reasoning · Concurrency Verification · Weak Memory Models.
## 1 Introduction
The behaviour of concurrent programs running on modern multi-core processors is influenced by the (weak) _memory model_ of the processor. A memory model fixes how concurrent threads can access shared variables, in particular which values of shared variables a thread can read. The behaviour of weak memory models differs from the often assumed _sequential consistency_ (SC) [21] in which an execution is simply an interleaving of sequential executions of threads following their program order.
As weak memory models deviate from sequential consistency, verification techniques for concurrent programs like rely-guarantee [28] or Owicki-Gries reasoning [24] become unsound on weak memory models. Consequently, past years have seen the development of numerous reasoning approaches _specific_ to a memory model (like, e.g., [4, 9, 10, 20, 25]). The drawback of all these techniques is
that a correctness proof for a concurrent program running on one memory model is not directly transferable to other memory models.
To alleviate this problem, Doherty et al. [12] propose a _generic_ reasoning technique for weak memory models provided these have a _view-based semantics_ [13]. A view of a thread specifies which write events to shared variables a thread can observe (and hence read from). The core of the reasoning technique is the concept of threads being _view-maximal_ and of memory-model internal steps not invalidating view-maximality. On top of such novel concepts, [12] simply builds on standard Owicki-Gries reasoning for concurrent programs [24]. So far, memory models SC, TSO [26], PSO [1] and C11 RAR [9] have been shown to fall into this category. Reasoning (about single program instructions) then proceeds by applying low-level axioms based on weakest preconditions. The result is a correctness proof of a concurrent program (a proof outline) which is sound for _every_ memory model satisfying the axioms.
While providing a memory-model independent approach, the technique however suffers from the need to apply very low-level, detailed axioms combined with standard properties of weakest preconditions. Moreover, reasoning engines (like Isabelle, as used in [10]) might not record the axioms employed for a specific proof. This hinders transferability to memory models fulfilling only a _subset_ of the axioms: we do not know anymore whether a proof is or is not valid on such a partially fitting model.
To improve on these shortcomings, we propose a lifting of the reasoning technique to a higher level. Starting from a view-based language for formulating assertions on concurrent programs, we develop several novel proof rules for program statements. We prove soundness of each of these rules via the low-level axioms. Moreover, together with every new rule we list the required axioms. This enables us to directly see whether a proof is transferable to a memory model which only partially fulfills the axiom set. We exemplify our new proof rules on the write-to-read causality litmus test (see, e.g., [5]) for which we provide both the low-level and the novel high-level reasoning steps. This demonstrates a significant reduction in the number of required proof steps.
## 2 Program Syntax
We start by introducing the syntax of concurrent programs. We define a concurrent program as a parallel composition of sequential programs. Each thread \(t\in\mathsf{Tid}\) runs a sequential program \(\mathit{Com}\) and with the function \(\Pi:\mathsf{Tid}\to\mathit{Com}\) we model a concurrent program over threads \(\mathsf{Tid}\). We let \(\mathsf{Var}_{\mathsf{G}}\) be the set of global variables and \(\mathsf{Var}_{\mathsf{L}}\) the set of local variables (or _registers_) with \(\mathsf{Var}_{\mathsf{G}}\cap\mathsf{Var}_{\mathsf{L}}=\varnothing\) and \(\mathsf{Var}=\mathsf{Var}_{\mathsf{G}}\cup\mathsf{Var}_{\mathsf{L}}\). We assume that initially all variables have the value \(0\).
For \(x\in\mathsf{Var}_{\mathsf{G}}\), \(r\in\mathsf{Var}_{\mathsf{L}}\) and value \(v\in\mathsf{Val}\) the following grammar defines \(\mathit{Com}\):
\[E ::= v \mid e\]
\[\mathit{com} ::= \mathit{skip} \mid \mathit{fnc} \mid r:=E \mid r:=x \mid r:=^{\mathsf{RS}}x \mid x:=E \mid x:=^{\mathsf{WS}}E\]
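To make this syntax concrete, it can be encoded as a small abstract syntax. The following Python sketch is purely illustrative (the class and field names are our own, and \(E\) is restricted here to values and single registers):

```python
from dataclasses import dataclass
from typing import Union

Val = int   # values v
Reg = str   # local registers r
GVar = str  # global (shared) variables x

# Expressions E ::= v | e; for this sketch, e is a single register.
Expr = Union[Val, Reg]

@dataclass
class Skip:                # skip
    pass

@dataclass
class Fence:               # fnc
    pass

@dataclass
class RegAssign:           # r := E
    r: Reg
    e: Expr

@dataclass
class Load:                # r := x, or r :=^RS x when synced is True
    r: Reg
    x: GVar
    synced: bool = False

@dataclass
class Store:               # x := E, or x :=^WS E when synced is True
    x: GVar
    e: Expr
    synced: bool = False

Com = Union[Skip, Fence, RegAssign, Load, Store]

# Example: thread 3 of the WRC litmus test, r2 :=^RS y ; r3 := x
thread3 = [Load("r2", "y", synced=True), Load("r3", "x")]
```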
Figure 1: Proof outline for the write-to-read causality (WRC) litmus test. The initial assertion is \(\{[x=0]_{1}\cap[x=0]_{2}\cap[y=0]_{3}\cap r_{1}=0\cap r_{2}=0\}\). Thread 1 executes **1**: \(x:=1\); Thread 2 executes **2**: \(r_{1}:=x\) and **3**: \(y:=^{\mathsf{WS}}1\); Thread 3 executes **4**: \(r_{2}:=^{\mathsf{RS}}y\) and **5**: \(r_{3}:=x\). The final assertion is \(\{r_{1}\neq 1\cup r_{2}\neq 1\cup r_{3}=1\}\). Each command is annotated with pre- and post-assertions \(P_{i,j}\), e.g. \(P_{1,1}:[x=0]_{1}\cap[x\not\approx 1]_{2}\cap r_{1}\neq 1\) and \(P_{3,1}:r_{1}\neq 1\cup\langle y=1\rangle^{\mathsf{S}}[x=1]_{3}\).
The semantics of programs \(\mathit{Com}\) depends on the specific memory model a program runs on. In general, such semantics are typically defined in the following way (see, e.g., [12]): First, a semantics for the _local_ part, i.e., the registers, is defined. As registers are not shared among threads, every thread directly writes to and reads from its registers. For shared variables, the local semantics simply assumes that any value can be read. In a next step, the local semantics is combined with a specific memory model semantics that details which values can actually be read by which threads in some given state. As we develop a generic reasoning approach here, we cannot further detail the semantics (we have no fixed memory model).
## 3 Axiomatic Reasoning
Instead of trying to provide separate correctness proofs for WRC for all memory models, we could employ the generic approach in [12] and construct _one_ proof which is then valid for all memory models fulfilling the axioms employed _in this proof_. To this end, the generic reasoning technique abstracts from the semantics (and thus from a concrete memory model) and bases reasoning on _axioms_.
### Axioms
The approach of [12] reasons about arbitrary transition systems \(\mathit{TS}\mathrel{\widehat{=}}(\mathsf{Act}_{\mathsf{ext}},\Sigma,I,T)\) where \(\mathsf{Act}_{\mathsf{ext}}\) is the set of actions, \(\Sigma\) a set of states, \(I\subseteq\Sigma\) a set of initial states and \(T\in\mathsf{Tid}\times\mathsf{Act}\to 2^{\Sigma\times\Sigma}\) a set of transitions. The axiomatisation is built upon the _weakest liberal precondition transformer_ (wlp) [11], which is used both as a basis for property specification and for verification. For a relation \(R\) and set of states \(P\) (representing a predicate), we let \(\mathsf{wlp}:2^{\Sigma\times\Sigma}\times 2^{\Sigma}\to 2^{\Sigma}\) be
\[\mathsf{wlp}(\mathit{R},\mathit{P})\mathrel{\widehat{=}}\{\sigma\in\Sigma \mid\forall\,\sigma^{\prime}:(\sigma,\sigma^{\prime})\in\mathit{R}\Rightarrow \sigma^{\prime}\in\mathit{P}\}\]
Figure 2 details some properties of wlp, where \(\mathbin{;}\) denotes relational composition and \(R[\cdot]\) relational image. Here, \(R\) typically is the relation \(T(t,a)\), \(t\in\mathsf{Tid},a\in\mathsf{Act}_{\mathsf{ext}}\). We say \(R\) is _disabled_ in a state \(\sigma\) iff \(\sigma\in\mathsf{dis}(R)\) holds, where \(\mathsf{dis}(R)\mathrel{\widehat{=}}\mathsf{wlp}(R,\varnothing)\). This will in particular be employed for read actions, to state that it is impossible for a thread \(t\) to read a certain value of a shared variable.
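For intuition, \(\mathsf{wlp}\) and \(\mathsf{dis}\) can be computed directly on an explicit finite transition relation. The following Python sketch is illustrative only; the states and relations are made up:

```python
def wlp(R, P, states):
    """wlp(R, P): states from which every R-successor lies in P."""
    return {s for s in states if all(t in P for (u, t) in R if u == s)}

def dis(R, states):
    """dis(R) = wlp(R, empty set): states with no R-successor at all."""
    return wlp(R, set(), states)

# Tiny example with three abstract states; s2 has no successor.
states = {"s0", "s1", "s2"}
R = {("s0", "s1"), ("s1", "s2")}
assert dis(R, states) == {"s2"}
assert wlp(R, {"s2"}, states) == {"s1", "s2"}
```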
The core concept of reasoning is the idea of _views_ of threads. In weak memory models, threads _observe_ global variables to have certain values (namely the values of write actions); a thread might observe several different values at a time and different threads might have different such observations. This differs from sequential consistency in which all threads have the same observation and can only see one value at a time. We say that a thread is _view maximal_, \(vmax(t,a)\) (on an action \(a\) operating on a variable \(x\in\mathsf{Var}_{\mathsf{G}}\)), if it has the "most up-to-date" view on this variable. While non view maximal threads might be able to read older values of \(x\), thread \(t\) reads the most up-to-date value.
Example 3: As an example, consider the WRC program after the execution of line 1 (\(x:=1\)). In SC, all threads observe \(x\) to be 1 (only). In TSO, in which written values are first placed in thread-local store buffers before being flushed to main memory, there is a state in which thread 1 observes \(x\) to be 1 while threads 2 and 3 still see \(x\) to be 0. In such a state, we, e.g., have \(\mathsf{dis}(T(2,Rd_{|x}[1]))\). In C11, there is even a state in which threads 2 and 3 can see \(x=1\) and \(x=0\) _at the same time_. In all these models, we have \(vmax(1,\mathit{rd}(x,\cdot,\cdot))\) (thread 1 is view maximal on \(x\)) in that state.
A specific memory model will give rise to some concrete definition of \(vmax\). For the axiomatisation it is only important to guarantee that memory model internal steps preserve view maximality in the sense of view-preserving simulations.
Definition 1: For a transition system \(TS=(\mathsf{Act},\Sigma,\mathit{I},\mathit{T})\), a _view-preserving simulation_, denoted \(\beta\), is the weakest relation \(R\) satisfying for all threads \(t\in\mathsf{Tid}\) and all actions \(a\in\mathsf{Act}\)
\[R\mathbin{;}T(t,a)\subseteq T(t,a)\mathbin{;}R\quad\text{(semi-commutation)}\]
\[vmax(t,a)\subseteq\mathsf{wlp}(R,vmax(t,a))\quad\text{(view maximality)}\]
A view-preserving simulation keeps view maximality of threads and semi-commutes with the transition relation.
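Both conditions of Definition 1 are directly checkable on finite relations. A minimal sketch of the semi-commutation test (the helper names are ours):

```python
def compose(R, S):
    """Relational composition R ; S = {(a, c) | exists b with (a,b) in R, (b,c) in S}."""
    return {(a, c) for (a, b) in R for (b2, c) in S if b == b2}

def semi_commutes(R, T):
    """First condition of Definition 1: R ; T(t, a) is a subset of T(t, a) ; R."""
    return compose(R, T) <= compose(T, R)

# The identity relation (a view-preserving simulation for SC, cf. Example 4)
# trivially semi-commutes with any transition relation.
states = {0, 1, 2}
identity = {(s, s) for s in states}
T = {(0, 1), (1, 2)}
assert semi_commutes(identity, T)
```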
Figure 3: Core axioms
Example 4: A view-preserving simulation for SC is the identity relation. For TSO it is the flushing of contents of store buffers to main memory. For C11 in which all write events to the same variable \(x\) are ordered in some _modification order_, it is the advancement of a thread's observation on \(x\) (a write to \(x\) of a value) to another write which occurs later in modification order.
The concept of views is inherent to the axiomatic reasoning and hence is also employed for property specification. As threads might observe more than one value for a variable, the ordinary first-order logic assertions on program variables of Hoare-logic [15] need to be replaced by _view-based_ assertions.
Definition 2: For a thread \(t\), a variable \(x\in\mathsf{Var}_{\mathsf{G}}\) and values \(u,v\in\mathsf{Val}\) we define
\[\begin{array}{rcll}
[x\not\approx v]_{t}&\mathrel{\widehat{=}}&\mathsf{dis}(T(t,Rd_{|x}[v]))&\text{(Impossible value)}\\{}
[x\equiv v]_{t}&\mathrel{\widehat{=}}&\bigcap_{u\neq v}[x\not\approx u]_{t}&\text{(Definite value)}\\
x_{\uparrow t}&\mathrel{\widehat{=}}&\bigcap_{a\in\mathsf{Act}_{|x}}vmax(t,a)&\text{(Maximal view)}\\{}
[x=v]_{t}&\mathrel{\widehat{=}}&[x\equiv v]_{t}\cap x_{\uparrow t}&\text{(Maximal value)}\\
\langle y=u\rangle^{\mathsf{S}}[x=v]_{t}&\mathrel{\widehat{=}}&\mathsf{wlp}(T(t,\mathit{rd}^{\mathsf{RS}}(y,\cdot,u)),[x=v]_{t})&\text{(Synced conditional observation)}\\
\langle x=v\rangle[x=v]_{t}&\mathrel{\widehat{=}}&\mathsf{wlp}(T(t,\mathit{rd}(x,\cdot,v)),[x=v]_{t})&\text{(Conditional observation)}
\end{array}\]
Figure 4: Axioms on shared variables
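Continuing the finite-state sketch, the derived assertions of Definition 2 reduce to set operations over \(\mathsf{wlp}\) and \(\mathsf{dis}\). The encoding below is illustrative only; the dictionaries `T_read` and `vmax_sets` are assumed precomputations of our own, not part of the formalism:

```python
from functools import reduce

def dis(R, states):
    """dis(R): states with no R-successor (as in the earlier sketch)."""
    return {s for s in states if not any(u == s for (u, _) in R)}

def impossible(t, x, v, T_read, states):
    """[x not~ v]_t = dis(T(t, Rd_{|x}[v]))."""
    return dis(T_read[(t, x, v)], states)

def definite(t, x, v, T_read, states, Vals):
    """[x == v]_t: intersection over all u != v of [x not~ u]_t."""
    return reduce(set.intersection,
                  (impossible(t, x, u, T_read, states) for u in Vals if u != v),
                  set(states))

def maximal_value(t, x, v, T_read, states, Vals, vmax_sets):
    """[x = v]_t = [x == v]_t intersected with x_{up t}, precomputed per (t, x)."""
    return definite(t, x, v, T_read, states, Vals) & vmax_sets[(t, x)]
```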
Example 5: Consider the state of WRC after executing lines 1, 2 and 3 (in this order). In SC, we then have \([x=1]_{t}\) for all threads \(t\) (same for \(y\)). In TSO (when store buffer contents have not been flushed yet), we, e.g., have \([x=1]_{1}\), \([y=1]_{2}\) and \([x\not\approx 1]_{3}\) (thread 3 cannot read \(x\) to be 1). In C11, we might have \([y\approx 0]_{3}\) and \([y\approx 1]_{3}\) (thread 3 can read both 0 and 1). Moreover, the following synced conditional observation is valid in all three memory models: \(\langle y=1\rangle^{\mathsf{S}}[x=1]_{3}\) (by a synchronized read of \(y\) to be 1, thread 3 becomes view maximal on \(x\) and definitely observes the value 1 for \(x\)).
We let \(\mathcal{G}\) be the set of (all logical combinations of) such _global_ assertions. In our proof outlines (like in the one of WRC) we also allow for normal Hoare-like assertions on local registers (e.g. \((r_{1}=1)\in\mathit{BExp}\)), and define the logical combinations of such _local_ (\(\mathcal{L}\)) assertions and the global assertions to be the set \(\mathcal{A}\) of all assertions.
Assertions define sets of states. Of particular interest are \(\beta\)-stable assertions.
Definition 3: Any predicate \(P\in 2^{\Sigma}\) is \(\beta\)-stable iff \(P\subseteq\mathsf{wlp}(\beta,P)\).
All assertions in \(\mathcal{G}\) are \(\beta\)-stable (see [12]). The axioms furthermore make use of an _interference relation_\(\mathit{interf}\in\mathsf{Tid}\times\mathsf{Act}\to 2^{\Sigma\times\Sigma}\) which (together with \(\beta\)) provides an _overapproximation_ of the transition relation \(\mathit{T}(t,a)\) in order to abstract from details of the memory model and to regain standard properties of reasoning (like writes and reads on different variables commuting). Figure 3 gives all core axioms; Figure 4 gives axioms concerning read and write actions on shared variables.
We only briefly explain the axioms; an example application of the axioms for reasoning about WRC is given below. Axiom **C1** states that initially all threads are view maximal w.r.t. all actions. Axiom **C2** describes the independence of actions w.r.t. thread identifiers (where additional \(\beta\) steps are required). Axiom **C3** states that _interf_ together with \(\beta\) _over-approximates_ the behaviour of an action. Axiom **C4** states that the interference relation preserves every view-maximality property of the thread performing the interference (of the action).
Axiom **SV1** is a weakening of the commutation property present in SC. **SV2** states that a view-maximality property of any thread is stable under actions on any other variable. Axioms **RW1** and **RW2** capture semi-commutativity properties for writes and reads, respectively, and are analogous to **SV1**. Axiom **RW3** states that view-maximality on a variable is preserved by reading the variable. Axiom **RW4** states that it is always possible to read some value of a variable, and **RW5** states that a thread \(t\) writing some value can afterwards read it. **RW6** states that whenever \(t\) is view maximal on actions over variable \(x\), then \(t\) has a definite value assertion over _some_ value for \(x\) (i.e., can only read one value for \(x\)). Axiom **RW7** considers a situation in which thread \(t\) is \(vmax\) on a variable \(x\) but \(t^{\prime}\) cannot read a specific value for this variable. We then obtain view-maximality of \(t^{\prime}\) on \(x\) after \(t\) has performed the write \(a_{w}\) and \(t^{\prime}\) has read this write's value.
Finally, the axiom set contains one specific axiom for _fences_ and one for _message passing_. Fence instructions are employed in weak memory models to
make programs behave more like SC. The fence axiom given below states this by saying that a fence in a thread being view maximal on some action \(a\) makes all other threads view maximal on \(a\) as well.
**FNC**: \(\forall\,a\in\mathsf{Act}\), \(t,t^{\prime}\in\mathsf{Tid}\): \(vmax(t,a)\subseteq\mathsf{wlp}(\,T(t,\mathit{fence}),vmax(t^{\prime},a))\).
**MP**: For \(\mathit{a}_{w},\mathit{a}_{r},b\in\mathsf{Act}\) and \(t,t^{\prime}\in\mathsf{Tid}\) such that \((\mathit{a}_{w},\mathit{a}_{r})\in\mathit{sync}\), \(\mathit{var}(\mathit{a}_{w})=\mathit{var}(\mathit{a}_{r})\), \(\mathit{wrval}(\mathit{a}_{w})=\mathit{rdval}(\mathit{a}_{r})\), \(\mathit{var}(\mathit{b})\neq\mathit{var}(\mathit{a}_{w})\), and \(t\neq t^{\prime}\), we have
\[vmax(t,b)\cap\mathsf{wlp}(\,T(t^{\prime},\mathit{a}_{r}),vmax(t^{ \prime},b))\] \[\subseteq\mathsf{wlp}(\,T(t,\mathit{a}_{w}),\mathsf{wlp}(\,T(t^{ \prime},\mathit{a}_{r}),vmax(t^{\prime},b))).\]
The message passing axiom **MP** describes the _passing of knowledge on variable values_ from one thread to another upon _synchronization_. Synchronization is incorporated here by requiring \((\mathit{a}_{w},\mathit{a}_{r})\in\mathit{sync}\) which is achieved when the write has a WS and the read an RS annotation. More specifically, it describes a situation where a thread \(t\) is maximal on some action \(b\) (\(vmax(t,b)\)) and thread \(t^{\prime}\) upon executing action \(\mathit{a}_{r}\) would become view maximal on \(b\) as well. Then, writing the value to be read (i.e., \(\mathit{T}(\mathit{t},\mathit{a}_{w})\)) followed by reading this value (\(\mathit{T}(\mathit{t}^{\prime},\mathit{a}_{r})\)) makes thread \(t^{\prime}\) view maximal on \(b\).
As a first result, we restate two lemmas stating the stability of global assertions under fence and read actions.
Lemma 1 ([3]): _Assume the axioms **C3**, **SV1** and **SV2** hold. For all \(P\in\mathcal{G}\) and threads \(t\), \(P\subseteq\mathsf{wlp}(\,T(t,\mathit{fence}),P)\)._
Lemma 2 ([12]): _Assume the axioms **C3**, **SV1**, **SV2**, **RW2** and **RW3** hold. For all \(P\in\mathcal{G}\), threads \(t\) and \(\mathit{a}_{r}\in Rd\), \(P\subseteq\mathsf{wlp}(\,T(t,\mathit{a}_{r}),P)\)._
Note that - contrary to [3, 12] - we name the axioms required for the proof in the lemmata. This is of importance for dealing with memory models which only fulfill part of the axioms (so that we can see whether a generic proof is transferable to such a memory model).
### Reasoning Example on Axiom Level
Next, we employ the axioms for showing one step in the correctness proof of WRC. Note that the proof of WRC in the generic framework has not appeared before. In general, such proofs involve proof steps of the form
\[P\subseteq\mathsf{wlp}(\,T(t,a),\mathit{Q})\]
for actions \(a\) belonging to program instructions \(\mathit{com}_{t}\), where \(P\in\mathcal{A}\) is the pre-assertion before and \(\mathit{Q}\in\mathcal{A}\) the post-assertion after the instruction. We also write these as Hoare-triples
\[\{P\}\ \mathit{com}_{t}\ \{\mathit{Q}\}\.\]
Such steps need to be performed to show local and global correctness (as in Owicki-Gries' approach [24]).
**Definition 4**.: _A thread \(t\) is locally correct in a proof outline if \(\{P\}\mbox{com}_{t}\{Q\}\) holds for every program command com in \(t\) with pre-assertion \(P\) and post-assertion \(Q\)._
_A proof outline is globally correct (interference-free) if for every pair of threads \(t,t^{\prime}\), \(\{R\cap P\}\mbox{com}_{t^{\prime}}\{R\}\) holds for every assertion \(R\) in the proof outline of \(t\) and command com with pre-assertion \(P\) in thread \(t^{\prime}\)._
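Definition 4 suggests a mechanical check. The sketch below is our own illustration; it assumes a proof outline given as a mapping from thread ids to lists of (pre, R, post) triples, with each command's transition relation R explicit and finite:

```python
def wlp(R, P, states):  # as in the earlier sketch
    return {s for s in states if all(t in P for (u, t) in R if u == s)}

def holds(P, R, Q, states):
    """Hoare triple {P} R {Q}: P is a subset of wlp(R, Q)."""
    return P <= wlp(R, Q, states)

def locally_correct(outline, states):
    """Every command satisfies its own pre/post pair."""
    return all(holds(P, R, Q, states)
               for steps in outline.values() for (P, R, Q) in steps)

def interference_free(outline, states):
    """For every assertion A of thread t and every command of t' != t with
    pre-assertion P2: check {A & P2} com {A} (assumes non-empty threads)."""
    for t, steps in outline.items():
        assertions = [P for (P, _, _) in steps] + [steps[-1][2]]
        for t2, steps2 in outline.items():
            if t2 == t:
                continue
            if not all(holds(A & P2, R2, A, states)
                       for (P2, R2, _) in steps2 for A in assertions):
                return False
    return True
```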
We exemplify one such proof step for the proof outline in Fig. 1, which is part of the local correctness of thread 3.
\[\{\,r_{1}\neq 1\cup\langle y=1\rangle^{\mathsf{S}}[x=1]_{3}\}\ r_{2}:=_{3}^{ \mathsf{RS}}y\ \{\,r_{1}\neq 1\cup r_{2}\neq 1\cup[x=1]_{3}\}\]
For this we have to prove for every \(\mathit{v}\in\mathsf{Val}\)
\[r_{1}\neq 1\cup\langle y=1\rangle^{\mathsf{S}}[x=1]_{3}\subseteq\mathsf{wlp}( \mathit{T}(3,\mathit{rd}^{\mathsf{RS}}(y,r_{2},v)),r_{1}\neq 1\cup r_{2}\neq 1 \cup[x=1]_{3})\]
Because of the disjunctivity of \(\mathsf{wlp}\) (see Fig. 2), we can divide the proof into two parts
1. \(r_{1}\neq 1\subseteq\mathsf{wlp}(\mathit{T}(3,\mathit{rd}^{\mathsf{RS}}(y,r_{2}, v)),r_{1}\neq 1)\)
2. \(\langle y=1\rangle^{\mathsf{S}}[x=1]_{3}\subseteq\mathsf{wlp}(\mathit{T}(3, \mathit{rd}^{\mathsf{RS}}(y,r_{2},v)),r_{2}\neq 1\cup[x=1]_{3})\)
For reasoning about local registers, we employ a version of the standard technique of backward substitution from the rule of assignment of Hoare-logic, i.e.,

\[e[r:=v]\subseteq\mathsf{wlp}(T(t,\mathit{rd}^{\mathsf{RS}}(x,r,v)),e)\]

where \(e\in\mathit{Exp}\) is an expression on local variables only and \([r:=v]\) means replacing all occurrences of \(r\) by value \(v\). For (i) we then have
\[(r_{1}\neq 1)=(r_{1}\neq 1[r_{2}:=\mathit{v}])\subseteq\mathsf{wlp}(\mathit{T}( 3,\mathit{rd}^{\mathsf{RS}}(y,r_{2},v)),r_{1}\neq 1)\]
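The substitution \(e[r:=v]\) itself is purely syntactic; a minimal sketch over a toy expression representation (ours, not the paper's) is:

```python
def substitute(e, r, v):
    """Backward substitution e[r := v]: expressions are values (int),
    registers (str), or ('op', e1, e2) tuples."""
    if isinstance(e, int):
        return e
    if isinstance(e, str):
        return v if e == r else e
    op, e1, e2 = e
    return (op, substitute(e1, r, v), substitute(e2, r, v))

# (r1 != 1)[r2 := v] leaves the assertion unchanged, as used in step (i).
assert substitute(("!=", "r1", 1), "r2", 5) == ("!=", "r1", 1)
```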
For (ii) we look at two cases. First, let \(v=1\). Using the monotonicity of \(\mathsf{wlp}\) we get
\[\langle y=1\rangle^{\mathsf{S}}[x=1]_{3} =\mathsf{wlp}(\mathit{T}(3,\mathit{rd}^{\mathsf{RS}}(y,r_{2},1) ),[x=1]_{3})\] \[\subseteq\mathsf{wlp}(\mathit{T}(3,\mathit{rd}^{\mathsf{RS}}(y,r_ {2},v)),r_{2}\neq 1\cup[x=1]_{3})\]
In the case \(\mathit{v}\neq 1\), we need Lemma 2 and therefore the axioms C3, SV1, SV2, RW2, and RW3 have to hold. Because of the disjunctivity of \(\mathsf{wlp}\) we get
\[\langle y=1\rangle^{\mathsf{S}}[x=1]_{3} \subseteq\Sigma\] \[\stackrel{{ v\neq 1}}{{=}}(r_{2}\neq 1[r_{2}:=\mathit{v}]) \cup[x=1]_{3}\] \[\subseteq\mathsf{wlp}(\mathit{T}(3,\mathit{rd}^{\mathsf{RS}}(y,r_ {2},v)),r_{2}\neq 1\cup[x=1]_{3})\]
Many steps of such correctness proofs are complex, time-consuming and repetitive. In the next section we summarize multiple such steps into _proof rules_ and thereby lift reasoning to the higher level of syntactic assertions, not employing weakest preconditions anymore.
## 4 Rules
In this section we explain our novel proof rules for the axiomatic reasoning. Remember that for a program command \(\mathit{com}_{t}\) in a thread \(t\), we prove \(\{P\}\ \mathit{com}_{t}\ \{Q\}\) for assertions \(P\), \(Q\in\mathcal{A}\) by showing
\[P\subseteq\mathsf{wlp}(\mathit{T}(t,a),\mathit{Q})\]
where \(a\) is the action in \(\mathit{com}_{t}\). Some interim results of those proofs can be generalised and lifted to the higher level of syntactic assertions. We formalise them in the form of rules which then can be used to directly prove the correctness of a proof outline without the need of weakest preconditions.
We start by giving general rules (Fig. 5) which hold regardless of the validity of axioms. Those rules are all in the original Hoare-logic form [15] and are here translated to our setting. For the rules True and False note that the assertions \(\{\mathit{true}\}\) and \(\{\mathit{false}\}\) describe the set of states \(\Sigma\) and the empty set, respectively. With that in mind, both rules follow directly from our definition of Hoare-triple. The intuitive idea of the Mono rule is that a Hoare-triple still holds if the pre-assertion becomes stronger or the post-assertion weaker. The first follows by definition and the second from the monotonicity of \(\mathsf{wlp}\) (see Fig. 2). Analogously, the rules Conj and Disj formalise the conjunctivity and disjunctivity properties of Figure 2. Hence we get the following theorem.
Theorem 1: _The general proof rules in Figure 5 are sound._
The proof of the theorem can be found in the appendix. Note that these rules can be used to combine different Hoare-triples from other rules.
Next we look at rules specific to a certain program command and start with fence actions. If we formalise the property given in Lemma 1, we get the first rule of Figure 6: Fence1. Note that with regard to showing global correctness, the rule implies the following lemma.
Lemma 3: _In every proof outline fence actions are globally correct for \(\beta\)-stable assertions, i.e., for every assertions \(\mathit{G}\in\mathcal{G}\) and \(P\in\mathcal{A}\): \(\{\mathit{G}\cap P\}\ \mathit{fnc}_{t}\ \{\mathit{G}\}\)._
Figure 5: General rules
The **FNC** axiom is formalised in Fence2, and if we additionally assume the axioms **C2** and **RW6**, we can pass not only view-maximality to a different thread but also the value that can be read. In the appendix we show the following theorem.
Theorem 2: _The fence proof rules in Figure 6 are sound._
For read actions (Fig. 7) we similarly formalise Lemma 2 in rule Read1 and get the following lemma.
Lemma 4: _In every proof outline read actions are globally correct for \(\beta\)-stable assertions, i.e., for every assertions \(\mathit{G}\in\mathcal{G}\) and \(\mathit{P}\in\mathcal{A}\colon\{\mathit{G}\cap\mathit{P}\}\ \mathit{r}:=_{t}\mathit{v}\ \{\mathit{G}\}\)._
Figure 6: Fence rules

Figure 7: Read rules (\(\mathit{reg}(P)\) being the local registers occurring in \(P\))

The rules Read2, Read3, ConRead1 and ConRead2 describe how we replace different global assertions (containing \(x\)) by local ones (containing \(r\)) after reading the value of \(x\) to \(r\). Here, Read2 says that if thread \(t\) cannot read \(v\) for \(x\), then after reading \(x\) to \(r\), \(r\) cannot be \(v\). Analogously, in Read3, where \(t\) cannot read a value different from \(v\) for \(x\) and is view maximal (which means that \(t\) can read the most up-to-date value for \(x\)), after the read \(r\) has to be equal to \(v\). If we have a conditional observation assertion \(\langle x=v\rangle[x=v]_{t}\) and read in the same thread from \(x\), then either we do not read \(v\) or \([x=v]_{t}\) holds afterwards (ConRead1). We get a similar rule for the synchronized read and the synced conditional observation (ConRead2). ReadReg tells us that a local assertion remains unchanged after a read to a register which is not included in the assertion. In LocRead we describe that an assertion will not change if we read a local expression to a register; in this case the register must not be included in the assertion. Note that by \(\mathit{reg}(P)\) we mean the set of registers in \(P\). Summarised, we get
Theorem 3: _The read proof rules in Figure 7 are sound._
which we also proved in the appendix.
Figure 8: Write rules

In Figure 8 we formalise rules for write actions. There we differentiate between global assertions about the variable written to and assertions about other variables. In both cases we need the **C3** axiom. This allows us to apply a handful of axioms that describe properties of _interf_, e.g., **SV1** and **SV2**. For different variables we can pass readability of a value with the axiom **SV1** (see rules Write1 and Write2). If we want to pass view-maximality (Write3), we need **SV2**. The rule Write4 combines Write2 and Write3. In the case where the assertion contains the same variable as the write action, we can use the axiom **C4** to pass view-maximality (Write5). If we additionally assume **RW5** and **RW6**, we can update the value thread \(t\) can read (Write6). If we write a new value to \(x\) (which means that before the write, \(t\) and \(t^{\prime}\) could not read \(v\)) in a view-maximal thread \(t\), then if \(t^{\prime}\) can read \(v\), it also has to be view-maximal. This behaviour is described in rule ConWrite1. We need to assume **RW7** to pass the conditional view-maximality to a different thread. The rule ConWrite2 describes message passing. If \(t^{\prime}\) can read \(v\) for \(x\) and \(t\) can read \(u\) for \(y\) and is view-maximal on \(y\), then if we write \(v\) to \(x\) in \(t\), \([y=u]_{t^{\prime}}\) only holds if we can read \(v\) for \(x\) in \(t^{\prime}\). This behaviour only differs from ConWrite1 by allowing different variables. Because of this, we cannot apply **RW7** and need **MP**. Hence this rule only holds for synchronised writes. The last rule of Figure 8 (WriteReg) formalises the fact that a write will not change the value of a register. In the appendix, we prove
Theorem 4: _The write proof rules in Figure 8 are sound._
With all these rules being sound, we can now prove correctness much more easily and with much shorter proofs. Moreover, we then know exactly which axioms we need for a certain proof outline to be valid.
## 5 Correctness Proof of WRC via Proof Rules
In this section, we finally apply our rules to the correctness proof of the WRC example in Figure 1.
Lemma 5: _The proof outline in Figure 1 is valid under the axioms **C2**, **C3**, **C4**, **SV1**, **SV2**, **RW2**, **RW3**, **RW5**, **RW6**, **RW7** and **MP**._
This means that the proof outline holds for every memory model that satisfies the axioms named.
To prove this lemma, we have to check every Hoare-triple that we need for local and global correctness (see Def. 4). Starting with local correctness, Table 1 gives an overview of every Hoare-triple we need to prove. There we see which proofs require which rules and thus which axioms. For better readability, we have omitted the use of the rules Mono, Conj and Disj. One of the Hoare-triples is
\[\{r_{1}\neq 1\cup\langle y=1\rangle^{\mathsf{S}}[x=1]_{3}\}\ r_{2}:=_{3}^{\mathsf{RS}}y\ \{r_{1}\neq 1\cup r_{2}\neq 1\cup[x=1]_{3}\}\]
which we already proved at the end of Section 3. With our novel proof rules at hand, we can show its validity in far fewer steps. As written in Table 1, we need the rules ReadReg and ConRead2. The first one tells us
\[\{r_{1}\neq 1\}\ r_{2}:=_{3}^{\mathsf{RS}}y\ \{r_{1}\neq 1\}\]
and with ConRead2 we get
\[\{\langle y=1\rangle^{\mathsf{S}}[x=1]_{3}\}\ r_{2}:=_{3}^{\mathsf{RS}}y\ \{r_{2}\neq 1\cup[x=1]_{3}\}\]
| \(com_{t}\) | Hoare-Triples | Proof Rules | Axioms |
|---|---|---|---|
| \(x:=_{1}1\) | \(\{P_{1,1}\}\ x:=_{1}1\ \{P_{1,2}\}\) | True | |
| \(r_{1}:=_{2}x\) | \(\{P_{2,1}\}\ r_{1}:=_{2}x\ \{P_{2,2}\}\) | Read1, ReadReg, Read3, ConRead1 | C3, SV1, SV2, RW2, RW3 |
| \(y:=_{2}^{\mathsf{WS}}1\) | \(\{P_{2,2}\}\ y:=_{2}1\ \{P_{2,3}\}\) | True | |
| \(r_{2}:=_{3}^{\mathsf{RS}}y\) | \(\{P_{3,1}\}\ r_{2}:=_{3}y\ \{P_{3,2}\}\) | ReadReg, ConRead2 | |
| \(r_{3}:=_{3}x\) | \(\{P_{3,2}\}\ r_{3}:=_{3}x\ \{P_{3,3}\}\) | ReadReg, Read3 | |

Table 1: Rules employed for showing local correctness of the WRC proof outline
| \(com_{t}\) | Hoare-Triples | Proof Rules | Axioms |
|---|---|---|---|
| \(x:=_{1}1\) | \(\{P_{1,1}\cap P_{2,1}\}\ x:=_{1}1\ \{P_{2,1}\}\) | Write1, WriteReg, ConWrite1 | C2, C3, C4, SV1, SV2, RW2, RW3, RW5, RW6, RW7 |
| | \(\{P_{1,1}\cap P_{2,2}\}\ x:=_{1}1\ \{P_{2,2}\}\) | Write1, WriteReg | C3, SV1 |
| | \(\{P_{1,1}\cap P_{2,3}\}\ x:=_{1}1\ \{P_{2,3}\}\) | True | |
| | \(\{P_{1,1}\cap P_{3,1}\}\ x:=_{1}1\ \{P_{3,1}\}\) | WriteReg | |
| | \(\{P_{1,1}\cap P_{3,2}\}\ x:=_{1}1\ \{P_{3,2}\}\) | WriteReg | |
| | \(\{P_{1,1}\cap P_{3,3}\}\ x:=_{1}1\ \{P_{3,3}\}\) | WriteReg | |
| \(r_{1}:=_{2}x\) | \(\{P_{2,1}\cap P_{1,1}\}\ r_{1}:=_{2}x\ \{P_{1,1}\}\) | Read1, Read3 | C3, SV1, SV2, RW2, RW3 |
| | \(\{P_{2,1}\cap P_{1,2}\}\ r_{1}:=_{2}x\ \{P_{1,2}\}\) | True | |
| | \(\{P_{2,1}\cap P_{3,1}\}\ r_{1}:=_{2}x\ \{P_{3,1}\}\) | Read1 | C3, SV1, SV2, RW2, RW3 |
| | \(\{P_{2,1}\cap P_{3,2}\}\ r_{1}:=_{2}x\ \{P_{3,2}\}\) | ReadReg | |
| | \(\{P_{2,1}\cap P_{3,3}\}\ r_{1}:=_{2}x\ \{P_{3,3}\}\) | ReadReg | |
| \(y:=_{2}^{\mathsf{WS}}1\) | \(\{P_{2,2}\cap P_{1,1}\}\ y:=_{2}1\ \{P_{1,1}\}\) | Write1, Write4, WriteReg | C3, SV1, SV2 |
| | \(\{P_{2,2}\cap P_{1,2}\}\ y:=_{2}1\ \{P_{1,2}\}\) | True | |
| | \(\{P_{2,2}\cap P_{3,1}\}\ y:=_{2}1\ \{P_{3,1}\}\) | WriteReg, ConWrite2 | C2, C3, SV1, SV2, RW2, RW3, RW6, MP |
| | \(\{P_{2,2}\cap P_{3,2}\}\ y:=_{2}1\ \{P_{3,2}\}\) | WriteReg | |
| | \(\{P_{2,2}\cap P_{3,3}\}\ y:=_{2}1\ \{P_{3,3}\}\) | WriteReg | |
| \(r_{2}:=_{3}^{\mathsf{RS}}y\) | \(\{P_{3,1}\cap P_{1,1}\}\ r_{2}:=_{3}y\ \{P_{1,1}\}\) | Read1 | C3, SV1, SV2, RW2, RW3 |
| | \(\{P_{3,1}\cap P_{1,2}\}\ r_{2}:=_{3}y\ \{P_{1,2}\}\) | True | |
| | \(\{P_{3,1}\cap P_{2,1}\}\ r_{2}:=_{3}y\ \{P_{2,1}\}\) | Read1 | C3, SV1, SV2, RW2, RW3 |
| | \(\{P_{3,1}\cap P_{2,2}\}\ r_{2}:=_{3}y\ \{P_{2,2}\}\) | Read1, ReadReg | C3, SV1, SV2, RW2, RW3 |
| | \(\{P_{3,1}\cap P_{2,3}\}\ r_{2}:=_{3}y\ \{P_{2,3}\}\) | True | |
| \(r_{3}:=_{3}x\) | \(\{P_{3,2}\cap P_{1,1}\}\ r_{3}:=_{3}x\ \{P_{1,1}\}\) | Read1 | C3, SV1, SV2, RW2, RW3 |
| | \(\{P_{3,2}\cap P_{1,2}\}\ r_{3}:=_{3}x\ \{P_{1,2}\}\) | True | |
| | \(\{P_{3,2}\cap P_{2,1}\}\ r_{3}:=_{3}x\ \{P_{2,1}\}\) | Read1 | C3, SV1, SV2, RW2, RW3 |
| | \(\{P_{3,2}\cap P_{2,2}\}\ r_{3}:=_{3}x\ \{P_{2,2}\}\) | ReadReg | |
| | \(\{P_{3,2}\cap P_{2,3}\}\ r_{3}:=_{3}x\ \{P_{2,3}\}\) | True | |

Table 2: Rules employed for showing global correctness of the WRC proof outline
Applying the Disj-rule we are done. Analogously we can now prove every Hoare-triple. In this way we need significantly fewer steps to prove one triple than we did in Section 3. Hence the entire correctness proof (which contains the proofs of 31 Hoare-triples for Figure 1) becomes easier and shorter, simply by applying the abstract proof rules. An overview of all the rules used for global correctness is given in Table 2. Due to the non-interference condition in Owicki-Gries style proofs, there are still a number of proof steps to be done, however significantly fewer than on the level of axioms. The number of proof steps could furthermore be reduced by employing a compositional proof technique like rely-guarantee reasoning [28]. For this, the same proof rules are sound.
## 6 Related Work
There are a number of approaches which propose novel program logics for weak memory models. The view-based logic we employ here first appeared in [13, 9] for C11 RAR and was then generalized to the generic reasoning approach in [12]. The work in [4] uses (and extends) view-based assertions for persistent memory, but does not develop a memory-model independent technique. Similarly, Lahav et al. [18] propose a new program logic for the strong-release-acquire model of [17] and employ rely-guarantee reasoning. While the rely-guarantee framework is independent of a concrete memory model, the program logic is not.
Besides that there are verification techniques which are applicable to several memory models. Alglave and Cousot [2] present an invariance proof method which shows that a given program is correct w.r.t. a given memory model and an invariant specification of that program. It does so by first proving that a so-called communication specification is sufficient for the program's invariant. If a memory model guarantees the communication, the program is correct under that model. Ponce de Leon et al. [22] and Gavrilenko et al. [14] present generic bounded model checkers which translate a given program under a given memory model into an SMT formula. They are generic because their input contains not only the program but also the memory model, formalised in CAT as a set of relations. Kokologiannakis et al. [16] developed a generic model checker that transforms a given program into an execution graph to check its correctness under a given memory model with an axiomatic semantics. Colvin [6] proposes a special sequential composition operator which mimics the reordering behaviour of many weak memory models. Coughlin et al. [7, 8] discuss rely-guarantee reasoning for weak memory models in general and introduce a specific new verification condition called reordering-interference-freedom. This technique can be instantiated to memory models with a reordering semantics.
Our approach discussed here lifts the generic reasoning technique of [12] to the syntactic level, allowing to construct proof outlines operating on the level of program instructions and view-based assertions. Thereby, we avoid low-level reasoning about weakest preconditions while still preserving genericity.
## 7 Conclusion
In this paper, we have proposed high-level proof rules lifting the generic reasoning principle of [12] to a more abstract level. Similar to standard Hoare-logic, our proof rules allow reasoning on the syntactic level of assertions, departing from the semantic level of weakest preconditions. This significantly simplifies reasoning, and moreover allows us to directly see which axioms have been used in a proof. We have exemplified our proof technique by providing a generic correctness proof for the WRC litmus test. By the results of [12] (showing that SC, TSO and C11 RAR instantiate all axioms), this proof is valid for WRC running on the sequentially consistent, TSO and C11 RAR memory models.
|
2301.02718 | CGM$^2$ $+$ CASBaH: The Mass Dependence of H~I Ly$\alpha$-Galaxy
Clustering and the Extent of the CGM | We combine datasets from the CGM$^{2}$ and CASBaH surveys to model a
transition point, $R_{\rm cross}$, between circumgalactic and intergalactic
media (CGM and IGM, respectively). In total, our data consist of 7244 galaxies
at z < 0.5 with precisely measured spectroscopic redshifts, all having impact
parameters of 0.01 - 20 comoving Mpc from 28 QSO sightlines with
high-resolution UV spectra that cover H I Ly$\alpha$. Our best-fitting model is
an exclusionary two-component model that combines a 3D absorber-galaxy cross
correlation function with a simple Gaussian profile at inner radii to represent
the CGM. By design, this model gives rise to a determination of $R_{\rm cross}$
as a function of galaxy stellar mass, which can be interpreted as the boundary
between the CGM and IGM. For galaxies with $10^8 \leq M_{\star}/M_{\odot} \leq
10^{10.5}$, we find that $R_{\rm cross}(M_{\star}) \approx 2 \pm 0.6 R_{\rm
vir}$. Additionally, we find excellent agreement between $R_{\rm
cross}(M_{\star})$ and the theoretically-determined splashback radius for
galaxies in this mass range. Overall, our results favor models of galaxy
evolution at z < 0.5 that distribute $T \approx 10^{4}$K gas to distances
beyond the virial radius. | Matthew C. Wilde, Kirill Tchernyshyov, Jessica K. Werk, Todd M. Tripp, Joseph N. Burchett, J. Xavier Prochaska, Nicolas Tejos, Nicolas Lehner, Rongmon Bordoloi, John M. O'Meara, Jason Tumlinson, J. Christopher Howk | 2023-01-06T21:22:40Z | http://arxiv.org/abs/2301.02718v1 | CGM\({}^{2}\) + CASBaH: The Mass Dependence of H I Ly\(\alpha\)-Galaxy Clustering and the Extent of the CGM
###### Abstract
We combine datasets from the CGM\({}^{2}\) and CASBaH surveys to model a transition point, \(R_{\rm cross}\), between circumgalactic and intergalactic media (CGM and IGM, respectively). In total, our data consist of 7244 galaxies at z \(<\) 0.5 with precisely measured spectroscopic redshifts, all having impact parameters of 0.01 \(-\) 20 comoving Mpc from 28 QSO sightlines with high-resolution UV spectra that cover H I Ly\(\alpha\). Our best-fitting model is an exclusionary two-component model that combines a 3D absorber-galaxy cross correlation function with a simple Gaussian profile at inner radii to represent the CGM. By design, this model gives rise to a determination of \(R_{\rm cross}\) as a function of galaxy stellar mass, which can be interpreted as the boundary between the CGM and IGM. For galaxies with \(10^{8}\leq M_{\star}/M_{\odot}\leq 10^{10.5}\), we find that \(R_{\rm cross}(M_{\star})\approx 2\pm 0.6R_{\rm vir}\). Additionally, we find excellent agreement between \(R_{\rm cross}(M_{\star})\) and the theoretically-determined splashback radius for galaxies in this mass range. Overall, our results favor models of galaxy evolution at z \(<\) 0.5 that distribute \(T\approx 10^{4}\)K gas to distances beyond the virial radius.
## 1 Introduction
The formation and evolution of galaxies involves a complex interplay between gravitational collapse of gas from the intergalactic medium (IGM), galaxy mergers, and feedback due to stellar evolution and active galactic nuclei (AGN) that drive gaseous outflows and change the ionization state of the galaxies' gaseous halos. Together, these processes drive the "cosmic baryon cycle" which takes place largely in the region of a galaxy referred to as the circumgalactic medium (CGM). Indeed, understanding the CGM is critical for developing a complete theory of galaxy evolution, as highlighted by the recent decadal survey (National Academy of Sciences, 2021). In particular, the extent of the gaseous CGM relative to the extent of the dark matter halo is a subject of great interest for models that aim to reproduce the properties of gaseous halos.
The existence of the CGM, first predicted by Bahcall & Spitzer (1969), was initially revealed by detection of Mg ii and H i absorption at large projected distances (\(R_{\perp}>20\) kpc) from \(L*\) galaxies (Bergeron, 1986; Morris et al., 1993; Bergeron & Boisse, 1991; Lanzetta et al., 1995; Chen et al., 2005), and subsequently traced via higher-energy metal-line transitions such as Si iii, C iv and O vi that are observed to correlate with galaxies and their global properties (e.g. Tripp & Savage, 2000; Tripp et al., 2008; Prochaska et al., 2011; Tumlinson et al., 2011; Werk et al., 2013). Within 0.5 \(R_{\rm vir}\) of L \(\sim\) L* galaxies, the metal line incidence is found to be \(60-90\) % for a range of ionized metal species (Werk et al., 2013). Conversely, Berg et al. (2022) find an 80% chance of finding a massive galaxy nearby to any high-metallicity absorber. The CGM of \(M_{\star}>10^{8}\) \(M_{\odot}\) galaxies is now well-established to be metal-enriched (Liang and Chen, 2014; Bordoloi et al., 2014; Prochaska et al., 2017; Berg et al., 2022), and to extend to at least 1 \(R_{\rm vir}\), and very likely beyond it (Wakker and Savage, 2009; Burchett et al., 2015; Finn et al., 2016; Wilde et al., 2021; Borthakur, 2022).
Generally, hydrodynamical simulations of galaxy evolution, which exhibit complex interactions between gravitational collapse from the cosmological large scale structure and subsequent feedback from supernovae and AGN-driven winds that heat and enrich the CGM and IGM (EAGLE, Schaye et al., 2015; IllustrisTNG, Pillepich et al., 2018; SIMBA, Dave et al., 2019; and CAMELS, Villaescusa-Navarro et al., 2022), are consistent with the range of observations of the CGM in absorption. Yet these models still rely on simplistic implementations of the "sub-grid" physics in order to model entire galaxies (e.g. Ford et al., 2013; Hummels et al., 2013), and physical properties of the CGM are dependent on the simulation resolution (Hummels et al., 2019; Peeples et al., 2019). More sensitive observations of the CGM, including the ability to detect the diffuse gas in emission, are needed both to break degeneracies in these models, e.g., between heating and cooling mechanisms, and to develop a flexible parametric model of the CGM (Singh et al., 2021).
The two-point correlation function between H i absorption along QSO sightlines and galaxies has proven to be an essential tool to understand the connection of galaxies to the IGM (e.g. Morris et al., 1993; Chen et al., 2005; Ryan-Weber, 2006; Prochaska et al., 2011; Tejos et al., 2014; Prochaska et al., 2019). The primary advantages of leveraging the clustering of these two entities over one-to-one association analyses is that it provides results for large scales (1-10 Mpc) as well as the relatively smaller scales where the baryonic processes associated with the CGM play out, and the correlation function statistically characterizes absorber-galaxy relationships when multiple galaxies are close to the sightline and a one-to-one assignment is ambiguous. Since H i traces both enriched material from galaxies as well as primordial accretion from the IGM, observations of the CGM, IGM, and galaxies in the same volume are fundamental to both testing the predictions of galaxy evolution models and providing a means to differentiate between them (e.g. Fumagalli et al., 2011; Oppenheimer et al., 2012; Stinson et al., 2012; Ford et al., 2013; Hummels et al., 2013; Butsky et al., 2020; Singh et al., 2021).
Understanding the physical profile and size of the CGM sheds light on the non-linear processes of galaxy formation: on what spatial scale(s) do virialization, accretion, and feedback transform these galactic atmospheres? Astronomers have long used some version of the virial radius as an estimator for the size of galaxy halos, but this estimate is somewhat arbitrary and is based on the distribution of unobservable dark matter. By observing the radial gas profile around galaxies out to large scales, we can effectively map the gaseous halo, which in turn constrains the physics of galaxy-scale feedback processes. Observationally determining the galactic atmosphere's extent has additional implications for constraining galaxy evolution and assembly models. For example, the galaxy baryon and metal budgets require a scale to integrate the total mass (e.g. Peeples et al., 2014; Werk et al., 2014). Furthermore, the gaseous halo likely plays an important role in the quenching of dwarf satellite galaxies as they become stripped by ram-pressure in a low-density CGM (Putman et al., 2021), and it is useful to constrain where this occurs, i.e., the extent of the CGM, and how this depends on central galaxy mass.
The presence of H i absorption beyond the virial radius is now widely accepted for a range of galaxy stellar masses (e.g. Prochaska et al., 2011; Tejos et al., 2012, 2014; Wilde et al., 2021; Bouma et al., 2021; Borthakur, 2022). In Wilde et al. (2021) (Paper I) we found an empirical relation between galaxy stellar mass and the extent of the CGM as indicated by H i covering fractions. For galaxies with stellar masses \(10^{8}<M_{\star}/M_{\odot}<10^{11.5}\), we found that the CGM extends to two times the virial radius. In this paper, we focus on the functional forms of the mass dependence of the H i-traced CGM using a power-law model similar to the 2-halo correlation function. We also investigate other two-component models that differentiate the CGM from the IGM. We combine the CGM\({}^{2}\) Survey, which focuses on sightlines at low galaxy impact parameters (\(<1\) Mpc), with the _COS Absorption Survey of Baryon Harbors_ (CASBaH) that probes larger spatial scales (\(<20\) Mpc). In doing so, we greatly increase the absorber-galaxy sample from 543 spectroscopically-confirmed absorber-galaxy pairs to 7244 pairs spanning \(0.003<z<0.48\). Our goal is to provide the most reliable constraints to date on the spatial extent of the CGM as traced by H i absorption.
The paper is structured as follows: In Section 2, we briefly review each of the galaxy-absorber surveys and discuss their combined properties. In Section 3, we introduce two models of the H i-galaxy correlation functions and cover our main results in Section 4. We compare our results with simulations and previous results and discuss their implications for galaxy evolution models in Section 5. Finally, we summarize our results in Section 6.
## 2 Data - Combining CGM\({}^{2}\) and CASBaH
Both surveys feature far-ultraviolet spectroscopy of QSOs with _HST_, using both the _Cosmic Origins Spectrograph_ (COS, Green et al., 2012) and the _Space Telescope Imaging Spectrograph_ (STIS, Woodgate et al., 1998), and deep, ground-based optical spectroscopy of foreground galaxies in the QSO fields. CASBaH is well suited to the study of the interface between the CGM and the IGM, at scales \(\gtrsim\) 1 Mpc. CGM2 provides a relatively more complete mapping of the inner CGM at scales \(\lesssim\) 1 Mpc. By combining CGM2 and CASBaH data, we leverage the strengths of each survey, as described below. Figure 1 shows the distributions of galaxy stellar masses and impact parameters versus redshift from both surveys out to z = 0.5. Together, the surveys allow us to probe the CGM as it transitions into the IGM for a large sample of galaxies.
### CGM\({}^{2}\)
The CGM2 survey, first presented in Wilde et al. (2021), includes precise spectroscopic redshifts and bulk galaxy properties (e.g. stellar masses, M\({}_{*}\), and star formation rates, SFR) from a combination of Gemini GMOS spectra and deep, broadband photometry for \(\sim\)1000 galaxies in the foreground of 22 QSOs, each with S/N \(\approx\)10 HST/COS G130M+G160M spectra. By matching galaxy and absorber redshifts in \(\pm\)500 km s\({}^{-1}\) windows, the CGM2 survey is ultimately a large collection of measurements pertaining to the CGM of z \(<\) 1 galaxies over a wide range of stellar masses, 10\({}^{8}\)\(\lesssim M_{\star}/M_{\odot}\lesssim 10^{11.5}\). The data acquisition and analysis are explained in detail in Wilde et al. (2021). Here we present a brief overview of the survey data relevant to the present analysis.
Footnote 2: [https://github.com/mattewilde/vetrr](https://github.com/mattewilde/vetrr)
The CGM2 galaxy spectra were obtained using Gemini-GMOS spectrographs on the twin Gemini North and South telescopes (Hook et al., 2004; Gimeno et al., 2016). Galaxy redshifts were inferred from the template-fitting code Redrock1 (v0.14) and manually inspected with VETRR2. The typical statistical uncertainty of our redshifts is \(\sigma_{z}\sim\) 50-100 km s\({}^{-1}\) (\(z\simeq\) 0.00016-0.00030). Photometry of the CGM2 galaxy catalog was obtained from the Gemini-GMOS pre-imaging in \(g\) and \(i\) bands as well as all available bands from DESI Legacy Imaging Surveys Data Release 8 (DR8) (Dey et al., 2019), WISE (Cutri et al., 2013), Pan-STARRS Data Release 2 (Chambers et al., 2016), and SDSS DR14 (Abolfathi et al., 2018).
Figure 1: **Top**: Distribution of the combined CGM2 (blue dots) and CASBaH (purple dots) data sets in both logarithmic impact parameter and redshift. The data are roughly uniform in redshift space, but we can see the relative contributions of the data sets in impact parameter space; CGM2 is highly concentrated at lower impact parameters while CASBaH explores much greater impact parameters. **Bottom**: Galaxy stellar mass distribution as a function of redshift for the two data sets.
The 22 QSOs included in the CGM\({}^{2}\) survey have _HST_/COS spectra selected from the COS-Halos (GO11598, GO13033; Tumlinson et al., 2013) and COS-Dwarfs (GO12248; Bordoloi et al., 2014) surveys. In general, the CGM\({}^{2}\) QSO targets have \(z_{\rm QSO}>0.6\) and available HST imaging, which permits detailed analysis of absorption-hosting galaxies with \(z<0.5\). All COS spectra include both the G130M and G160M gratings, and have a S/N \(\simeq 8-12\) per resolution element (FWHM \(\simeq\) 16-18 km s\({}^{-1}\)) or better over 1150-1800 A. The COS data and their reduction are presented in detail in Tumlinson et al. (2013) and Bordoloi et al. (2014) and follows the same method used by Tripp et al. (2011), Meiring et al. (2011), Tumlinson et al. (2011) and Thom et al. (2012).
### CASBaH
The CASBaH program was designed to take advantage of the multitude of resonance transitions at rest-frame wavelengths \(<912\) A to probe the physical conditions, metallicity, and physics of the multiphase CGM. A wide variety of elements and ionization stages have resonance lines only at \(\lambda<912\) A (see, e.g., Verner et al., 1994), so observations of this wavelength range provide new diagnostics and precise constraints using banks of adjacent ions such as N i through N v, O i through O vi, and Ne ii through Ne viii (see Tripp et al., 2011, for examples of lines detected by CASBaH). The Ne viii 770.4, 780.3 A doublet has received particular attention as a probe of warm-hot gas at \(\approx 10^{5}-10^{6}\) K (e.g., Savage et al., 2005; Burchett et al., 2019; Wijers et al., 2020). In many contexts such as the Milky Way interstellar medium, these lines are inaccessible because they are blocked by the H i Lyman limit. CASBaH overcomes this limitation by observing QSO absorbers with sufficient redshift to bring the lines into the observable band of HST.
The motivation and design of the CASBaH program is summarized in section 1 of Haislmaier et al. (2021), and the CASBaH galaxy redshift survey is presented in Prochaska et al. (2019). Briefly, CASBaH obtained both _HST_/COS and _HST_/STIS spectra of nine QSOs at \(0.92<z_{\rm QSO}<1.48\), with two primary selection criteria. First, since some of the most important target lines (e.g., Ne viii) are weak, the QSOs were required to be UV-bright so that good signal-to-noise and sensitivity to weak lines would be attained. Second, the targets were required to have \(z_{\rm QSO}>0.9\) to provide a total redshift path that is sufficient to accumulate a statistically useful sample of absorbers of interest. No considerations were given to known foreground galaxies or absorbers, so the targets were not selected in a way that would favor particular types of foreground absorbers or galaxies, except that sightlines with known black Lyman limits at \(\lambda_{\rm ob}>1150\) A were excluded to avoid using HST time on sightlines that would not contribute useful pathlengths to the samples (see Burchett et al., 2019). The CASBaH UV spectra were reduced in the same way as the CGM\({}^{2}\) data.
The CASBaH galaxy-redshift survey (Prochaska et al., 2019) measured thousands of redshifts in the fields of seven of the CASBaH QSOs using the Keck DEIMOS and MMT Hectospec spectrographs, with typical redshift uncertainties of \(\approx 30\) km s\({}^{-1}\). The survey used a wedding-cake strategy with the Hectospec covering galaxies in the \(\approx 1^{\circ}\) fields centered on the QSOs and the DEIMOS survey providing a deeper survey with a smaller field of view (81.5 arcmin\({}^{2}\)) (see Prochaska et al., 2019). Using the CASBaH galaxy database, supplemented with data from public surveys such as SDSS, we selected a sample of 6701 galaxies with spectroscopic redshifts \(z<0.481\) and comoving impact parameters less than 13 cMpc, appropriate for the H i analysis presented here.
### Synergy of CGM\({}^{2}\) + CASBaH
The CASBaH and CGM\({}^{2}\) surveys have complementary designs. On the one hand, CGM\({}^{2}\) is built on COS-Halos and thus favors at least one \(L*\) galaxy close to the sightline. CGM\({}^{2}\) also covers a smaller FOV. On the other hand, CASBaH is a blind survey that covers a larger FOV. Consequently, CASBaH provides more information about galaxies and large-scale structures at larger impact parameters, but as a blind survey, it is cross-section weighted in favor of galaxies at larger impact parameters. Also, since CASBaH avoided sightlines with black Lyman limits in the HST band (i.e., at \(\lambda_{\rm ob}\geq 1150\) A), it will not include galaxies at \(z_{\rm gal}>0.26\) that harbor absorbers with \(N\)(H i) \(\gtrsim 10^{17}\) cm\({}^{-2}\). Thus, CGM\({}^{2}\) probes the inner CGM including higher \(N\)(H i) absorbers, while CASBaH complements CGM\({}^{2}\) by adding very large samples of galaxies and structures at larger distances.
### Galaxy Properties
To estimate the galaxy properties for both surveys, we used CIGALE (Noll et al., 2009; Boquien et al., 2019) to fit the spectral energy distribution (SED) and retrieve stellar mass and star formation rates (SFR). We used the Bruzual & Charlot (2003) stellar population models, assuming a Chabrier (2003) initial mass function (IMF). We chose a grid of metallicities ranging from 0.001-2.5\(Z_{\odot}\). A delayed star formation history (SFH) model was employed with an exponential burst. The
e-folding time of the main stellar population models ranged from 0.1-8 Gyr. We varied the age of the oldest stars in the galaxy from 2-12 Gyr. We included an optional late burst with an e-folding time of 50 Myr and an age of 20 Myr. The burst mass fraction was set to either 0.0 or 0.1, turning this feature off or on. Nebular emission and reprocessed dust models (Dale et al., 2014) were also included with the default values. The dust models have slopes ranging from \(1-2.5\) and the nebular models include no active galactic nuclei.
We employed the Calzetti et al. (1994) dust attenuation law, but we also included a "bump" in the UV (see discussion in Prochaska et al., 2019) at 217.5 nm with a FWHM of 35.6 nm. The bump amplitude is set at 1.3 and the power law slope is -0.13 (Lo Faro et al., 2017). We varied the color excess of the stellar continuum from the young population, E(B-V), from 0.12-1.98. Finally, we used a reduction factor of 0.44 to the color excess for the old population compared to the young stars.
CIGALE then provides us with Bayesian estimates for the stellar mass and SFR for each galaxy in the combined catalog. In order to calculate the virial radius we used the abundance matching method of Moster et al. (2013) with the modifications used in Burchett et al. (2016). We adopt the convention of using \(R_{\rm vir}=R_{200m}\), the radius within which the average mass density is 200 times the mean matter density of the universe, as the virial radius (\(R_{\rm vir}\)) of a galaxy halo.
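As an illustration of this convention (not code from the survey pipeline), \(R_{200m}\) follows directly from a halo mass and redshift; the Planck15 cosmology below is our own choice:

```python
import numpy as np
import astropy.units as u
from astropy.cosmology import Planck15 as cosmo

def r200m(m_halo_msun, z):
    """Radius enclosing a mean density of 200 times the mean matter
    density of the universe at redshift z (the R_vir = R_200m convention)."""
    m = m_halo_msun * u.Msun
    rho_m = cosmo.Om(z) * cosmo.critical_density(z)  # mean matter density
    r = (3 * m / (4 * np.pi * 200 * rho_m)) ** (1 / 3)
    return r.to(u.kpc)

# e.g. a ~10^12 Msun halo at z = 0.2 gives roughly a few hundred kpc
print(r200m(1e12, 0.2))
```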
### Combining the CGM\({}^{2}\) and CASBaH Surveys
In order to combine the surveys, we modified both catalogs to ensure the same matching criteria between galaxies and absorbers. In the original CGM2 survey, we measured the \(2\sigma\) upper limit on absorption within \(\delta v=\pm 30\) km s\({}^{-1}\) of the galaxy's redshift using the normalized error of the quasar flux when no absorption system was found within our \(|\delta v|<500\) km s\({}^{-1}\) window. In order to match the CASBaH survey, we adjusted this to a \(3\sigma\) upper limit. This did not change our results in a meaningful way. The original CASBaH survey used a velocity window of \(|\delta v|<400\) km s\({}^{-1}\) to match the galaxies to absorption systems. We adjusted the window for this work to \(|\delta v|<500\) km s\({}^{-1}\) to match the CGM2 survey. As in Paper I, we restrict our H i measurements to those at \(z<0.481\) since at this redshift, the Lyman-\(\alpha\) line redshifts out of the G160M grating band, and thus we are only sensitive to higher order transitions at higher redshifts.
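A minimal sketch of such a velocity-window match (our own simplified version, not the survey code):

```python
import numpy as np

C_KMS = 299792.458  # speed of light in km/s

def velocity_offset(z_abs, z_gal):
    """Rest-frame velocity offset of an absorber from a galaxy."""
    return C_KMS * (z_abs - z_gal) / (1.0 + z_gal)

def match_absorbers(z_gal, z_absorbers, dv_max=500.0):
    """Absorber redshifts within |dv| < dv_max (km/s) of the galaxy."""
    z_abs = np.asarray(z_absorbers)
    return z_abs[np.abs(velocity_offset(z_abs, z_gal)) < dv_max]

# e.g. a galaxy at z = 0.25 and three absorption systems:
# only the first two fall within the 500 km/s window.
print(match_absorbers(0.25, [0.2490, 0.2513, 0.2600]))
```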
Having made these two small changes to each survey, both could be combined to give us a total survey that includes 7244 galaxies spanning \(\sim 0.01-8\) comoving Mpc in impact parameter around 28 QSO sightlines. The distributions of impact parameter, redshift, and stellar mass are shown in Figure 1. In this paper, we will focus on galaxies with \(8<\log M_{\star}/M_{\odot}<10.5\), a stellar mass range with good coverage in both surveys, which trims our galaxy sample to 6136 galaxies from CASBaH and 453 galaxies from CGM\({}^{2}\) for a total sample of 6589 absorber-galaxy pairs. The number of absorber-galaxy pairs is summarized in Table 1.
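As an illustration of how absorbers and galaxies are paired, the hedged sketch below applies the standard velocity-offset convention \(\delta v=c\,(z_{\rm abs}-z_{\rm gal})/(1+z_{\rm gal})\) with the \(|\delta v|<500\) km s\({}^{-1}\) window adopted above; the inputs are toy values, not survey data.

```python
import numpy as np

C_KMS = 299792.458  # speed of light in km/s

def match_absorbers(z_gal, z_abs, dv_max=500.0):
    """For each galaxy redshift, return indices of absorbers within
    |dv| < dv_max, with dv = c (z_abs - z_gal) / (1 + z_gal)."""
    z_abs = np.asarray(z_abs)
    matches = []
    for zg in np.atleast_1d(z_gal):
        dv = C_KMS * (z_abs - zg) / (1.0 + zg)
        matches.append(np.flatnonzero(np.abs(dv) < dv_max))
    return matches

# Toy example: one galaxy at z = 0.20 and three candidate absorbers
print(match_absorbers([0.20], [0.1990, 0.2005, 0.2500]))
```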
## 3 Modeling absorber-galaxy clustering
We model the CGM using an absorber-galaxy cross-correlation analysis. This technique is based on modeling the covering fraction, \(f_{c}\), as a binomial probability distribution of detections. To ensure high completeness in the absorber sample, based on the S/N of the data, we require a total column density \(N_{\rm HI}\geq 10^{14}\) cm\({}^{-2}\) to consider the sightline to have a "detection". Likewise, a non-detection is the case where we do not detect gas above this threshold. The models used here are based on those employed in Paper I, which were inspired by the models developed by Hennawi and Prochaska (2007) and Prochaska et al. (2019). A more detailed explanation can be found in those three papers. In Paper I, we found a mass dependence of the extent of the CGM based on dividing the data into three mass bins. In this work, we wish to quantify the mass dependence of the clustering as well as determine the redshift dependence given our data.
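A minimal sketch of the empirical covering-fraction measurement is shown below; the impact-parameter binning and the simple binomial error bar are illustrative choices for this sketch, not necessarily the exact estimator used for the figures.

```python
import numpy as np

LOG_N_THRESH = 14.0  # detection: log10 N(HI) >= 14

def covering_fraction(r_perp, log_n_hi, r_edges):
    """Empirical covering fraction f_c in impact-parameter bins, with a
    simple binomial (normal-approximation) error bar per bin."""
    r_perp = np.asarray(r_perp)
    hit = np.asarray(log_n_hi) >= LOG_N_THRESH
    fc, err = [], []
    for lo, hi in zip(r_edges[:-1], r_edges[1:]):
        sel = (r_perp >= lo) & (r_perp < hi)
        n = int(sel.sum())
        if n == 0:
            fc.append(np.nan); err.append(np.nan)
            continue
        p = hit[sel].mean()
        fc.append(p)
        err.append(np.sqrt(p * (1.0 - p) / n))
    return np.array(fc), np.array(err)

# Toy example: six sightlines split into two impact-parameter bins (cMpc)
print(covering_fraction([0.1, 0.2, 0.4, 0.6, 0.8, 0.9],
                        [14.5, 13.2, 15.0, 13.0, 13.9, 14.2],
                        np.array([0.0, 0.5, 1.0])))
```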
### Single Power-Law Model
The single power-law model consists of two terms: the base rate of detection due to the random incidence of absorbers greater than this threshold and an excess above this base rate due to the clustering of galaxy-absorber pairs.
Much like Prochaska et al. (2019), we define the 3D absorber-galaxy cross-correlation function, \(\xi_{ag}(r)\) as
\[\xi_{ag}(r)=\left(\frac{r}{r_{0}}\right)^{-\gamma}. \tag{1}\]
To model the galaxy mass dependence of the clustering, we add a new mass dependence to the clustering scale, \(r_{0}\),
\[r_{0,m}(m)=r_{0}\left(\frac{M_{\star}}{M_{0}}\right)^{\beta}. \tag{2}\]
As before, we examine the projected 2-D correlation function, which is obtained by integrating the 3-D correlation function over the line of sight
\[\chi_{\perp}(r)=\frac{1}{\Delta r_{\parallel}}\int_{r_{\parallel}}\xi_{ag}( \sqrt{r_{\parallel}^{2}+r_{\perp}^{2}}\ )dr_{\parallel}, \tag{3}\]
| Survey | \(10^{7-11.3}M_{*}/M_{\odot}\) | \(10^{8-10.5}M_{*}/M_{\odot}\) | \(10^{8-9}M_{*}/M_{\odot}\) | \(10^{9-10}M_{*}/M_{\odot}\) | \(10^{10-10.5}M_{*}/M_{\odot}\) |
| --- | --- | --- | --- | --- | --- |
| CGM\({}^{2}\) | 543 | 453 | 103 | 271 | 79 |
| CASBaH | 6701 | 6136 | 1265 | 3545 | 1326 |
| Total | 7244 | 6589 | 1368 | 3816 | 1405 |

Table 1: Number of Absorber-Galaxy Pairs. Column (1) lists each survey and the total of the combined surveys; column (2) gives the number of absorber-galaxy pairs in the entire mass range; column (3) the pairs in the mass range used to perform the model fitting; columns (4, 5, 6) the number of absorber-galaxy pairs within each mass bin used for model verification.
Figure 2: Corner plots showing the posterior probability distributions for the parameters in the single power-law clustering model. We find a non-zero, positive mass dependence term in the two-halo absorber-galaxy clustering, \(\beta^{2h}\).
where \(r_{\parallel}\) is the line-of-sight distance, \(r_{\perp}\) is the transverse distance, and \(\Delta r_{\parallel}\) is the size of the redshift window.
For simplicity of notation, \(r\) is equivalent to \(r_{\perp}\) in the following analysis.
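Equations (1) and (3) can be evaluated numerically as in the sketch below, with illustrative parameter values; the normalization uses \(\Delta r_{\parallel}=2s_{\rm max}\) for a symmetric line-of-sight window.

```python
import numpy as np
from scipy.integrate import quad

def xi_ag(r, r0=3.6, gamma=1.6):
    """3D absorber-galaxy correlation function, Eq. (1)."""
    return (r / r0)**(-gamma)

def chi_perp(r_perp, s_max, r0=3.6, gamma=1.6):
    """Projected correlation, Eq. (3): integrate the 3D correlation along
    the line of sight over [-s_max, s_max] (symmetric, hence the 2x) and
    normalize by the window Delta r_par = 2 s_max."""
    val, _ = quad(lambda s: xi_ag(np.hypot(s, r_perp), r0, gamma), 0.0, s_max)
    return 2.0 * val / (2.0 * s_max)

# Example: r_perp = 1 cMpc with a +/- 7 cMpc line-of-sight window
print(chi_perp(1.0, 7.0))
```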
In the following definitions, we label the single power law clustering terms "2-halo," as the galaxy clustering method we adopt here describes the clustering of separate dark matter halos. This approach distinguishes the "two-halo" only method from the two-component model we develop later in this manuscript.
In order to model \(f_{c}\), we assume that the number of detected absorbers above the column-density threshold has a Poisson distribution. We consider two cases: (1) one or more absorbers detected, and (2) the case where no absorbers are detected. In this framework the probability of seeing no absorbers is
\[P^{\rm miss}=\frac{\lambda^{0}\exp(-\lambda)}{0!} \tag{4}\]
where we denote the rate of incidence (see below) as \(\lambda\). The probability of finding one or more absorbers is just the complement of Equation 4,
\[f_{c}=1-P^{\rm miss}. \tag{5}\]
We model the rate of absorber incidence using the projected correlation function, the 2-halo term, as the excess over the probability of randomly intersecting an absorber with \(N_{\rm HI}>10^{14}\) cm\({}^{-2}\) in the redshift window,
\[\lambda=(1+\chi_{\perp}^{2h})\ \langle d\mathcal{N}/dz\rangle\delta z, \tag{6}\]
where \(\langle d\mathcal{N}/dz\rangle\) is the base rate of detection due to the random incidence of absorbers greater than this threshold and \(\delta z\) is the line-of-sight redshift window.
In addition to parameterizing the mass dependence as in Equation (2), we also parameterize the redshift dependence of \(\langle dN/dz\rangle\) as follows:
\[\frac{d\mathcal{N}(\rm N_{\rm HI}\geq N_{\rm HI}^{14},z)}{dz}=C_{0}(1+z)^{ \alpha}, \tag{7}\]
where \(N_{\rm HI}^{14}\) denotes the column-density threshold of \(10^{14}\) cm\({}^{-2}\), \(C_{0}\) is the random rate of incidence at \(z=0\), and \(\delta z\) is the redshift window. We adopt a redshift window corresponding to \(\pm 500\) km s\({}^{-1}\) in velocity units.
Thus, we have a rate of incidence of the form
\[\lambda=(1+[\chi_{\perp}^{2h}(r,m|r_{0}^{2h},\gamma^{2h},\beta^{2h})])\ \langle d\mathcal{N}(z|C_{0},\alpha)/dz\rangle\ \delta z. \tag{8}\]
Finally, we construct the likelihood function,
\[\mathcal{L}=\prod_{i}P^{\rm hit}(r_{i},z_{i},m_{i}|\theta)\prod_{j}P^{\rm miss }(r_{j},z_{j},m_{j}|\theta), \tag{9}\]
where \(\theta=[r_{0}^{2h},\gamma^{2h},\beta^{2h},C_{0},\alpha]\).
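Assembled in code, the likelihood looks like the following sketch, which reuses the chi_perp helper from the earlier snippet; the line-of-sight half-window and the pivot mass \(M_{0}=10^{9.5}M_{\odot}\) are written in for illustration.

```python
import numpy as np

M0 = 10**9.5   # pivot mass used in the fits
S_MAX = 7.0    # illustrative line-of-sight half-window in cMpc

def rate_lambda(r, m, z, theta, dz):
    """Poisson rate of incidence, Eq. (8), for one absorber-galaxy pair."""
    r0, gamma, beta, c0, alpha = theta
    r0_m = r0 * (m / M0)**beta                      # Eq. (2)
    chi = chi_perp(r, S_MAX, r0=r0_m, gamma=gamma)  # Eq. (3), helper above
    dndz = c0 * (1.0 + z)**alpha                    # Eq. (7)
    return (1.0 + chi) * dndz * dz

def log_likelihood(theta, hits, misses, dz):
    """Log of Eq. (9); 'hits' and 'misses' are lists of (r_perp, M_star, z)."""
    lp = 0.0
    for r, m, z in hits:      # log P_hit = log(1 - e^{-lambda})
        lp += np.log1p(-np.exp(-rate_lambda(r, m, z, theta, dz)))
    for r, m, z in misses:    # log P_miss = -lambda
        lp -= rate_lambda(r, m, z, theta, dz)
    return lp
```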
In constructing our Bayesian model, we must choose priors. For the single power law parameters, we chose the priors based on the results of cross-correlation analysis by Tejos et al. (2014) except for our new mass dependent term, \(\beta^{2h}\), which was motivated by physical arguments:
* \(r_{0}^{2h}\sim\mathcal{N}(\mu=3.2,\sigma=0.3)\), \(r_{0}^{2h}>0\)
* \(\gamma^{2h}\sim\mathcal{N}(\mu=1.7,\sigma=0.1)\), \(\gamma^{2h}>0\)
* \(\beta^{2h}>0\),
where \(\mathcal{N}\) is the normal distribution with mean \(\mu\) and variance \(\sigma^{2}\).
The priors for the redshift dependence were chosen based on the findings in Kim et al. (2021):
* \(C_{0}\sim\text{Lognormal}(\mu=1.25,\sigma=0.11)\), \(C_{0}>0\)
* \(\alpha\sim\mathcal{N}(\mu=0.97,\sigma=0.87)\), \(-3<\alpha<3\)
We note that we chose to use the more recent results of Kim et al. (2021) in modeling the redshift evolution instead of those from Danforth et al. (2016), as was used in Paper I.
As in Paper I, we apply the Bayesian Markov Chain Monte Carlo (MCMC) sampler emcee(Foreman-Mackey et al., 2013) to generate samples from the posterior probability distribution function to estimate the parameters of interest and their distributions, using Equation (9) and the priors described above.
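A sketch of this sampling step with emcee is shown below. The priors follow the bullet lists above, while the walker count, chain length, toy catalogs, and the \(\delta z\simeq 0.004\) window (roughly \(\pm 500\) km s\({}^{-1}\) at these redshifts) are illustrative stand-ins rather than the settings used for the published chains; it also reuses the log_likelihood helper sketched above.

```python
import numpy as np
import emcee

def log_prior(theta):
    """Truncated-normal / lognormal priors from the bullet lists above."""
    r0, gamma, beta, c0, alpha = theta
    if r0 <= 0 or gamma <= 0 or beta <= 0 or c0 <= 0 or not (-3 < alpha < 3):
        return -np.inf
    lp = -0.5 * ((r0 - 3.2) / 0.3)**2
    lp += -0.5 * ((gamma - 1.7) / 0.1)**2
    lp += -0.5 * ((np.log(c0) - 1.25) / 0.11)**2 - np.log(c0)  # lognormal
    lp += -0.5 * ((alpha - 0.97) / 0.87)**2
    return lp

def log_posterior(theta, hits, misses, dz):
    lp = log_prior(theta)
    if not np.isfinite(lp):
        return -np.inf
    return lp + log_likelihood(theta, hits, misses, dz)

# Toy stand-ins for the catalogs: (r_perp [cMpc], M_star [Msun], z)
hits = [(0.3, 10**9.8, 0.20), (1.5, 10**9.0, 0.35)]
misses = [(4.0, 10**8.5, 0.15), (6.0, 10**10.2, 0.40)]
dz = 0.004  # roughly a +/-500 km/s window at these redshifts

ndim, nwalkers = 5, 32  # illustrative sampler settings
p0 = np.array([3.2, 1.7, 0.1, 3.5, 1.0]) * (1.0 + 0.01 * np.random.randn(nwalkers, ndim))
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_posterior, args=(hits, misses, dz))
sampler.run_mcmc(p0, 2000, progress=True)
```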
In Figure 2, we show the posterior distributions of our single power-law model with \(M_{0}=10^{9.5}M_{\odot}\). These were fit only to data with \(8<\log M_{\star}/M_{\odot}<10.5\), as above this range there is a change in the virial radius due to the \(M_{\star}-M_{\rm halo}\) relation from abundance matching (Moster et al., 2013). Below this mass range we find a very flat covering fraction profile, which does not show a clustering signal.
### Two-component Models
The single power-law model used in galaxy-galaxy clustering and adapted above to model the galaxy-absorber clustering makes no assumption of a CGM or overlapping (in projection) gaseous halos. However, the existence of the CGM is now well-established (Tumlinson et al., 2017). In particular, the trends of ionized metal species with impact parameter around L* and sub-L* galaxies from \(z=0-3.5\) distinctly show that metal-enriched gaseous atmospheres are a fundamental component of galaxies (e.g. Werk et al., 2013; Lehner et al., 2014; Bordoloi et al., 2014; Borthakur et al., 2015;
Rudie et al., 2019). In the following section, we therefore assume the existence of the CGM and use a simple Gaussian profile to model the excess clustering signal due to the presence of the CGM. In addition, we investigated several other functional forms of the CGM component, which we describe in §3.2.2. We find that the particular functional form of this component has little impact on the results.
#### 3.2.1 The Gaussian CGM Two-Component Model
We now add a third term to the detection rate: a Gaussian 1-halo component. The detection rate now consists of a baseline random incidence rate, an enhancement due to large-scale absorber-galaxy clustering, and an additional enhancement due to the CGM. We employ an exclusion model where the contribution from the 2-halo term terminates at the distance it reaches the 1-halo component. This scheme, shown in Figure 3, also allows us to determine a natural estimate of the extent of the CGM: the crossing point of the 1- and 2-halo components. More explicitly, within some radius, the galaxy has a CGM that we define as the gas of that galaxy and of any satellite galaxies within its halo. Our formalism then defines \(R_{\rm cross}\) as the radius where this CGM component exceeds the 2-halo term.
The model is similar to the single power-law model we introduced before, with a few key differences. We introduce a Gaussian one-halo term defined as:
\[G(r)^{1h}=Ae^{-(r/\sigma)^{2}}. \tag{10}\]
At the radius where the two models intersect, \(R_{\rm cross}\), we can solve for \(\sigma\) as
\[\sigma=\sqrt{\frac{1}{2}\frac{R_{\rm cross}^{2}}{\ln(A)+\gamma\ln(R_{\rm cross }/r_{0})}}. \tag{11}\]
It should be noted that \(R_{\rm cross}\) here is the 3-D distance and not the projected distance. In order to characterize the mass dependence of \(R_{\rm cross}\) we define
\[R_{\rm cross}=R_{\rm cross,0}\left(\frac{M_{\star}}{M_{0}}\right)^{\beta^{1h}}, \tag{12}\]
where \(R_{\rm cross,0}\) is the 1-halo term extent for a galaxy at the fixed pivot mass \(M_{0}\). The galaxy mass dependence of \(\sigma\) includes contributions from the mass dependencies of \(R_{\rm cross}\) and \(r_{0}\).
This parameterization allows us to compare the mass dependence of the 1-halo term, \(\beta^{1h}\) with that of the 2-halo term, \(\beta^{2h}\).
In order to solve for the projected clustering signal, \(\chi\), we first make some definitions to ease the notation. We use \(s=r_{\parallel}\) in the remainder of the analysis. The integration is performed over different portions of the line-of-sight distance, \(s\), corresponding to the 1- and 2-halo components. We define the line-of-sight crossing point \(s_{\rm cross}\) as
\[s_{\rm cross}=\sqrt{\max(R_{\rm cross}^{2}-r_{\perp}^{2},0)}, \tag{13}\]
and we can then integrate Equation 10 to \(s_{\rm eval}=\min(s_{\rm cross},s_{\rm max})\), where \(s_{\rm max}\) is the maximum line-of-sight distance we wish to integrate to, which in our case corresponds to the \([-500,500]\) km s\({}^{-1}\) window. Thus we have
\[\chi(r_{\perp})\propto 2\int_{0}^{s_{\rm eval}}G(r_{\perp},s)^{1h}ds+2\int_{s_{ \rm eval}}^{s_{\rm max}}\xi(r_{\perp},s)^{2h}ds \tag{14}\]
where the factor of 2 comes from the fact that both components are symmetric. Here we integrate the one-halo component over the more nearby regime out to \(s_{\rm eval}\) and only integrate the 2-halo term beyond \(s_{\rm eval}\) out to the maximum line of sight distance, thus excluding the regimes in which the models do not apply. For the two-component model, we choose fairly weak priors on unknown parameters based on physical arguments while following the same priors as described above for the parameters in the single power-law model:
* \(\beta^{1h}>-3\)
* \(A>0\)
* \(R_{\rm cross}>0\)
Figure 3: A schematic depiction of our two-component exclusion model and the determination of \(R_{\rm cross}\). The 2-halo component cuts off interior to \(R_{\rm cross}\).
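Equations (10)-(14) can be evaluated numerically as in the sketch below; the parameter values are purely illustrative (not the fitted posteriors), and the continuity condition of Equation (11) requires \(\ln A+\gamma\ln(R_{\rm cross}/r_{0})>0\).

```python
import numpy as np
from scipy.integrate import quad

def sigma_from_rcross(r_cross, amp, r0, gamma):
    """Gaussian width implied by continuity at R_cross, Eq. (11)."""
    return np.sqrt(0.5 * r_cross**2 /
                   (np.log(amp) + gamma * np.log(r_cross / r0)))

def chi_two_component(r_perp, amp, r_cross, r0, gamma, s_max):
    """Projected two-component signal, Eq. (14): Gaussian 1-halo term
    integrated out to s_eval, power-law 2-halo term beyond it."""
    sig = sigma_from_rcross(r_cross, amp, r0, gamma)
    s_cross = np.sqrt(max(r_cross**2 - r_perp**2, 0.0))              # Eq. (13)
    s_eval = min(s_cross, s_max)
    one_halo = lambda s: amp * np.exp(-(s**2 + r_perp**2) / sig**2)  # Eq. (10)
    two_halo = lambda s: (np.hypot(s, r_perp) / r0)**(-gamma)        # Eq. (1)
    i1, _ = quad(one_halo, 0.0, s_eval)
    i2, _ = quad(two_halo, s_eval, s_max)
    return 2.0 * (i1 + i2) / (2.0 * s_max)

# Illustrative parameters only (not the fitted posteriors)
print(chi_two_component(0.2, amp=40.0, r_cross=0.6, r0=4.0, gamma=1.6, s_max=7.0))
```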
We can then follow the same MCMC fitting procedure described above to determine the posteriors for the parameters in this model as well as the crossing radius, \(R_{\rm cross}\). These are shown in Figure 4. As before, we only fit data with \(8<\log M_{\star}/M_{\odot}<10.5\) and use \(M_{0}=10^{9.5}M_{\odot}\).
#### 3.2.2 Other Two-Component Models
While the single power-law clustering model does an adequate job reproducing the data on large spatial scales, its contribution is insufficient at \(R_{\perp}\lesssim 200\) kpc as can be seen in Figure 5 (pink curve). Furthermore, the primary goal of our study is to find the boundary between the CGM and IGM, and thus including a CGM component is essential for this purpose. We explored several candidate functional forms for this CGM component.
We first investigated a two-component model where each component is represented by a power law, inspired by the 1-halo and 2-halo terms that are used to model the clustering of galaxies. The 3D and projected forms
Figure 4: Posterior probabilities for the parameters in the two-component clustering model. We again recover a non-zero, positive mass dependence term in the two-halo absorber-galaxy clustering, \(\beta^{2h}\), but find an even stronger one-halo CGM clustering mass dependence, \(\beta^{1h}\simeq 0.14\pm 0.07\).
of the two absorber-galaxy correlation functions are given by Equations 1 and 3, respectively, and the two-component correlation function is the sum of these parts. We also considered a model where the two-component correlation function is, in 3D, the maximum of the two power laws. This is similar to our chosen model, but with an inner power law rather than an inner Gaussian profile.
To rise above the outer power law component at small radii, the inner power law has to be steeper. In practice, the two power law indices turned out to be similar, yielding essentially the same result as a single power law fit. This outcome is not unexpected: the enhancement in the incidence rate or surface density of gas near galaxies often does not resemble a steepening power law at small radii (Zhu et al., 2014; Lan, 2020).
In those studies, the enhancement is better described by a function that declines gradually (compared to a power law) at small radii and quickly at large radii. The top-hat function, which has amplitude \(A\) inside a boundary and amplitude \(0\) outside the boundary, is an extreme example of this class. Our adopted Gaussian profile allows a smoother transition between the CGM-like and outer components of the model. However, we note that a fit to the data combining an inner 3D top-hat with an outer power law yields an \(R_{\rm cross}(M_{*})\) that is effectively indistinguishable from the one that emerges from the Gaussian component model.
### Model Comparison
In addition to comparing the two models to each other, Figure 5 compares the models to the empirical covering fraction as a function of impact parameter and mass. The data are shown in black with \(1\sigma\) error bars. The single power-law model is shown in pink while the two-component model is shown in purple. Both models recreate the covering fractions in all mass bins at all values of \(R_{\perp}\) except for one data point in the \(\log M_{*}/M_{\odot}=9-10\) bin at \(R_{\perp}\approx 200\) kpc. Moreover, the two models make different predictions at low \(R_{\perp}\), except in the lowest mass bin (\(\log M_{*}/M_{\odot}<9\)) where there is no discernible excess above the clustering signal. This does not preclude the presence of a CGM around these galaxies, but rather suggests that we require more data at lower \(R_{\perp}\) for galaxies with \(\log M_{*}/M_{\odot}<9\) to be able to constrain \(R_{\rm cross}\) at these masses.
The two-halo only model under-predicts the observed signal for galaxies at intermediate masses (\(\log M_{*}/M_{\odot}=9-10\)). The two-component model does better for galaxies of \(M_{\star}=10^{9-10}M_{\odot}\) at the lowest impact parameters, where the single power-law model underestimates the covering fraction, although not significantly so. At \(R_{\perp}<300\) kpc, one detects 52 H i systems where 46 systems are predicted. Assuming Poisson statistics, the two-halo only model is consistent with the data at the \(1\sigma\) level. Analogous to the one-halo term of galaxy-galaxy clustering, the data themselves do not require an enhanced covering fraction of H i absorption that we identify as the CGM.
We find the 1-halo component has a stronger clustering mass dependence, \(\beta^{1h}\simeq 0.14\pm 0.07\), than the two-halo term, \(\beta^{2h}\simeq 0.08\pm 0.03\). We also find the 2-halo clustering terms in each model to be internally consistent with each other as seen in Figure 6.
## 4 Results
Figure 5: Comparison of our two models to the empirical covering fraction as a function of impact parameter in comoving kpc in mass bins of \(10^{8-9}M_{\odot}\), \(10^{9-10}M_{\odot}\) and \(10^{10-10.5}M_{\odot}\). The data are shown in black with \(1\sigma\) error bars. The single power-law model is shown in pink while the two-component model is shown in purple. The vertical dotted line denotes \(R_{\rm cross}\) in each mass bin. Both models recreate the covering fraction of the data in all mass bins except for the lowest mass bin where the clustering signal disappears. The two-component model provides a better match to the data for galaxies of \(M_{\star}>10^{9}M_{\odot}\) at the lowest impact parameters where the single power law model underestimates the covering fraction.
### Clustering Mass Dependence
As seen in Figure 2, we find the clustering parameters to be \(r_{0}=3.6\pm 0.3\) cMpc and \(\gamma=1.55\pm 0.05\). These are consistent with those found in Tejos et al. (2014), who find \(r_{0}=3.8\pm 0.2\) cMpc and \(\gamma=1.7\pm 0.1\). We also find a mass dependence of the absorber-galaxy clustering of \(\beta^{2h}=0.07^{+0.03}_{-0.02}\).
We find that the two-component model better fits the data, as can be seen in Figure 5. Specifically, the two-component model better matches the covering fraction for galaxies of \(M_{\star}>10^{9}M_{\odot}\) at the lower impact parameters, where the single power-law model underestimates the covering fraction. In addition, we find the two-component model reproduces the mass dependence of the 2-halo clustering term, \(\beta^{2h}\simeq 0.07\), while also producing a stronger mass dependence of the 1-halo clustering term, \(\beta^{1h}\simeq 0.14\).
### Physically-Motivated Extent of the CGM
As mentioned above, using the two-component model produces an estimate of \(R_{\rm cross}\), a natural metric for the extent of the CGM. This 3-D distance demarcates where the contribution to the clustering begins to be dominated by the CGM above the expected two-halo clustering due to isolated galaxy halos traced by H I. \(R_{\rm cross}\)
Figure 6: Comparison of the two-halo 3D cross correlation posteriors between the two-component model (\(r_{0}=3.99^{+0.28}_{-0.24}\) cMpc, \(\gamma=1.62\pm 0.07\)) and the single power-law model (\(r_{0}=3.58^{+0.28}_{-0.24}\) cMpc, \(\gamma=1.55\pm 0.05\)). The two models are consistent with each other within the \(1\sigma\) limits and have a power-law slope consistent with the absorber-galaxy 3D cross correlation found in the literature (e.g. Tejos et al., 2014) of \(\gamma=1.7\pm 0.1\).
can be viewed as the maximum radius to which an enhancement from the CGM could extend without over-predicting the data at large radii.
In Figure 7, we see \(R_{\rm cross}\) (blue) compared with the spread in virial radii of the galaxy sample (grey filled region). The filled blue region represents the \(1\sigma\) limits of the distribution in \(R_{\rm cross}\) while the blue line denotes the median of this distribution. We find \(R_{\rm cross}\) is \(\sim 2\pm 0.6R_{\rm vir}\) for galaxies in the range \(8<\log(M_{\star}/M_{\odot})<10.5\). The black crosses correspond to the values published in Paper I, defined as the extent where there is a 50% chance to see H i absorption above \(10^{14}\) cm\({}^{-2}\). The vertical dotted lines denote the mass range of \(8<\log(M_{\star}/M_{\odot})<10.5\) that was used in our MCMC analysis. Above this range, we see a change in the relation of the virial radius with stellar mass, and below this mass range, we find little to no correlation between absorbers and galaxies (see Figure 5).
We also calculated the splashback radius, \(R_{\rm sp}\), using the method from Diemer (2018) as implemented in the COLOSSUS3 package. This radius denotes the location at which particles reach the apocenter of their first orbit. We find excellent agreement between \(R_{\rm cross}\) and the results in Paper I, and \(R_{\rm cross}\) neatly matches the splashback radius for galaxies in this mass range. We discuss these results in more detail below.
Footnote 3: [https://bdiemer.bitbucket.io/colossus/](https://bdiemer.bitbucket.io/colossus/)
## 5 Discussion
Both of the models we investigate do an adequate job of recreating the cross correlation signal at all impact parameters and masses \(10^{8}<M_{\star}<10^{10.5}M_{\odot}\) as seen in Figure 5. It is not entirely clear that the single power law model has any physically-consistent meaning, however. Effectively, it would seem to signify that every time one measures H i absorption at the same redshift as a particular galaxy (\(|\Delta v|<500\) km s\({}^{-1}\)), the absorption is always due to _another galaxy's CGM_. Note, we would conclude this for all galaxies, i.e. each has no CGM and only neighbors with a CGM. This is clearly impossible. The two-halo-only model for the CGM effectively breaks down when the galaxies lie within the halo under consideration, i.e. when they "mix." We cannot and do not try to distinguish between the two. However, our formalism does allow one to identify the outer extent of this "mixing."
The two-component model asserts that galaxies with \(M_{\star}>10^{8}M_{\odot}\) have a CGM, an assumption that is motivated by previous survey results (e.g. Werk et al., 2013). Additionally, this model is able to better recreate the data, from the combined datasets of CGM\({}^{2}\) + CASBaH, which together represent the largest sample of galaxies with confirmed spectroscopic redshifts in the foregrounds of UV-bright QSOs with high-resolution absorption spectroscopy, both at smaller impact parameters and at \(M_{\star}>10^{9}\)\(M_{\odot}\).
The much larger number of galaxies at larger impact parameters drives the fit of the models to the data. There is, however, a \(>1\sigma\) inconsistency between the two-halo only model and the data at \(R_{\perp}\sim 200\) kpc, and for both models at \(R_{\perp}\sim 600\) kpc, in the \(\log M_{\star}/M_{\odot}=9-10\) mass range. The latter inconsistency may be due to cosmic variance or to the assumption that the absorber-galaxy measurements are independent and not correlated, which would increase the scale of the error bars at \(R_{\perp}\sim 600\) kpc.
### Comparing the mass dependence of the single and two-component models
Our galaxy sample includes a large number of galaxies at low (\(<500\) kpc) impact parameters which allows us to better model the regime in which the two-halo galaxy clustering becomes dominated by the signal of galaxies that inhabit the same dark matter halo, the one-halo term. By separating these two terms in the manner presented here, we can disentangle the large scale clustering as well as the contribution of the CGM to the 3D correlation of absorbers and galaxies.
Figure 7: A comparison of \(R_{\rm cross}\) with the virial radius (\(R_{\rm vir}\), grey filled region) as well as the splashback radius (\(R_{\rm splash}\), pink shaded region) of the galaxy sample. The filled regions in \(R_{\rm vir}\) and \(R_{\rm splash}\) denote the redshift range for the galaxies in our sample (\(0.1\lesssim z\lesssim 0.48\)). The filled blue region represents the \(1\sigma\) limits of the distribution in \(R_{\rm cross}\) while the blue line denotes the median of this distribution. The black crosses correspond to the values published in Paper I. The vertical dotted lines denote the mass range of \(8<\log(M_{\star}/M_{\odot})<10.5\) to which we limited the fitting in our MCMC analysis in Figure 5.
Our analysis finds nearly identical values in the two models for the mass dependence of the clustering at large scales, \(\beta^{2h}\), as well as for the contribution of absorbers at random, \(C_{0}\) and \(\alpha\). We do find a stronger mass dependence in the one-halo term, \(\beta^{1h}\), than at larger scales. This can be seen in Figure 5, where the correlation steepens in higher mass bins.
### Absorber-Galaxy Bias
Our covering fraction analyses provide an estimate of the galaxy-absorber correlation function, \(\xi_{ag}\) (eq. 1). Here, we test if the mass dependence of \(\xi_{ag}\) outside the CGM is consistent with absorption systems and galaxies simply being two independent tracers of the same underlying dark matter distribution. Assuming both tracers have linear bias, \(\xi_{ag}\) should be equal to \(b_{a}b_{g}\xi_{\rm DM}\), where \(b_{a}\) and \(b_{g}\) are the absorber and galaxy bias, respectively, and \(\xi_{\rm DM}\) is the dark matter 3D correlation function. Following Tinker et al. (2010) (hereafter, T10), we assume the dark matter correlation function can be described by a power-law function of radius with index \(\gamma=1.62\). We fix the power-law index of \(\xi_{ag}\), determined by fitting a single power law to the data, to this same value, with which it is consistent. With the above assumptions, \(\xi_{ag}=(r/r_{0}(M))^{-\gamma}=b_{a}b_{g}\xi_{\rm DM}(r)\). The radial dependence cancels, leaving the proportionality \(r_{0}(M)^{\gamma}\propto b_{a}b_{g}\).
We show a scaled \(r_{0}(M)^{\gamma}\) in Figure 8 along with the galaxy bias as a function of stellar mass from T10, as implemented in the COLOSSUS package (Diemer, 2018). If \(b_{a}\) is constant and the assumptions stated above hold, \(r_{0}(M)^{\gamma}\) should have the same mass dependence as the galaxy bias. While there is a visually apparent difference between the galaxy bias and the best-fit \(r_{0}(M)^{\gamma}\), this difference is not significant at a \(2\sigma\) level and so is merely suggestive. If the difference is real, it could be a consequence of the H i mass per dark matter mass being a function of overdensity. Up to the overdensities at which \(M_{\star}=10^{10.5}\)\(M_{\odot}\) galaxies tend to be found, this function would be increasing: H i would be less common in low density regions than in higher density filaments. This behavior would be consistent with theoretical expectations (e.g., Hui & Gnedin, 1997; Schaye, 2001; Dave et al., 2010) and observations (e.g., Rudie et al., 2012; Burchett et al., 2020).
### Comparison to Previous Work
One of the key aspects of this analysis is determining the mass dependence of the extent of the \(N_{\rm HI}>10^{14}\) cm\({}^{-2}\) gas, for which our model provides a direct metric, \(R_{\rm cross}(M_{\star})\). We compare our resulting \(R_{\rm cross}(M_{\star})\) to the method and results from Paper I in Figure 7. The results of Paper I, \(R_{\rm CGM}^{14}\), which are based only on the CGM\({}^{2}\) survey, are shown as black crosses in the mass bins they span in that paper. We also compare the method used in that paper to determine \(R_{\rm CGM}^{14}\), the radius at which the probability of detecting \(N_{\rm HI}>10^{14}\) cm\({}^{-2}\) is \(>50\%\), calculated with the two-component model using the combined CGM\({}^{2}\) + CASBaH surveys, and find it to be consistent within \(1\sigma\) with our newer model for \(R_{\rm cross}(M_{\star})\). We find that our mass-dependent estimate of the extent of the CGM, \(R_{\rm cross}(M_{\star})\), corroborates the finding of Paper I that the \(N_{\rm HI}>10^{14}\) cm\({}^{-2}\) gas extends to approximately twice the virial radius (\(\sim 2\pm 0.6R_{\rm vir}\)).
One of the main strengths of the CGM\({}^{2}\) + CASBaH sample is the large number of galaxies at small projected separations (\(<\)1 Mpc). This allows us to investigate the smaller scale regime in more detail within the context of similar studies such as Tejos et al. (2014) (hereafter, T14), who use a single power-law model to measure the two-point correlation between H i and galaxies above \(N_{\rm HI}>10^{14}\) cm\({}^{-2}\). In that work they break up their measurements into SF vs non-SF samples while we do not. Our sample, however, is dominated by the more common SF galaxies, and we therefore compare our results to their SF sample. Comparing our cross-correlation results with T14, we find good agreement between the results in T14 (\(r_{0}^{\rm T14}=3.8\pm 0.2\) Mpc, \(\gamma=1.7\pm 0.1\)) and the results from both models presented here: the two-component model (\(r_{0}=3.99^{+0.28}_{-0.24}\) Mpc, \(\gamma=1.62\pm 0.07\)) and the single power-law model (\(r_{0}=3.58^{+0.28}_{-0.24}\) Mpc, \(\gamma=1.55\pm 0.05\)). We find
Figure 8: A comparison of the slopes of the relative bias as a function of mass derived from our analysis (orange) compared to Tinker et al. (2010) (T10, black). The dashed lines correspond to the ranges spanned by the \(1\sigma\) limits in \(\beta^{2h}\). The relative bias, \(r_{0}(M)\propto(M_{\star}/M_{0})^{\gamma\beta}\), is normalized to the value of T10 at \(\log M_{\star}/M_{\odot}=9.5\). We find a steeper mass dependence than T10, but the significance of the difference is less than \(2\sigma\).
a mass dependence of this cross-correlation, however, as parameterized by \(\beta^{2h}\).
Our results are slightly in tension with Momose et al. (2021), who find that galaxies in the \(10^{9-10}M_{\odot}\) range dominate their H i-galaxy cross-correlation signal. In contrast, we find the largest mass bin sample to have the most elevated covering fractions at low impact parameter.
### Physical Extent of Galaxy Halos
Astronomers often use the virial radius as a means to describe the characteristic size of galaxy halos, and it is convenient to compare this to the extent of the gaseous galactic atmosphere as we have done here and in Paper I. The virial radius is typically defined in terms of the spherical overdensity mass definition, based on the radius that encloses an overdensity of 200 times the critical or mean density, i.e., \(R_{200c}\) and \(R_{200m}\). Because the mean and critical densities are decreasing over cosmic time, this can lead to a pseudo-evolution as pointed out in Diemer et al. (2013). In addition, subhalos show evidence of being stripped outside the virial radius of clusters (Behroozi et al., 2014).
An alternative physically motivated halo scale is the splashback radius, \(R_{\rm sp}\)(Diemer and Kravtsov, 2014; Adhikari et al., 2014; More et al., 2015). This radius effectively distinguishes infalling material from matter orbiting in the halo. We compare our results to the splashback radius in Figure 7 and find that our estimate of the extent of the H i CGM, \(R_{\rm cross}\), neatly aligns with \(R_{\rm sp}\) over the mass range \(10^{8}<M_{\star}/M_{\odot}<10^{10.5}\). This result implies that \(R_{\rm sp}\) is a better approximation of the CGM extent than the more commonly used virial radius.
O'Neil et al. (2021) compared \(R_{\rm sp}\) as estimated from dark matter and gas profiles in the IllustrisTNG simulations and found that the gas \(R_{\rm sp}\) is consistently smaller than the dark matter \(R_{\rm sp}\). However, they were looking at much more massive halos (\(M_{\rm halo}>10^{13}M_{\odot}\)) in which shocks dominate the gas distribution. Nonetheless, the fact that \(R_{\rm cross}\approx R_{\rm sp}\) at the mass ranges considered here (\(M_{\rm halo}\approx 10^{10-12}M_{\odot}\)) is intriguing. The halo mass accretion rate generally sets whether \(R_{\rm sp}\) exceeds \(R_{\rm vir}\); a rapid accretion rate will impact the growth of the gravitational potential well, leading to \(R_{\rm sp}<R_{\rm vir}\). If the location of \(R_{\rm cross}\) reflects the extent of orbiting gas in a halo, then our observational results imply a halo mass accretion rate that is slow enough to keep the apocenters of orbiting structures at large radii.
Another way of defining the extent of the CGM is to use the boundary of the pressure-supported CGM. For galaxies with halo masses \(\gtrsim 10^{11.5}M_{\odot}\) (\(M_{\star}\approx 10^{9.8}M_{\odot}\)), this pressure support comes from the fact that the gas that has fallen into the gravitational potential well is virially shocked and cannot cool within a Hubble time (Binney, 1977; Rees and Ostriker, 1977; Silk, 1977). For the galaxies in our survey, which are predominately below this halo mass, however, the gas would rapidly cool, and thus this pressure support might come from galactic winds. Fielding et al. (2017) and Lochhaas et al. (2018) show that supernovae winds with reasonable mass loading efficiencies could shock the gas to distances past the virial radius and account for the survival of cool gas at these large radii. Using a more comprehensive model of the multiphase CGM, Fielding and Bryan (2022) show that SF in the galactic disk can slow cooling and accretion as part of a global preventive self-regulation mechanism. In addition, the winds can transport cold clouds to large radii, consistent with these constraints from our combined survey data.
## 6 Summary
Herein, we have examined the associations of galaxies with Ly\(\alpha\) absorption at \(z<0.48\) to explore the spatial profile of this gas and the mass dependence of the profile. Specifically, we have combined the CGM\({}^{2}\) and CASBaH H i measurements and constructed a catalog of 7244 absorber-galaxy pairs around 28 QSO sightlines (6589 absorber-galaxy pairs when we restrict our galaxy sample to galaxies with \(8<\log M_{\star}/M_{\odot}<10.5\)). The CGM\({}^{2}\) survey has better sampling of galaxies at low impact parameter while CASBaH samples galaxies out to 20 cMpc. This allows us to characterize the H i profile via the covering fraction as a tracer of the gas.
1. By modeling the covering fraction as a power law with a mass-dependent length scale, we find good agreement of our clustering amplitude and power-law slope parameters with previous studies, such as T14.
2. In Section 3.1, we find the clustering scale has a mass dependence with a power-law slope of \(\beta^{2h}=0.08\pm 0.03\).
3. We compare the slope of our absorber-galaxy bias to the galaxy-dark matter bias of Tinker et al. (2010). The absorber-galaxy bias is a steeper function of galaxy mass than the galaxy-dark matter bias. However, this difference is only significant at a sub-\(2\sigma\) level.
4. We model the data with an exclusionary two-component model where we adopt an inner-CGM Gaussian profile to describe the data at smaller impact parameters and the customary two-halo single power-law model at larger impact parameters. This model faithfully reproduces the data for galaxies \(M_{\star}>10^{8}M_{\odot}\).
5. The two component model allows us to calculate the crossover radius, \(R_{\rm cross}(M_{\star})\), where the models are equal. \(R_{\rm cross}(M_{\star})\) represents a soft upper estimate of the furthest impact parameter needed to optimally fit the inner CGM component. We then use \(R_{\rm cross}\) as an estimate of the extent of the CGM and find \(R_{\rm cross}(M_{\star})\approx 2\pm 0.6R_{\rm vir}\) for galaxies \(10^{8}\leq M_{\star}/M_{\odot}\leq 10^{10.5}\). Additionally, we find excellent agreement between \(R_{\rm cross}(M_{\star})\) and the splashback radius, \(R_{\rm sp}\) for galaxies in this mass range.
## 7 Acknowledgments
MCW, KT, and JKW acknowledge support for this work from NSF-AST 1812521, NSF-CAREER 2044303, the Research Corporation for Science Advancement, grant ID number 26842. Support for the CASBaH HST programs HST-GO-11741 and HST-GO-13846 was provided through grants from the Space Telescope Science Institute under NASA contract NAS5-26555. The CGM\({}^{2}\) Survey would not have been possible without the substantial contributions from a dedicated group of UW undergraduate Student Quasar Absorption Diagnostics, the Werk SQuAD, with over 50 individual undergraduate research assistants since 2016. The SQuAD confirmed all auto-fitted galaxy spectroscopic redshifts by eye, identified absorption systems along every quasar line of sight, and measured various spectroscopic properties (e.g. SFRs) of the nearly 1000 galaxies included in the survey. We are deeply grateful to work with such motivated and enthusiastic students.
|
2302.13345 | Analysis of Deep Image Quality Models | Subjective image quality measures based on deep neural networks are very
related to models of visual neuroscience. This connection benefits engineering
but, more interestingly, the freedom to optimize deep networks in different
ways, makes them an excellent tool to explore the principles behind visual
perception (both human and artificial). Recently, a myriad of networks have
been successfully optimized for many interesting visual tasks. Although these
nets were not specifically designed to predict image quality or other
psychophysics, they have shown surprisingly human-like behavior. The reasons for
this remain unclear.
In this work, we perform a thorough analysis of the perceptual properties of
pre-trained nets (particularly their ability to predict image quality) by
isolating different factors: the goal (the function), the data (learning
environment), the architecture, and the readout: selected layer(s), fine-tuning
of channel relevance, and use of statistical descriptors as opposed to plain
readout of responses.
Several conclusions can be drawn. All the models correlate better with human
opinion than SSIM. More importantly, some of the nets are on par with the
state-of-the-art with no extra refinement or perceptual information. Nets
trained for supervised tasks such as classification correlate substantially
better with humans than LPIPS (a net specifically tuned for image quality).
Interestingly, self-supervised tasks such as jigsaw also perform better than
LPIPS. Simpler architectures are better than very deep nets. In simpler nets,
correlation with humans increases with depth as if deeper layers were closer to
human judgement. This is not true in very deep nets. Consistently with reports
on illusions and contrast sensitivity, small changes in the image environment
do not make a big difference. Finally, the explored statistical descriptors
and concatenations had no major impact. | Pablo Hernández-Cámara, Jorge Vila-Tomás, Valero Laparra, Jesús Malo | 2023-02-26T16:36:28Z | http://arxiv.org/abs/2302.13345v1 | # Analysis of Deep Image Quality Models
###### Abstract
Subjective image quality measures based on deep neural networks are closely related to models of visual neuroscience. This connection benefits engineering but, more interestingly, the freedom to optimize deep networks in different ways makes them an excellent tool to explore the principles behind visual perception (both human and artificial). Recently, a myriad of networks have been successfully optimized for many interesting visual tasks. Although these nets were not specifically designed to predict image quality or other psychophysics, they have shown surprisingly human-like behavior. The reasons for this remain unclear.
In this work, we perform a thorough analysis of the perceptual properties of pre-trained nets (in particular their ability to predict image quality) by isolating different factors: the goal (the function), the data (the learning environment), the architecture, and the readout: selected layer(s), fine-tuning of channel relevance, and use of statistical descriptors as opposed to plain readout of responses.
Several conclusions can be drawn. All the studied models correlate better with human opinion than SSIM (a _de facto_ standard). More importantly, some of the nets are on par with the state of the art with no extra refinement nor perceptual information. Nets trained for supervised tasks such as classification correlate substantially better with humans than LPIPS (a recent net specifically tuned for image quality). Interestingly, self-supervised tasks such as jigsaw also perform better than LPIPS. Simpler architectures are better than very deep nets. In simpler nets, correlation with humans increases with depth, as if deeper layers were closer to human judgement. This is not true in very deep nets. Consistent with reports on illusions and contrast sensitivity, small changes in the image environment do not make a big difference in performance. Finally, the explored statistical descriptors and concatenations had no major impact.
## 1 Introduction
Humans have evolved to adapt our visual system to the statistics of nature. Neural networks learn the statistics of the data they have been trained on [9]. However, how human perception and machine learning models trained with natural images are related is an open issue. There is a long tradition of relating the human visual system to machine learning models, and there are many options to relate them. One option is trying to reproduce fundamental signatures of the visual system that have been directly measured, such as contrast sensitivity functions (CSF) [15, 1]. Another option, which we explore here, is the image quality assessment (IQA) problem, which tries to predict image distances in a similar way to what a human would do.
A classical approach to predicting human perception is to use biologically plausible models to define the quality metrics [8]. However, this is not mandatory and it is possible to approach the problem from other perspectives. For many years, statistically based quality methods have been widely used and awarded [24]. In the last decade, features extracted from deep learning models have been used to calculate the distances without regard for the biological plausibility of the architecture (LPIPS [26] and DISTS [5]). These models use convolutional feature extractors (pre-trained to perform classification) as a backbone to define image quality metrics in their inner domain, and fine-tune with a small amount of perceptual data to maximize the correlation with human perception. Results from these models correlate surprisingly well with human perception and they have become state-of-the-art in image quality assessment. However, the reason why these models designed for other tasks correlate so well with human perception remains unclear.
In recent years, some studies tried to analyze different aspects of this relation between machine learning models and human perception [9, 15, 1, 14]. They found not only that low-level tasks have better properties than high-level tasks for reproducing the human CSF, but also that models with
higher accuracy on ImageNet classification perform worse when correlating with human perception. However, there are still many open issues in clarifying these behaviors.
In this work, we analyze the perceptual properties of already-trained models when facing image quality problems. We analyze how different factors such as architecture, objective, data, and ways of computing the distance (output depth, output concatenation, channel relevance fine-tuning, and using statistics) affect the correlation between human perception and model distances. Table 1 shows a summary of the different tested factors. To the best of our knowledge, this is the first work that analyzes how well the extracted features (layer by layer) correlate with human perception.
## 2 Related work
Since its appearance in 2004, SSIM has been the standard model for predicting image distances as close to human judgements as possible [24]. It is based on calculating statistical descriptors of image patches and comparing them to obtain a distance measurement. However, SSIM was recently surpassed by models based on pre-trained artificial neural networks. For example, LPIPS [26] concatenates features of different VGG layers. Once an image and its distorted version have passed through the network and the outputs have been read, each feature is weighted to maximize the correlation with human perception. More recently, in 2020, DISTS [5] unified both SSIM and LPIPS. It uses different layers of VGG to extract features from the images, and computes statistical descriptors as in SSIM to compare them in order to get a distance measurement between an image and its distorted version.
These IQA models based on neural networks correlate surprisingly well with human perception, although the reason is not known. In the last years, some works have tried to explore and understand this relation between deep learning models and human perception. On the one hand, some studies examined fundamental signatures of the human visual system to check whether deep learning models share them. They show that human-like CSFs appear more in low/middle-level tasks than in high-level objectives [1, 15]. They also show that, depending on the training objective, CSFs appear more in some layers than in others. This shows that, as expected, neither all training objectives nor all layers of deep learning models have the same properties, and that some of them are more closely related to human perception than others.
| Architecture | Goal | Training data | Read-out | Fine-tuning |
| --- | --- | --- | --- | --- |
| AlexNet; VGG-16; ResNet-50; DenseNet-121; EfficientNet-B0 | Supervised | ImageNet-1K | Euclidean, no concat. | No |
| AlexNet | Supervised; Self-Supervised RotNet; Self-Supervised Jigsaw; Self-Supervised Colorization; Self-Supervised DeepCluster | ImageNet-1K | Euclidean, no concat. | No |
| AlexNet | Supervised | ImageNet-1K; Places-365; Cifar-10 | Euclidean, no concat. | No |
| AlexNet | Supervised | ImageNet-1K | Euclidean, no concat.; Euclidean, concat.; Means, no concat.; Means, concat.; Means-sigmas, concat.; Gram, no concat. | No |
| AlexNet | Supervised | ImageNet-1K | Euclidean, no concat. | No; TID-2008; train-KADID-10K |

Table 1: Summary of the factors analyzed. Each column corresponds to one model factor; each row corresponds to one experiment, with the options explored for the varied factor separated by semicolons.
On the other hand, other works explore the relation between the classification accuracy of deep learning models and human perception. They show that there is an inverse-V relation between ImageNet accuracy and correlation with human perception in IQA problems [14], where models with very high and very low accuracies have lower correlation with humans.
## 3 Method
Here we describe the databases and models we used in our experiments and the different ways we employed to calculate distances between images in the inner representation of a certain model.
### Databases and models
We restrict ourselves to models already trained by third parties in order to avoid dependence on our own architecture design or training procedures. Following this, we used different pre-trained models, trained on the ImageNet [4], Places-365 [28] and Cifar-10 [12] databases, in supervised or self-supervised ways. More particularly, for the supervised models, we used AlexNet [13], VGG-16 [21], DenseNet-121 [10], ResNet-50 [7] and EfficientNet-B0 [22]. We downloaded AlexNet and EfficientNet-B0 from TorchVision [17] and VGG-16, DenseNet-121 and ResNet-50 from Keras [3]. We downloaded all of them with ImageNet pre-trained weights. We also downloaded the supervised AlexNet model pre-trained on Places-365 from MIT CSAIL Computer Vision [27] and pre-trained on Cifar-10 from [20]. For the self-supervised models, we select one architecture (AlexNet) and use different self-supervised training goals with ImageNet images. We tested the RotNet [11], Jigsaw [18], Colorization [25] and DeepCluster [2] tasks. We downloaded all the self-supervised models from Facebook research VISSL [6]. Table 2 shows a summary of the supervised and self-supervised models trained with ImageNet-1K data and their ImageNet-1K Top-1 accuracy.
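A minimal sketch of the feature read-out used throughout is shown below, here with TorchVision's ImageNet-pretrained AlexNet and forward hooks (the weights enum assumes a recent TorchVision version); the random tensor stands in for a real 224x224, ImageNet-normalized image batch.

```python
import torch
from torchvision import models

# ImageNet-pretrained AlexNet from TorchVision, frozen for read-out
net = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1).eval()

# Record the output of every layer in the convolutional trunk
feats = {}
def make_hook(name):
    def hook(module, inputs, output):
        feats[name] = output.detach()
    return hook

for i, layer in enumerate(net.features):
    layer.register_forward_hook(make_hook(f"features.{i}"))

# Stand-in for a real batch: images should be resized to 224x224 and
# normalized with the usual ImageNet mean/std before this step.
x = torch.rand(1, 3, 224, 224)
with torch.no_grad():
    net(x)
print({k: tuple(v.shape) for k, v in feats.items()})
```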
To test the models described previously we used two image quality databases: TID-2013 [19] and KADID-10K [16]. Both consist of pairs of images and distorted versions of the same image, with a mean opinion score (MOS) for each image - distorted image pair, which represents the distance between them as estimated by humans. More particularly, we used the whole TID-2013 (3000 image pairs) and \(30\%\) of KADID-10K, which we call val-KADID-10K (3038 image pairs), for testing the different models.
We also tested weighting the features extracted by the models (what we call fine-tuning), similarly to LPIPS. We used TID-2008 (1700 image pairs) and the remaining \(70\%\) of KADID-10K, train-KADID-10K (7087 image pairs), to weight each output feature in order to maximize the correlation with the MOS in these training databases.
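A hedged sketch of this feature-weighting step with toy tensors follows. Since the Spearman correlation is not differentiable, the sketch optimizes a Pearson-correlation proxy by gradient descent; that proxy, the non-negativity clamp, and the optimizer settings are assumptions of the illustration, not necessarily the exact procedure used by LPIPS or in our fine-tuning.

```python
import torch

def weighted_distance(fx, fy, w):
    """LPIPS-style distance with one non-negative weight per feature
    channel; fx, fy have shape (N, C)."""
    return torch.sqrt(((fx - fy)**2 * w.clamp(min=0)).sum(dim=1) + 1e-12)

def pearson(a, b):
    a = a - a.mean()
    b = b - b.mean()
    return (a * b).sum() / (a.norm() * b.norm() + 1e-12)

# Toy read-out features and MOS; real inputs come from TID-2008 or
# train-KADID-10K pairs passed through a frozen backbone.
torch.manual_seed(0)
fx, fy = torch.randn(64, 256), torch.randn(64, 256)
mos = torch.rand(64)

w = torch.ones(256, requires_grad=True)
opt = torch.optim.Adam([w], lr=1e-2)
for _ in range(200):
    opt.zero_grad()
    d = weighted_distance(fx, fy, w)
    # MOS and distances are anti-correlated, so maximize |corr|
    loss = 1.0 - pearson(d, mos).abs()  # differentiable proxy for Spearman
    loss.backward()
    opt.step()
```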
### Distance measurement and correlation
There are several ways to measure the distance between two images. For example, let \(x_{0}\in\mathbb{R}^{(H,W,C)}\) and \(y_{0}\in\mathbb{R}^{(H,W,C)}\) denote an image and its distorted version, and \(\hat{x}^{l},\hat{y}^{l}\in\mathbb{R}^{(H^{l},W^{l},C^{l})}\) their feature maps at the \(l^{th}\) layer of a network. Then, their euclidean distance is just:
\[d^{l}(x_{0},y_{0})=\sqrt{\sum_{H^{l},W^{l},C^{l}}(\hat{x}^{l}-\hat{y}^{l})^{2}} \tag{1}\]
However, it is also possible to statistically summarize the layer output before computing the distance. Here, we used three different ways of summarizing the layer output. First, we can use the mean of each feature:
\[d^{l}_{\mu}(x_{0},y_{0})=\sqrt{\sum_{C^{l}}(\hat{\mu}^{l}_{x}-\hat{\mu}^{l}_{y })^{2}} \tag{2}\]
where \(\hat{\mu}^{l}_{x}\in\mathbb{R}^{C^{l}}\) and \(\hat{\mu}^{l}_{y}\in\mathbb{R}^{C^{l}}\) are the spatial averages of the outputs of the \(l^{th}\) layer for the image \(x_{0}\) and its distorted version \(y_{0}\). In this case, we compute the spatial averages and calculate the distance with them. Second, we can use not only the spatial averages but also the spatial standard deviations. In this case, we concatenate the spatial averages and standard deviations before computing the distance:
\[d^{l}_{\mu,\sigma}(x_{0},y_{0})=\sqrt{\sum_{C^{l}}(concat(\hat{\mu}^{l}_{x}, \hat{\sigma}^{l}_{x})-concat(\hat{\mu}^{l}_{y},\hat{\sigma}^{l}_{y}))^{2}} \tag{3}\]
where \(\hat{\mu}^{l}_{x}\in\mathbb{R}^{C^{l}}\), \(\hat{\mu}^{l}_{y}\in\mathbb{R}^{C^{l}}\) and \(\hat{\sigma}^{l}_{x}\in\mathbb{R}^{C^{l}}\), \(\hat{\sigma}^{l}_{y}\in\mathbb{R}^{C^{l}}\) are the spatial averages and standard deviations of the outputs of the \(l^{th}\) layer for the image \(x_{0}\) and its distorted version \(y_{0}\), respectively. Finally, we can summarize the outputs through their Gram matrix:
\[d^{l}_{G}(x_{0},y_{0})=\sqrt{\sum_{C^{l}}(\hat{G}^{l}_{x}-\hat{G}^{l}_{y})^{2}} \tag{4}\]
| Architecture | Training process | ImageNet Top 1 |
| --- | --- | --- |
| AlexNet | Supervised | 56.5% |
| VGG-16 | Supervised | 71.3% |
| DenseNet-121 | Supervised | 75.0% |
| ResNet-50 | Supervised | 74.9% |
| EfficientNet-B0 | Supervised | 77.7% |
| AlexNet | Self-supervised (RotNet) | 39.5% |
| AlexNet | Self-supervised (Jigsaw) | 34.8% |
| AlexNet | Self-supervised (Colorization) | 30.4% |
| AlexNet | Self-supervised (DeepCluster) | 37.9% |

Table 2: Summary of the tested models, their training process, and their ImageNet Top-1 accuracy.
where \(\hat{G}^{l}_{x}\in\mathbb{R}^{(C^{l},C^{l})}\) and \(\hat{G}^{l}_{y}\in\mathbb{R}^{(C^{l},C^{l})}\) are the Gram matrices of the outputs at the \(l^{th}\) layer for the image \(x_{0}\) and its distorted version \(y_{0}\).
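The four read-outs of Equations (1)-(4) translate directly into code. In the sketch below the feature maps are toy tensors of shape (C, H, W), and the Gram matrix is normalized by the number of spatial positions, a common convention; Equation (4) leaves the normalization implicit.

```python
import torch

def d_euclid(fx, fy):
    """Eq. (1): euclidean distance over the full (C, H, W) output."""
    return torch.sqrt(((fx - fy)**2).sum())

def d_means(fx, fy):
    """Eq. (2): distance between per-channel spatial means."""
    return torch.sqrt(((fx.mean(dim=(1, 2)) - fy.mean(dim=(1, 2)))**2).sum())

def d_means_sigmas(fx, fy):
    """Eq. (3): distance between concatenated spatial means and stds."""
    sx = torch.cat([fx.mean(dim=(1, 2)), fx.std(dim=(1, 2))])
    sy = torch.cat([fy.mean(dim=(1, 2)), fy.std(dim=(1, 2))])
    return torch.sqrt(((sx - sy)**2).sum())

def gram(f):
    """Channel-by-channel Gram matrix, normalized by spatial size."""
    v = f.reshape(f.shape[0], -1)
    return v @ v.t() / v.shape[1]

def d_gram(fx, fy):
    """Eq. (4): distance between Gram matrices."""
    return torch.sqrt(((gram(fx) - gram(fy))**2).sum())

# Toy feature maps for one layer: (C, H, W)
fx, fy = torch.randn(64, 28, 28), torch.randn(64, 28, 28)
for d in (d_euclid, d_means, d_means_sigmas, d_gram):
    print(d.__name__, float(d(fx, fy)))
```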
Inspired by some of the SOTA image quality models [26], one can also concatenate the outputs of different layers in order to introduce more information to calculate the distance. Not only that, it is also possible to weight (fine-tune) the output features of a model (concatenating or not) so that the correlation with a specific database is maximized.
Summing up, to obtain the results we pass each image - distorted image pair through the different models and record the outputs at different layers. Then, we calculate the distance between them at the different layers using one of the distance definitions from above (\(d^{l}\), \(d^{l}_{\mu}\), \(d^{l}_{\mu,\sigma}\) or \(d^{l}_{G}\); concatenating or not the outputs of different layers; weighting or not the outputs). Finally, we calculate the Spearman correlation between the distances of the model at the different layers and the experimental MOS.
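The final step is a one-liner with SciPy, sketched below with random stand-ins for the per-layer distances and the MOS; the absolute value is reported since distances and quality scores are anti-correlated.

```python
import numpy as np
from scipy.stats import spearmanr

def layer_correlation(distances, mos):
    """|Spearman rho| between one layer's distances and the MOS."""
    rho, _ = spearmanr(distances, mos)
    return abs(rho)

# Random stand-ins: one distance per image pair, per layer
rng = np.random.default_rng(0)
distances = {"features.2": rng.random(3000), "features.12": rng.random(3000)}
mos = rng.random(3000)
print({layer: layer_correlation(d, mos) for layer, d in distances.items()})
```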
## 4 Experiments and results
### Architecture
In our first experiment, we tested how different architectures (AlexNet, VGG-16, ResNet-50, DenseNet-121 and EfficientNet-B0) correlate with human perception. Here we fixed the goal of the models (the supervised ImageNet classification task), the training data (ImageNet-1K), and the way we perform the read-out to calculate the distance (euclidean: \(d^{l}(x_{0},y_{0})\)), and we did not weight the output features, so we just calculate the euclidean distance in the inner domain of each layer.
Figure 1 shows how different architectures correlate with human perception (MOS) at different depths (different layers) for TID-2013 and val-KADID-10K.
There are several results to notice from this figure. First, all the models perform better than classical statistical image quality models (SSIM [24]: black solid line). Also, simpler models (AlexNet and VGG-16) perform better than modern image quality algorithms based on neural networks (DISTS [5], LPIPS [26]: black dashed lines). Only some specific biologically inspired image quality algorithms [8] perform better than the majority of the models on TID-2013. Second, simpler models achieve higher correlations. Specifically, AlexNet and VGG-16, which do NOT have skip connections, achieve higher correlation with human perception than modern models with higher ImageNet accuracies. The fact that skip connections hurt the models is in agreement with results from style transfer, where it was found that ResNet with skip connections performs worse than networks without skip connections such as VGG [23]. Third, in the simplest models (AlexNet and VGG-16), there is a relationship between the depth of the layer used to measure image distances and the correlation with perception, with higher correlations obtained in deeper layers. However, both models show a decrease in correlation for the last two layers. More complicated models (ResNet-50, DenseNet and EfficientNet) have much more complex correlation diagrams with depth, not showing a clear relation between depth and correlation.
This suggests that image quality algorithms that use deep learning models should use classical networks without skip connections even if they do not achieve high accuracy in ImageNet classification.
### Goal function
In the last years, some studies showed that human-like CSFs emerge more in models trained for low-level tasks, such as denoising autoencoders [15, 1]. However, what about self-supervised tasks? Here we select the network architecture that obtained the highest correlation in the supervised scenario, AlexNet, and check whether some self-supervised goals achieve higher correlation with human perception. To do this, we fixed the architecture (AlexNet), the training data (ImageNet-1K), and the way we perform the read-out to calculate the distance (euclidean, \(d^{l}(x_{0},y_{0})\)), and we did not weight the features, so that we just calculate the euclidean distance in the inner domain of each layer for AlexNet trained for different goals.
Figure 2 shows how AlexNet trained with different objectives correlates with human perception (MOS) at different layers for TID-2013 and val-KADID-10K.
While all training objectives have good perceptual properties, some have better properties than others. In fact, all the models achieve higher correlation than SSIM. We find that the supervised model and the RotNet model obtain the highest correlations. However, it is important to highlight that the RotNet model was pre-trained on the supervised ImageNet task. Among the other self-supervised tasks, Jigsaw obtains correlations at the level of the supervised model only in its first layers but, as depth increases, the correlation goes down. The colorization goal shows a small linear increase in correlation with layer depth, but it always has lower correlation than the supervised one. Finally, the DeepCluster model always remains at almost the same correlation level as the RGB (input) domain, which is just the correlation of the RMSE.
This suggests that image quality algorithms based on deep learning models should not use models trained on self-supervised tasks, but rather models trained on supervised or low-level tasks.
### Training data and learning environment
We check how human perception correlates with the distances computed from networks trained on different data. We fixed the architecture that obtained the best correlation (AlexNet) and the training objective (supervised). We also kept fixed the read-out used to calculate the distance (Euclidean, \(d^{l}(x_{0},y_{0})\)), and we did not weight the features. The only difference between the models is the data used to train them, and we analyze how the correlation depends on training on ImageNet-1K, Places-365 and Cifar-10. Figure 3 shows how different training data correlate with human perception (MOS) at different layers for TID-2013 and val-KADID-10K.
There are no big differences between the datasets used in the training process. However, the best result is obtained with ImageNet-1K, which comprises around 1 million natural images. Places-365 (ten million images of places) and Cifar-10 (50K small natural images) achieve lower correlation. This implies that not using natural images, or using smaller or fewer images, hurts the model only slightly.
This suggests that image quality algorithms based on deep learning models should be trained on images that are as natural and as large as possible, while the quantity of data is less important.
### Readout strategies and statistical descriptors
Here we tested how the read-out affects the correlation. To test it, we use the AlexNet model trained in a supervised way on ImageNet-1K and calculate the correlation with human perception using different ways of computing the distances in the model's inner domain. We tested not only the different distance definitions (summarizing the layer outputs with statistics or not), but we also checked what happens when we concatenate the outputs of the three max pooling layers. Figure 4 shows how the different distance measurements correlate with human perception (MOS) at different layers for TID-2013 and val-KADID-10K.
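As a concrete illustration, the sketch below implements the four read-outs compared here for a feature map of shape (C, H, W): the full output, the spatial means, the spatial means together with the spatial standard deviations, and the spatial Gram matrix. The function name and the `mode` argument are our own, introduced only for this illustration.

```python
# Sketch of the alternative read-outs: distances between two feature maps
# a, b of shape (C, H, W), with or without statistical descriptors.
import torch

def descriptor_distance(a, b, mode="full"):
    if mode == "full":        # distance on the whole output, no descriptor
        return torch.linalg.vector_norm(a - b)
    if mode == "mean":        # spatial mean per channel
        return torch.linalg.vector_norm(a.mean(dim=(1, 2)) - b.mean(dim=(1, 2)))
    if mode == "mean_std":    # spatial mean and standard deviation per channel
        sa = torch.cat([a.mean(dim=(1, 2)), a.std(dim=(1, 2))])
        sb = torch.cat([b.mean(dim=(1, 2)), b.std(dim=(1, 2))])
        return torch.linalg.vector_norm(sa - sb)
    if mode == "gram":        # spatial Gram matrix (C x C), Frobenius distance
        fa, fb = a.flatten(1), b.flatten(1)
        return torch.linalg.matrix_norm(fa @ fa.T - fb @ fb.T)

a, b = torch.rand(64, 28, 28), torch.rand(64, 28, 28)
for mode in ("full", "mean", "mean_std", "gram"):
    print(mode, float(descriptor_distance(a, b, mode)))
```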
First, Figure 4 shows that the best correlation is obtained when the whole output is used to calculate the distances, without any statistical descriptor. When statistical descriptors are used, the spatial Gram matrix performs worst. Using the spatial mean together with the spatial standard deviation leads to higher correlation than using only the spatial means, because more information is used to calculate the distances. With regard to concatenating different layer outputs (as in LPIPS [26] and DISTS [5]), it does not have a big effect.

Figure 1: TID-2013 (left) and val-KADID-10K (right) Spearman correlation at different model depths (different layers) for different model architectures. Note that each model has a different number of layers, so to plot them all together the x-axis represents the percentage of the network depth. Colors represent the different AlexNet blocks, which achieve the highest correlation in the second half of the network. Some published IQA model results are shown in black (solid and dashed lines).

Figure 2: TID-2013 (left) and val-KADID-10K (right) Spearman correlation at different model depths (different layers) for different goals. Colors represent the different AlexNet blocks. Some IQA model results are shown in black solid and dashed lines.
This suggests that image quality algorithms that use deep learning models should utilize the full output of the layers, without any statistical descriptor. Also, concatenating the outputs of different layers seems to bring no benefit while increasing the computational complexity.
### Fine-tuning strategies
Finally, we analyze what happens when we weight each feature to maximize the correlation on some database. In this scenario we take the model that gives the highest correlation (AlexNet), trained in a supervised way on ImageNet-1K. We used only the Euclidean read-out, without any statistical descriptor, with and without concatenating the layer outputs. Figure 5 shows how the different ways of weighting the features correlate with human perception. More specifically, we fine-tune the feature weights on TID-2008 and train-KADID-10K, and we test at different layers on TID-2013 and val-KADID-10K.
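A minimal sketch of this weighting stage is given below, assuming spatially pooled feature vectors per image pair. The exact objective optimized in the paper is a rank correlation with the human scores; here we use a differentiable Pearson surrogate as a stand-in, and `pairs` and `mos` are toy placeholders, so the whole block is an assumption-laden illustration rather than the actual training code.

```python
# Sketch of the fine-tuning stage: one non-negative weight per channel,
# optimized so the weighted distances correlate with the human scores.
import torch

def weighted_distance(a, b, w):
    # a, b: (C,) spatially pooled features of the two images at one layer
    return torch.sqrt((torch.relu(w) * (a - b) ** 2).sum())

C = 256
torch.manual_seed(0)
pairs = [(torch.randn(C), torch.randn(C)) for _ in range(50)]  # toy features
mos = torch.randn(50)                                          # toy MOS values

w = torch.ones(C, requires_grad=True)
opt = torch.optim.Adam([w], lr=1e-2)
for _ in range(200):
    d_pred = torch.stack([weighted_distance(a, b, w) for a, b in pairs])
    dc, mc = d_pred - d_pred.mean(), mos - mos.mean()
    # Pearson surrogate: maximize |correlation| between distances and MOS
    loss = -(dc @ mc).abs() / (dc.norm() * mc.norm() + 1e-8)
    opt.zero_grad(); loss.backward(); opt.step()
```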
The first conclusion is that fine-tuning on TID-2008 and evaluating on TID-2013 increases the correlation, which makes sense because the images are similar. The same occurs when fine-tuning on train-KADID-10K and evaluating on val-KADID-10K. More interesting results appear when we perform cross fine-tuning: fine-tuning on train-KADID-10K produces no substantial change when evaluating on TID-2013. However, fine-tuning on TID-2008 gives much worse results when evaluating on val-KADID-10K.
Figure 4: TID-2013 (left) and val-KADID-10K (right) Spearman correlation at different model depths (different layers) for different options to calculate the distance: just the euclidean distance with the full output, using statistical descriptors and/or concatenating the outputs of different layers. Colors represent the different AlexNet blocks. Some IQA model results are shown in black solid and dashed lines.
Figure 3: TID-2013 (left) and val-KADID-10K (right) Spearman correlation at different model depths (different layers) for different training data. Colors represent the different AlexNet blocks. Some IQA model results are shown in black solid and dashed lines.
### Relation between ImageNet-1K classification accuracy and perceptual correlation
Finally, we compare the ImageNet-1K classification accuracy with the maximum correlation obtained on TID-2013 and val-KADID-10K. Figure 6 shows the relation between the ImageNet-1K classification accuracy of the supervised and self-supervised models and the Spearman correlation.
It is interesting to note that, for the supervised models, there is an inverse relation between classification accuracy and the maximum correlation obtained: the simpler the model, the better the correlation. For the self-supervised models, in contrast, there is a direct relation between classification accuracy and correlation.
## 5 Conclusion
In this work, we explore the perceptual properties of deep learning models. We restrict ourselves to models already trained by third parties in order to avoid dependence on architecture design or training procedures. Following this idea, we analyze the perceptual properties using two classical image quality databases accepted by the community [19] and [16].
Multiple factors are analyzed, and conclusions can be drawn for each of them:
* Different models: All models are better than SSIM at predicting human perception. Simpler models have better perceptual behavior than complex models and outperform the usual image quality metrics.
* Training objectives: While all training objectives have good perceptual properties, there are some that have better properties than others. In particular, the best results are obtained by the model trained for classification.
* Amount of training data: Differences in the amount of data used for training did not have a big effect on perceptual behavior, but training the models on over a million large natural images seems beneficial.
* Concatenating outputs and statistical descriptors: Concatenating the outputs of different layers (as in LPIPS [26] and DISTS [5]) rather than taking the output of a single layer does not have a big effect on the correlation. The use of statistical descriptors such as the channel mean, the channel mean and standard deviation, or the Gram matrix has a negative effect on the correlations.
* Fine-tuning the measures: Fine-tuning the channel relevance on a particular image quality database seems to have a negative effect when measuring correlation on a different database. A relevant outcome is that, even without fine-tuning, the correlation of already trained models surpasses most image quality metrics; in fact, they surpass LPIPS [26] and DISTS [5].
* The best behavior is obtained by the fifth layer of the original AlexNet model: trained for classification, without using fine-tuning on perceptual data, without concatenating outputs, and without using statistics.
|
2301.05077 | Incorporating time-dependent demand patterns in the optimal location of
capacitated charging stations | A massive use of electric vehicles is nowadays considered to be a key element
of a sustainable transportation policy and the availability of charging
stations is a crucial issue for their extensive use. Charging stations in an
urban area have to be deployed in such a way that they can satisfy a demand
that may dramatically vary in space and time. In this paper we present an
optimization model for the location of charging stations that takes into
account the main specific features of the problem, in particular the different
charging technologies, and their associated service time, and the fact that the
demand depends on space and time. To measure the importance of incorporating
the time dependence in an optimization model, we also present a simpler model
that extends a classical location model and does not include the temporal
dimension. A worst-case analysis and extensive computational experiments show
that ignoring the temporal dimension of the problem may lead to a substantial
amount of unsatisfied demand. | Carlo Filippi, Gianfranco Guastaroba, Lorenzo Peirano, M. Grazia Speranza | 2023-01-12T15:21:43Z | http://arxiv.org/abs/2301.05077v1 | Incorporating time-dependent demand patterns in the optimal location of capacitated charging stations
###### Abstract
A massive use of electric vehicles is nowadays considered to be a key element of a sustainable transportation policy and the availability of charging stations is a crucial issue for their extensive use. Charging stations in an urban area have to be deployed in such a way that they can satisfy a demand that may dramatically vary in space and time. In this paper we present an optimization model for the location of charging stations that takes into account the main specific features of the problem, in particular the different charging technologies, and their associated service time, and the fact that the demand depends on space and time. To measure the importance of incorporating the time dependence in an optimization model, we also present a simpler model that extends a classical location model and does not include the temporal dimension. A worst-case analysis and extensive computational experiments show that ignoring the temporal dimension of the problem may lead to a substantial amount of unsatisfied demand.
_Keywords:_ Facility location, Charging stations, Electric vehicles, Demand patterns, Time-dependent optimization.
## 1 Introduction
Sustainable transportation is one of the major challenges that modern countries are facing. Several sources indicate that the transportation sector generates the largest share of Greenhouse Gas (GHG) emissions. According to the United States Environmental Protection Agency1, in 2020 the transportation sector produced 27% of the total GHG emissions in the US, mostly generated from burning fossil fuels by cars, trucks, ships, trains, and planes. Domestic statistics issued by the UK government2 confirm that the transportation sector generated 27% of the total GHG emissions. The majority (91%) came from road transport vehicles, where the biggest contributors were cars and taxis. Furthermore, data provided by the European Environment Agency3 highlight that in the EU more than 22% of the GHG emissions came from the transportation sector.
Footnote 1: [https://www.epa.gov/ghgemissions/sources-greenhouse-gas-emissions](https://www.epa.gov/ghgemissions/sources-greenhouse-gas-emissions)
Footnote 2: [https://www.gov.uk/government/statistics/transport-and-environment-statistics-autumn-2021/transport-and-environment-statistics-autumn-2021](https://www.gov.uk/government/statistics/transport-and-environment-statistics-autumn-2021/transport-and-environment-statistics-autumn-2021)
Footnote 3: [https://www.eea.europa.eu/data-and-maps/data/data-viewers/eea-greenhouse-gas-projections-data-viewer](https://www.eea.europa.eu/data-and-maps/data/data-viewers/eea-greenhouse-gas-projections-data-viewer)
Although technical advances have made available a range of options for sustainable mobility, there are still important obstacles to overcome for their mass adoption. Among such options, Electric Vehicles (EVs) are considered one of the major directions to reduce the environmental impact of people's mobility and make urban areas more sustainable. In the 2021 edition of the Global EV Outlook4, the International Energy Agency pointed out that at the end of 2020 the global EV stock hit 10 million units, with 3 million newly registered EVs. Europe was the fastest growing market, with a sales share equal to 10% and some leading countries, such as Norway, which registered a record-high sales share of 75%. This trend was accelerated by many countries of the European Union through substantial financial incentives. However, the decision of potential EV buyers is still strongly affected by two major issues. On one hand, the purchase cost of an EV is still higher than that of a traditional internal combustion engine vehicle. On the other hand, the limited travel range of an EV and the long charging time are well known to generate anxiety in potential buyers (e.g., Pevec et al., 2020). In fact, the willingness of drivers to purchase an EV strongly depends on the availability of charging stations near their points of interest (e.g., home and work). As the number of charging stations is growing, thanks to public and private investments, the location problem of such stations has attracted much attention (see Section 2).
Footnote 4: [https://www.iea.org/reports/global-ev-outlook-2021](https://www.iea.org/reports/global-ev-outlook-2021)
There are a number of factors that make the location of charging stations substantially different from other, more classical, location problems, in particular the choice of the charger to install (e.g., slow, quick, fast), and the characteristics of the charging demand.
The type of charger is a key factor to take into account, as it impacts the charging time. As of the end of 2021, there exist three main types of chargers (see Moloughney, 2021). Level 1 chargers, also referred to as _slow_ chargers, use common 120-volt outlets and can take up to 40 hours to raise the level of a standard battery EV (with a 60 kWh battery) from 10% to 80% of the capacity. These chargers are most suitable for private usage. Level 2 chargers, sometimes called _quick_ chargers, can charge up to 10 times faster than a level 1 charger, and are the most commonly used type for daily EV charging (see Moloughney, 2021). Given the same battery characteristics mentioned above, the charging time is about 4.5 hours. Level 3 or _fast_ chargers can reduce the charging time to 40 minutes or even less. For a comprehensive study regarding the state of the art on charging stations, the interested reader can refer to Pareek et al. (2020). The type of charger demanded by EVs is affected by the urban layout. For example, slow chargers will be demanded in residential areas, so that EVs can be recharged overnight at low cost (an interesting study of the factors influencing the charging demand is provided in Wolbertus et al., 2018).
In classical location models a customer is characterized by the distance from any potential location and by a single quantity, a measure of the demand. These models do not consider a temporal dimension of the problem, which basically corresponds to assuming that the demand is uniformly distributed over the time period of interest of the location decision. On the contrary, the charging demand of EVs fluctuates over time, with demand peaks in periods where the traffic volume is high. Neglecting the demand dynamics may lead to solutions where the deployed charging capacity is not sufficient to satisfy the demand during peak times.
In this paper, we study the problem of determining an optimal deployment of charging stations for EVs within an urban environment. Different types of chargers have to be located in pre-defined potential locations, modeled as nodes of a network. The urban area is partitioned in sections. A customer is associated with each section of the urban area. Its demand in a certain time interval is the number of EVs in that section that need to be recharged. The customer is located in the
center of gravity of the section and is modeled as a node of the network. The urban area is also partitioned in zones (e.g., commercial, industrial, or residential) which have different needs in terms of minimum number of each type of charger deployed in the zone.
We have to determine, for each type of charger and each potential location, the number of chargers to be deployed. Two criteria have a key role in this location problem: the cost of installing the chargers and the distance the customers have to travel to be recharged.
We present, over a discretized time horizon, an optimization model that introduces a temporal dimension which, to the best of our knowledge, has never been introduced in the literature on location problems and captures the dynamics of the charging demand. Assuming that a charger can take more than one period to fully recharge an EV, the proposed multi-period formulation includes constraints to keep track of the usage of chargers across consecutive time periods and to ensure that no other vehicles are assigned to any occupied charger. This novel approach guarantees a correct sizing of the solution, in terms of number of stations opened and number of chargers installed, and ensures that the demand is completely satisfied in all time periods. In order to assess the value of introducing the temporal dimension in the location problem, which makes the optimization model more complex, we present a single-period optimization model that captures the same specificities of the problem but ignores the temporal aspect. In both models, the objective is the minimization of a convex combination of two terms: the total cost of deploying the charging stations and installing the chargers, and the average distance traveled by the customers to reach the assigned charging station. The two optimization models turn out to be Mixed Integer Linear Programming (MILP) problems. We compare the two models through a theoretical and a computational analysis. We show, through worst-case analysis, that a solution to the single-period model may fail to satisfy a large portion of the charging demand. Extensive computational experiments are run on different classes of randomly generated instances. The results confirm the importance of explicitly considering the dependence on time of the demand. In fact, the single-period model is based on the common assumption that the charging demand is uniformly distributed across the planning horizon. In an application context such as the one at hand, where the demand fluctuates significantly during the day and across different zones of the same urban area, the single-period model produces solutions that are not capable of serving a large portion of the charging demand, especially in those time periods where the demand is prominently concentrated. The computational experiments also include a parametric analysis of the relative weight assigned to the objective function components.
**Structure of the paper.** The remainder of the paper is organized as follows. In Section 2, the literature most closely related to our research is reviewed and the contribution of this paper is highlighted. In Section 3, after the presentation of the single-period extension of a classical location model, we provide the multi-period mathematical formulation. In Section 4, we analyze the worst-case performance of the single-period model in terms of portion of unsatisfied charging demand. Section 5 reports extensive computational experiments conducted on instances generated to resemble demand dynamics frequently observed in different zones of a city. Finally, some concluding remarks are outlined in Section 6.
## 2 Literature review
The problem of determining an optimal location and size of charging stations for EVs has recently attracted an increasing academic attention. Recent overviews of the main modeling and algorithmic approaches employed in this research area are available in Deb et al. (2018), Zhang et al. (2019), and
Kchaou-Boujelben (2021). For a general introduction on location problems the interested reader can refer to Laporte et al. (2019). In the following, we focus on the papers that are most closely related to our research, and refer the interested reader to the above-mentioned surveys and the references cited therein.
A first broad classification of the literature is based on the type of network considered (cf. Deb et al., 2018). When only the _distribution network_ is considered, the optimal location of charging stations must consider the potential adverse effects on the power grid, as an inappropriate placement of charging stations can be a threat to the power system security and reliability. On the other hand, when only the _transportation network_ is taken into account, the main issue is to determine an optimal location of charging stations over a road network. This paper lies in the latter category. Within this category, the related literature can be further classified into two main streams of models called flow-based and node-based demand models (e.g., see Kchaou-Boujelben, 2021). In the literature, the majority of the research efforts are devoted to the flow-based demand models, whereas the number of papers adopting a node-based approach is still relatively limited. To the best of our knowledge, Anjos et al. (2020) are the only authors that integrated, within the same optimization model, both a node-based and a flow-based approach. The flow-based demand models are best suited for modeling long-haul (e.g., inter-urban) journeys where accounting for the limited driving range of EVs is important (cf. Anjos et al., 2020). Contributions to this line of research can be found, for example, in Kuby and Lim (2005), MirHassani and Ebrazi (2013), Yildiz et al. (2016), and Hosseini et al. (2017). The present paper adopts a _node-based_ demand model.
In the class of node-based demand models, drivers demanding to charge their EVs are associated with one/few fixed locations, which represent, for instance, their residence, workplace or specific service facilities (such as commercial activities). This approach is best suited for urban settings. In fact, in such case EVs do not move much from the location where they need to be charged and their limited driving range can be neglected (cf. Anjos et al., 2020). The most common modeling approaches applied in the literature are based on the extension of classic discrete location models (e.g., location-allocation as in Zhu et al. (2016), set covering as in Huang et al. (2016), and maximum coverage problems as in Dong et al. (2019)) to incorporate technical constraints specific to EVs.
Characteristics of the charging demand (such as the population size, the penetration rate of EVs, the type of zone, and the time of the day) are known to have a crucial impact on the optimal location of charging stations. To position the present paper within the literature, we classify the mathematical formulations into _single-period_ and _multi-period_. In single-period optimization models all the decision variables are time independent. Although the spatial-temporal distribution of the charging demands is described by different authors (e.g., see Yi et al., 2020, and the references cited therein), only few authors have proposed multi-period optimization models where the allocation of the demand to the charging stations is time-dependent. The related stream of literature can be classified according to the length of the planning horizon considered. A long planning horizon is considered by some authors. The basic rationale of these models is that locating charging stations is a long-term strategic decision. As a consequence, during these long periods of time the technology available, as well as the charging demand, may change significantly. Along this line of research, we mention the paper by Anjos et al. (2020) where it is assumed that the locating decisions taken in a period have an impact on the charging demand in the subsequent periods. In fact, potential EV buyers are influenced by the availability of charging opportunities. Some papers have proposed multi-period optimization models that consider a short horizon, usually a day, divided in time periods, usually hours. Our research belongs to this category of papers.
To the best of our knowledge, Cavadas et al. (2015) are the first authors to recognize the importance of incorporating into an optimization model the dynamics of the charging demand across the day. The aim of the proposed multi-period model is the maximization of the total demand served, subject to a constraint on the budget available. The authors consider only one type of charger (i.e., a slow type) and the sizing of the charging stations is not part of the optimization. In the model we present in this paper, we address these shortcomings by considering multiple types of chargers and optimizing the quantities installed in each opened station. Rajabi-Ghahnavieh and Sadeghi-Barzani (2017) estimate the charging demand of EVs in different zones of a city and at different hours. The authors consider the deployment of an unlimited number of fast chargers only and propose a non-linear optimization model that includes three cost components: the total opening cost, the total cost for the drivers to reach the assigned charging stations, and the cost of connecting the charging stations to the electric grid substations. The variability of the demand across the day is taken into consideration when determining the number of chargers to install. Nevertheless, the variables assigning EVs to stations are not time-dependent, and, hence, drivers demanding to charge their EVs at different hours are all assigned to the same station. In our paper, we allow the demand arising from the same location during the day to be assigned to different stations, depending on the evolution of the overall demand and the available charging resources. Moreover, we consider different types of chargers. Both short-term and long-term decisions are considered in Quddus et al. (2019). The main long-term decisions are related to the year, the location, and the type of charging stations to open. The short-term decisions are mainly related to the amount of power (provided by different sources, such as electric grid and renewable sources) to satisfy the hourly charging demand at a given location. Compared to our research, the drivers are, indirectly, pre-assigned to a charging station and, hence, the assignment is not part of the optimization model. The authors cast the problem as a two-stage stochastic programming model. Li and Jenn (2022) present an optimization model based on the concept of charging opportunities, which is measured through the time an individual stays at a given location within a day. The authors separate the charging opportunities into home and non-home (i.e., public) categories, and allow the same individual to charge the EV multiple times at different locations. The proposed optimization model determines the number of home and non-home chargers to install, as well as the times and locations for each individual to charge the EV. The model aims at minimizing the sum of the annual electricity cost for charging the EVs and the total cost of locating the home and non-home chargers. The number of chargers that can be installed in each location (called region by the authors) is unlimited.
Finally, we mention the growing body of literature that addresses the problem of determining an optimal location of charging stations for EVs in car-sharing systems (e.g., cf. Brandstatter et al., 2017, 2020; Bekli et al., 2021). Although such problem has some characteristics in common with ours, it includes some operational characteristics that make it considerably different, for example the decisions about the number of EVs to acquire, the relocation of the EVs among stations, and the assumption that charging occurs only between two consecutive trips.
**Contributions of the paper.** The contributions of this paper to the literature can be summarized as follows.
* We present a node-based multi-period optimization model for the location of charging stations that captures the dependence on time of the charging demand;
* the multi-period model takes into account several characteristics of the real problem: multiple types of chargers (each with its own charging speed and installation cost), the capacitated nature of the charging stations (in terms of maximum number of chargers that can be in
stalled), a minimum number of chargers to be installed in different zones (e.g., commercial, residential, industrial);
* the multi-period model is compared to a single-period model through a worst-case analysis;
* extensive computational experiments are presented that show, in particular, the importance of incorporating the dependence on time of the charging demand.
## 3 Problem definition and mathematical formulations
In this section, we first provide a general description of the location problem along with the notation that is common to the two optimization models that will follow. Then, the single-period MILP model is presented, together with the notation that is specific for the model, followed by the multi-period formulation.
We consider the problem of determining, in an urban area, an optimal location of charging stations for EVs, along with the type and number of chargers to deploy in each station. A maximum number of chargers, of each type and in total, can be deployed in each station. The location for any station can be selected from a pre-defined set of potential locations. We introduce a complete bipartite network \(G=(\mathcal{I}\cup\mathcal{J},A)\), where \(\mathcal{I}=\{1,2,\ldots,I\}\) is the set of demand nodes and \(\mathcal{J}=\{1,2,\ldots,J\}\) is the set of potential locations for the stations. Let \(c_{ij}\) be the travel distance from demand node \(i\) to station \(j\).
A fixed opening cost \(F_{j}\) is associated with each station \(j\). The opening cost does not include the cost of the chargers. We denote as \(\mathcal{K}=\{1,2,\ldots,K\}\) the set of types of chargers considered, and as \(f_{jk}\) the cost of installing one charger of type \(k\in\mathcal{K}\) in location \(j\in\mathcal{J}\). Let \(u_{jk}\) be the maximum number of chargers of type \(k\) that can be installed in station \(j\). Similarly, \(u_{j}\) denotes the maximum number of chargers that can be installed in total in station \(j\). The latter two parameters define, implicitly, the maximum charging capacity of station \(j\).
Each node \(i\) is the center of gravity of a section of the urban area where the demand of the section is measured as the number of EVs that need to be recharged. We will introduce later, for each of the two optimization models, the planning horizon and the notation for the demand of a customer. For the sake of brevity, hereafter we refer to each potential location \(j\) simply as station \(j\). The demand must be entirely satisfied by the chargers that will be deployed.
To take into account that different parts of the urban area have different needs in terms of type of charger desired, the urban area is partitioned in zones (e.g., commercial, residential, industrial). We denote by \(\mathcal{L}=\{1,2,\ldots,L\}\) the set of zones. We assume that, based on some preliminary analysis, in each zone \(\ell\in\mathcal{L}\) a minimum percentage \(\rho_{\ell k}\) of chargers of type \(k\) must be deployed. Each station \(j\in\mathcal{J}\) belongs to a zone as well as each customer \(i\in\mathcal{I}\). Thus, the zones imply a partition of both the stations and the demand points. This partition does not restrict the allocation of demand to stations, i.e., a demand point located in a zone can be assigned to a station located in a different zone.
Two criteria have a key role in this location problem: the cost of opening the stations and installing the chargers and the distance the customers have to travel to be recharged. The objective function we consider, to be minimized, is a convex combination of these two criteria. The optimization problem is aimed at determining, for each type of charger and each station, the number of chargers to be deployed in such a way that the objective function is minimized.
Both MILP models include the following decision variables. Let \(z_{j}\in\{0,1\}\), with \(j\in\mathcal{J}\), be a binary variable that takes value \(1\) if station \(j\) is opened, and \(0\) otherwise. Let \(y_{jk}\in\mathbb{Z}_{+}\), with \(j\in\mathcal{J}\) and \(k\in\mathcal{K}\), be an integer variable that represents the number of chargers of type \(k\) installed in station \(j\).
### A single-period location model
This section presents a single-period model for the location of the charging stations. The MILP formulation, denoted as SP-CFL, is an extension of a classical CFL model. Hereafter, we introduce the notation needed for the formulation, in addition to the one introduced above.
We consider a single planning period of length \(H\) and denote as \(d_{i}\) the total demand in \(i\in\mathcal{I}\), that is, the total number of EVs demanding to be recharged in \(i\) during \(H\). Let \(p_{k}\) denote the average number of EVs fully recharged by one charger of type \(k\) during time period \(H\). For the sake of simplicity, we assume that \(p_{k}\) does not depend on the type of EV.
The SP-CFL model also makes use of the following decision variables. Let \(x_{ijk}\in[0,1]\), with \(i\in\mathcal{I}\), \(j\in\mathcal{J}\), and \(k\in\mathcal{K}\), be the fraction of the demand of node \(i\) assigned to a charger of type \(k\) in station \(j\). Then, the SP-CFL model can be stated as the following MILP:
[SP-CFL]

\[\min\quad\lambda\cdot\left(\frac{1}{\sum\limits_{i\in\mathcal{I}}d_{i}}\sum\limits_{i\in\mathcal{I}}d_{i}\sum\limits_{j\in\mathcal{J}}c_{ij}\sum\limits_{k\in\mathcal{K}}x_{ijk}\right)+(1-\lambda)\cdot\left(\sum\limits_{j\in\mathcal{J}}F_{j}z_{j}+\sum\limits_{j\in\mathcal{J}}\sum\limits_{k\in\mathcal{K}}f_{jk}y_{jk}\right) \tag{1}\]

s.t.

\[y_{jk}\leq u_{jk}z_{j}\quad j\in\mathcal{J},k\in\mathcal{K} \tag{2}\]

\[\sum\limits_{k\in\mathcal{K}}y_{jk}\leq u_{j}z_{j}\quad j\in\mathcal{J} \tag{3}\]

\[\sum\limits_{j\in\mathcal{J}}\sum\limits_{k\in\mathcal{K}}x_{ijk}=1\quad i\in\mathcal{I} \tag{4}\]

\[\sum\limits_{i\in\mathcal{I}}d_{i}x_{ijk}\leq p_{k}y_{jk}\quad j\in\mathcal{J},k\in\mathcal{K} \tag{5}\]

\[x_{ijk}\leq y_{jk}\quad i\in\mathcal{I},j\in\mathcal{J},k\in\mathcal{K} \tag{6}\]

\[\sum\limits_{j\in\mathcal{J}_{\ell}}y_{jk}\geq\rho_{\ell k}\sum\limits_{j\in\mathcal{J}_{\ell}}\sum\limits_{k^{\prime}\in\mathcal{K}}y_{jk^{\prime}}\quad\ell\in\mathcal{L},k\in\mathcal{K} \tag{7}\]

\[z_{j}\in\{0,1\}\quad j\in\mathcal{J};\quad y_{jk}\in\mathbb{Z}_{+}\quad j\in\mathcal{J},k\in\mathcal{K};\quad x_{ijk}\in[0,1]\quad i\in\mathcal{I},j\in\mathcal{J},k\in\mathcal{K}, \tag{8}\]

where \(\mathcal{J}_{\ell}\subseteq\mathcal{J}\) denotes the set of potential stations located in zone \(\ell\in\mathcal{L}\).

The first term of the objective function (1) is the demand-weighted average distance traveled by the customers to reach the assigned chargers; the second term is the total cost of opening the
stations and installing the chargers. The two terms represent criteria of a substantially different nature: the first measures the quality of the service provided to the drivers by the deployed stations and chargers, whereas the second measures the cost of the service. The two criteria are weighted by the trade-off parameter \(\lambda\in[0,1]\), which is used to balance their importance.
Constraints (2) and (3) limit the number of chargers that can be installed in station \(j\). The former set bounds the number of chargers of type \(k\) to be lower than or equal to \(u_{jk}\), whereas the second set of constraints bounds the total number of chargers to be lower than or equal to \(u_{j}\). Both sets of constraints (2) and (3) impose that no charger can be installed if station \(j\) is not open (i.e., \(z_{j}=0\)). Constraints (4) ensure that the demand of each node \(i\in\mathcal{I}\) is entirely satisfied. Constraints (5) guarantee that the number of EVs assigned to the chargers of type \(k\) deployed in station \(j\) is not greater than the charging capacity available (i.e., \(p_{k}y_{jk}\)). They also impose that no EV can be assigned to a type \(k\) of chargers in station \(j\) if no charger of that type is available (i.e., \(y_{jk}=0\)). Inequalities (6), which are redundant in this formulation, are well-known to yield a tighter Linear Programming (LP) relaxation than the equivalent formulation without them (e.g., see Filippi et al., 2021). Constraints (7) guarantee that the number of chargers of type \(k\) installed in zone \(\ell\) is at least equal to the minimum percentage \(\rho_{\ell k}\). Finally, constraints (8) define the domain of the decision variables.
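For concreteness, a minimal sketch of the SP-CFL model in PuLP is given below, on toy data. All numeric values are illustrative stand-ins, and the zone constraints (7) are omitted for brevity.

```python
# A minimal PuLP sketch of the SP-CFL model (1)-(8) on toy data.
# All numbers are illustrative; the zone constraints (7) are omitted here.
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary, LpInteger

I, J, K = range(4), range(2), range(2)   # demand nodes, stations, charger types
d = {0: 12, 1: 8, 2: 10, 3: 10}          # total demand d_i over the horizon
c = {(i, j): abs(i - j) + 1.0 for i in I for j in J}     # toy distances c_ij
F = {j: 100_000 for j in J}                              # opening costs F_j
f = {(j, k): (3_000, 25_000)[k] for j in J for k in K}   # charger costs f_jk
u_jk = {(j, k): 10 for j in J for k in K}
u_j = {j: 10 for j in J}
p = {0: 6, 1: 24}                        # p_k: EVs fully recharged per charger
lam = 0.5

m = LpProblem("SP_CFL", LpMinimize)
z = LpVariable.dicts("z", J, cat=LpBinary)
y = LpVariable.dicts("y", [(j, k) for j in J for k in K],
                     lowBound=0, cat=LpInteger)
x = LpVariable.dicts("x", [(i, j, k) for i in I for j in J for k in K],
                     lowBound=0, upBound=1)

D = sum(d.values())
m += lam / D * lpSum(d[i] * c[i, j] * x[i, j, k]
                     for i in I for j in J for k in K) \
     + (1 - lam) * (lpSum(F[j] * z[j] for j in J)
                    + lpSum(f[j, k] * y[j, k] for j in J for k in K))
for j in J:
    m += lpSum(y[j, k] for k in K) <= u_j[j] * z[j]                 # (3)
    for k in K:
        m += y[j, k] <= u_jk[j, k] * z[j]                           # (2)
        m += lpSum(d[i] * x[i, j, k] for i in I) <= p[k] * y[j, k]  # (5)
for i in I:
    m += lpSum(x[i, j, k] for j in J for k in K) == 1               # (4)
m.solve()
```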
### A multi-period location model
This section presents the MILP formulation for the multi-period model, henceforth denoted as the MP-CFL model, for the problem defined at the beginning of this section.
The planning period \(H\) of the single-period model is here partitioned into a number \(T\) of time periods. For example, if \(H\) is a day, we may partition the day in hours. Let \(\mathcal{T}=\{1,2,\ldots,T\}\) denote the set of time periods. We denote as \(R_{k}\) the number of consecutive time periods needed to completely recharge a car using a charger of type \(k\). Note that, similar to \(p_{k}\) for the SP-CFL model, \(R_{k}\) does not depend on the type of EV but only on the type of charger. Furthermore, parameters \(p_{k}\) and \(R_{k}\) are strictly related, as the latter is determined by dividing the length of the time horizon by \(p_{k}\), i.e. \(R_{k}=\frac{T}{p_{k}}\).
The demand of each node \(i\in\mathcal{I}\) is no longer identified by a single value (\(d_{i}\) in the SP-CFL model) but by a time-dependent profile. Let \(d_{i}^{t}\) denote the demand of node \(i\in\mathcal{I}\) at the beginning of time period \(t\in\mathcal{T}\). A more detailed discussion about the demand profiles can be found in Section 5.1.1. We assume that the demand of a time period \(t\) must be served in that time period, i.e., it cannot be postponed to a later time. We say that a node is served by a charger of type \(k\) at time \(t\) if a charger is available at time \(t\) to start the charging which will occupy the charger for a total of \(R_{k}\) time periods. The capacity installed in each station must be sufficient to serve the charging demand assigned to that station in a time period and the demand assigned to the station in a previous time period that has not yet completed the charging. Finally, let \(x_{ijk}^{t}\in[0,1]\), with \(i\in\mathcal{I}\), \(j\in\mathcal{J}\), \(k\in\mathcal{K}\), and \(t\in\mathcal{T}\), be the fraction of the charging demand of node \(i\) to be served at time \(t\) that is assigned to a charger of type \(k\) in station \(j\).
The MP-CFL model is formulated as follows:
[MP-CFL]
\[\min\quad\lambda\cdot\left(\frac{1}{\sum\limits_{t\in\mathcal{T}}\sum \limits_{i\in\mathcal{I}}d_{i}^{t}}\sum\limits_{t\in\mathcal{T}}\sum\limits_{i \in\mathcal{I}}d_{i}^{t}\sum\limits_{j\in\mathcal{J}}c_{ij}\sum\limits_{k\in \mathcal{K}}x_{ijk}^{t}\right)+(1-\lambda)\cdot\left(\sum\limits_{j\in\mathcal{ J}}F_{j}z_{j}+\sum\limits_{j\in\mathcal{J}}\sum\limits_{k\in\mathcal{K}}f_{jk}y_{jk}\right) \tag{9}\]
s.t. (2), (3), and (7)
\[\sum\limits_{j\in\mathcal{J}}\sum\limits_{k\in\mathcal{K}}x_{ijk}^{t}=1\quad i \in\mathcal{I},t\in\mathcal{T} \tag{10}\]
\[x_{ijk}^{t}\leq y_{jk}\quad i\in\mathcal{I},j\in\mathcal{J},k\in\mathcal{K},t \in\mathcal{T} \tag{11}\]
\[\sum\limits_{i\in\mathcal{I}}\sum\limits_{\tau=0}^{t-1}d_{i}^{t-\tau}x_{ijk}^ {t-\tau}\leq y_{jk}\quad j\in\mathcal{J},k\in\mathcal{K},t\in\mathcal{T}:t<R_{k} \tag{12}\]
\[\sum\limits_{i\in\mathcal{I}}\sum\limits_{\tau=0}^{R_{k}-1}d_{i}^{t-\tau}x_{ ijk}^{t-\tau}\leq y_{jk}\quad j\in\mathcal{J},k\in\mathcal{K},t\in\mathcal{T}:t\geq R _{k} \tag{13}\]
\[z_{j}\in\{0,1\}\quad j\in\mathcal{J};\quad y_{jk}\in\mathbb{Z}_{+}\quad j\in \mathcal{J},k\in\mathcal{K};\quad x_{ijk}^{t}\in[0,1]\quad i\in\mathcal{I},j \in\mathcal{J},k\in\mathcal{K},t\in\mathcal{T}. \tag{14}\]
The objective function in (9) is the multi-period extension of function (1). For each node \(i\in\mathcal{I}\), constraints (10) ensure that the charging demand arising in each time period \(t\) is fully satisfied. Akin to the objective function, also inequalities (11) are the multi-period extension of constraints (6).
Constraints (12) and (13) guarantee that the number of EVs that are charging in time period \(t\) at a charger of type \(k\) in station \(j\) is smaller than or equal to the number of available chargers of that type (i.e., \(y_{jk}\)). Note that the second sum in (12) and (13) is used to keep track of the EVs that started to recharge in a previous time period but have not completed the charging in \(t\). Constraints (12) are defined for the first time periods in the planning horizon (such that \(t<R_{k}\)), whereas (13) are defined for the remaining time periods. Finally, constraints (14) define the domain of the decision variables.
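Continuing the PuLP sketch above, the occupancy constraints (12)-(13) can be rendered compactly. The time-indexed variables, the toy hourly demand, and the merged `lookback` formulation below are our own assumptions, not the authors' code; the merged form covers both constraint families at once.

```python
# Continuation of the SP-CFL sketch: time-indexed assignments and the
# occupancy constraints (12)-(13). d_t[i][t] is an assumed hourly demand.
T = range(1, 25)          # hourly periods of one day, 1-based as in the model
R = {0: 4, 1: 1}          # R_k: periods a charger of type k stays occupied
d_t = {i: {t: 1 for t in T} for i in I}          # toy time-dependent demand
x_t = LpVariable.dicts("x_t", [(i, j, k, t) for i in I for j in J
                               for k in K for t in T], lowBound=0, upBound=1)
for j in J:
    for k in K:
        for t in T:
            lookback = min(R[k], t)  # merges the cases t < R_k and t >= R_k
            m += lpSum(d_t[i][t - tau] * x_t[i, j, k, t - tau]
                       for i in I for tau in range(lookback)) <= y[j, k]
```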
## 4 Worst-case analysis
In this section, we analyze the worst-case performance of the SP-CFL model in terms of the demand that cannot be satisfied if the optimal solution produced is implemented in a context where the demand fluctuates over time. In fact, in this case if an optimal solution to the SP-CFL model is implemented, there is no guarantee that all the charging demand is satisfied. As the SP-CFL model implicitly assumes that the charging demand is uniformly distributed across the planning horizon, when the demand fluctuates over time, there may be peak time periods where the chargers installed are not sufficient.
**Theorem 1**: _When an optimal solution of the SP-CFL model is implemented, the fraction of the demand that does not find an available charger to be served may be up to \(1-\frac{1}{T}\), where \(T\) is the number of time periods of the planning horizon. This bound is tight._
* To prove the theorem, we first derive an upper bound on the unserved demand and then build an instance attaining it. Recalling constraint (4) and summing up all constraints (5), the following chain of inequalities holds: \[\sum_{i\in\mathcal{I}}d_{i}\overset{(4)}{=}\sum_{i\in\mathcal{I}}d_{i}\sum_{j\in\mathcal{J}}\sum_{k\in\mathcal{K}}x_{ijk}=\sum_{j\in\mathcal{J}}\sum_{k\in\mathcal{K}}\sum_{i\in\mathcal{I}}d_{i}x_{ijk}\overset{(5)}{\leq}\sum_{j\in\mathcal{J}}\sum_{k\in\mathcal{K}}p_{k}y_{jk}.\] Hence, any feasible solution to the SP-CFL model deploys a charging capacity that covers the total demand over the whole planning horizon, but not necessarily in every single time period. Since \(R_{k}\geq 1\), and thus \(p_{k}\leq T\), for every \(k\in\mathcal{K}\), the number of installed chargers satisfies \(\sum_{j\in\mathcal{J}}\sum_{k\in\mathcal{K}}y_{jk}\geq\frac{1}{T}\sum_{j\in\mathcal{J}}\sum_{k\in\mathcal{K}}p_{k}y_{jk}\geq\frac{1}{T}\sum_{i\in\mathcal{I}}d_{i}\), so in any single time period at least a fraction \(\frac{1}{T}\) of the total demand can start charging, and the unserved fraction can never exceed \(1-\frac{1}{T}\). To see that the bound is tight, consider an instance where the whole demand arises in a single time period \(\hat{t}\) (the green bar in Figure 1) and only one type of charger is available, with \(R_{k}=1\) and hence \(p_{k}=T\). The SP-CFL model, which implicitly spreads the demand uniformly across the planning horizon (the pink bars in Figure 1), may install exactly \(\frac{1}{T}\sum_{i\in\mathcal{I}}d_{i}\) chargers. Since each of them can serve at most one EV in period \(\hat{t}\), only a fraction \(\frac{1}{T}\) of the demand finds an available charger, and the lost fraction equals \(1-\frac{1}{T}\). \(\square\)
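A quick numerical check of the bound, with assumed toy numbers (\(T=24\) hourly periods, 240 EVs concentrated in one peak hour, fast chargers only with \(p_{k}=24\)):

```python
# Toy check of Theorem 1: demand concentrated in one hour, fast chargers only.
T, total_demand, p_fast = 24, 240, 24
chargers = total_demand // p_fast      # SP-CFL may install only 10 chargers
served_at_peak = chargers              # one EV per fast charger in the peak hour
lost_fraction = 1 - served_at_peak / total_demand
print(lost_fraction, 1 - 1 / T)        # 0.9583... equals 1 - 1/T
```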
## 5 Experimental Analysis
This section is devoted to the presentation and discussion of the computational experiments. They were conducted on a Workstation HP Intel(R)-Xeon(R) at 3.5GHz with 64 GB RAM (Win 10 Pro, 64 bits). The processor is equipped with 6 physical cores, and all threads were used while solving each instance. The MILP models were implemented in Java, compiled within Apache NetBeans 12.3, and solved by means of CPLEX 20.1. Each instance was solved with a CPU time limit of 3,600 seconds. All other CPLEX parameters were set at their default values.
The section is organized as follows. First, we present the testing environment we used in our experiments, then we compare the optimal solutions for two illustrative examples generated according to two different urban structure models, and finally we provide detailed computational results comparing the solutions produced by the single-period and the multi-period models.
### Testing environment
The generation of the charging demand and potential station locations follows the procedure described in Section 5.1.1. All the remaining parameters defining the testing environment are detailed in Section 5.1.2.
#### 5.1.1 Spatial and temporal charging demand generation
As far as the urban structure is concerned, we considered two classic models, the concentric zone model and the sector model. The _concentric zone model_ was proposed in 1925 by sociologist Ernest Burgess on the basis of his human ecology theory, and was initially applied to the city of Chicago (cf. Burgess, 2008). It is, perhaps, the first theoretical model used to explain urban social structures. The model depicts urban land usage as concentric rings: the business district is located in the center, whereas the remainder of the city is expanded in rings, each corresponding to a different land usage (such as industrial or residential). The _sector model_ was proposed in 1939 by land economist Homer Hoyt (see Hoyt, 1939). It is a modification of Burgess' model where the city zones devoted to a specific land usage (e.g., business, residential, and productive) develop in sectors expanding from the original city center. Though the actual structure of modern cities can hardly be captured by models as simple as Burgess' and Hoyt's, they are the basis of more complex structures (Hall and Barrett, 2012) and, on the other hand, can simplify the interpretation of the results. For these reasons, we considered two classes of instances, each associated with one urban model. Hereafter, the two classes are referred to as the _concentric ring_ (COR) instances and the _sector_ (SEC) instances. In both cases, we assume the urban structure comprises three possible zones: _commercial_, _residential_, _industrial_. With a slight abuse of notation, we denote the set of zones as \(\mathcal{L}=\{C,R,I\}\), where \(C\), \(R\), and \(I\) refer to the commercial, residential, and industrial zones, respectively. Each zone is characterized by a different pattern of the charging demand during the planning horizon, as will be detailed later. We consider as planning horizon a day, discretized into \(T=24\) hours (i.e., the time periods).

Figure 1: An instance where the whole demand arises in time period \(\hat{t}\) (green bar). The pink bars show a uniform distribution of the demand across the planning horizon, as implicitly assumed by the SP-CFL model.
In the COR instances, we assume the commercial zone is the central circle with radius 1000, the residential zone is a ring around the commercial zone with outer radius 2000, and the industrial zone is the outermost ring around the residential zone with outer radius 3000. In the SEC instances, we assume that the commercial, residential, and industrial zones correspond to three slices of identical size that partition a circle of radius 3000.
Then, for each zone, we uniformly generate the same number of demand nodes. More exactly, given the total number of demand nodes to be generated, such value is divided by three to obtain, after an integer rounding whenever necessary, the number of demand nodes to generate in each zone. Figure 2 gives an example.
Both in the COR and in the SEC instances, a given number \(J\) of potential stations is uniformly generated over the total area (i.e., the circle with radius 3000).
For each pair of demand node \(i\) and potential station \(j\), parameter \(c_{ij}\) is computed as the Euclidean distance between the two nodes.
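A minimal sketch of this generation step (our own illustrative helper, not the authors' code) for a small COR instance, with area-uniform sampling over rings and Euclidean distances:

```python
# Sketch of the instance geometry: demand nodes sampled uniformly per zone of
# a COR instance, stations sampled over the whole disk, distances c_ij.
import numpy as np

rng = np.random.default_rng(0)

def sample_annulus(n, r_min, r_max):
    # uniform over the area: radius via inverse CDF, angle uniform
    r = np.sqrt(rng.uniform(r_min**2, r_max**2, n))
    theta = rng.uniform(0.0, 2.0 * np.pi, n)
    return np.column_stack([r * np.cos(theta), r * np.sin(theta)])

commercial = sample_annulus(7, 0, 1000)       # central circle
residential = sample_annulus(7, 1000, 2000)   # middle ring
industrial = sample_annulus(7, 2000, 3000)    # outer ring
stations = sample_annulus(5, 0, 3000)         # potential stations, whole area

nodes = np.vstack([commercial, residential, industrial])
c = np.linalg.norm(nodes[:, None, :] - stations[None, :, :], axis=2)  # c_ij
```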
The demand pattern is specific to each zone. In the commercial zone we assume there is a high density of offices, shops, pubs, restaurants, and hotels. Hence, we expect a high demand with two peaks, the first one at the beginning of the working day and the second one in the afternoon, higher than the first peak and slowly decreasing in the evening hours. In the industrial zone, we expect a high demand in the morning with one peak around lunch time and an almost null demand during the night. Finally, in the residential zone, we assume the presence of a high density of private houses, with a comparatively low demand in the morning and a peak in the evening and early night hours. These assumptions are consistent with several studies regarding the spatial-temporal distribution of the charging demands observed in urban areas (see, e.g., Yi et al., 2020; Straub et al., 2021).

Figure 2: Demand node generation for a COR (left) and a SEC (right) instance. Blue points are located in the commercial zone, orange points in the residential zone, and gray points in the industrial zone.
To generate the charging demand according to these patterns, we proceed as follows. We first generate three basic profiles of demand that mimic the patterns described above for the commercial, industrial, and residential zones, respectively. Such basic profiles are built according to the four standard demand levels shown in Table 1. The resulting basic demand profiles are depicted in Figure 3. For each \(i\in\mathcal{I}\) and \(t=1,\ldots,24\), we randomly generate an initial demand value \(\tilde{d}_{i}^{t}\) from a Poisson distribution with mean (and variance) equal to the standard level assigned to \(i\) and \(t\) in the corresponding basic profile. For example, if \(i\) is in the commercial zone and \(t=8\), then \(\tilde{d}_{i}^{t}\) is a realization of a Poisson with mean 2, cf. Figure 3(a). We then set:
\[d_{i}^{t}=\left[\frac{\tilde{d}_{i}^{t}}{\sum_{\tau\in\mathcal{T}}\tilde{d}_{i}^{\tau}}\cdot 10\right],\]
where \([\cdot]\) denotes the nearest integer rounding operator. In this way, we obtain that: (1) the total daily demand from each demand node is around 10; (2) the total demand in each zone is consistent with the corresponding basic demand profiles shown in Figure 3.
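The sketch below reproduces this temporal generation step. The numeric levels in `commercial_profile` are illustrative stand-ins for the basic profile of Figure 3, not the authors' exact values.

```python
# Sketch of the temporal demand generation described above.
import numpy as np

rng = np.random.default_rng(0)

def node_demand(base_profile):
    tilde_d = rng.poisson(lam=base_profile)            # one Poisson draw per hour
    total = tilde_d.sum()
    if total == 0:
        return np.zeros_like(tilde_d)
    return np.rint(tilde_d / total * 10).astype(int)   # daily total around 10

# 24 hourly levels (0 = null, 1 = low, 2 = medium, 3 = high), two daytime peaks
commercial_profile = np.array(
    [0, 0, 0, 0, 0, 0, 1, 2, 2, 1, 1, 1, 2, 2, 2, 3, 3, 3, 3, 2, 2, 1, 0, 0])
print(node_demand(commercial_profile))
```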
#### 5.1.2 Remaining parameters
We generated a set of instances by varying the number of demand nodes \(I\), of potential stations \(J\), and of the maximum number of chargers to install in each station \(u_{j}\). All the remaining parameters take the same value across all instances. The name of each instance is \(I\_J\_u_{j}\), where:
* \(I\): The number of demand nodes ranges according to the following values: \(I=50\), 100, 150, 200, 250, and 500.
* \(J\): The number of potential stations ranges according to the following values: \(J=10\), 20, 30, 40, and 50.
* \(u_{j}=u\): The maximum number of chargers to install is equal across all the stations, and ranges according to the following values: \(u_{j}=10\), 20, and 30.
For example, instance 50_20_30 comprises 50 demand nodes, 20 potential stations, and parameter \(u_{j}\) equal to 30. Note that the latter parameter is the same for each potential station \(j\). We made this choice to simplify the interpretation of the results. For the same reason, we decided to set \(u_{jk}=u_{j}=u\), for each \(j\in\mathcal{J}\) and \(k\in\mathcal{K}\).

\begin{table}
\begin{tabular}{c|c c c c} Level & 0 & 1 & 2 & 3 \\ \hline Demand & null & low & medium & high \\ \end{tabular}
\end{table}
Table 1: Standard levels of hourly demand.

Figure 3: Basic hourly demand profile in each zone.
Each instance in our testbed has the following common characteristics (collected in a configuration sketch after this list):
* The planning horizon considered is 1 day, discretized in time periods of one hour length. Consequently, \(\mathcal{T}=\{1,2,\ldots,24\}\).
* The cost of opening one charging station is \(F_{j}=100,000\) \(\forall j\in J\).
* Two types of chargers are considered. Therefore, \(\mathcal{K}=\{1,2\}\), where 1 denotes quick chargers, and 2 stands for fast chargers.
* The cost of installing each type of charger is \(f_{j1}=3,000\) and \(f_{j2}=25,000\) \(\forall j\in J\) for quick and fast chargers, respectively.
* Quick chargers need \(R_{1}=4\) hours to fully recharge an EV, whereas fast chargers require \(R_{2}=1\) hour. Parameter \(p_{k}\) for the SP-CFL model is determined as: \(p_{k}=\frac{24}{R_{k}}\).
* The minimum percentage of chargers of each type to deploy in each zone is the following:
* Commercial zone: at least 20% of quick chargers (\(\rho_{C1}=0.20\)), and at least 40% of fast chargers (\(\rho_{C2}=0.40\)).
* Residential zone: at least 50% of quick chargers (\(\rho_{R1}=0.50\)), and at least 20% of fast chargers (\(\rho_{R2}=0.20\)).
* Industrial zone: at least 25% of quick chargers (\(\rho_{I1}=0.25\)), and at least 25% of fast chargers (\(\rho_{I2}=0.25\)).
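The common testing parameters above, gathered in one configuration mapping; this is a convenience sketch where the field names are ours, while the values are those listed in the text.

```python
CONFIG = {
    "T": 24,                                    # hourly periods over one day
    "F_open": 100_000,                          # station opening cost F_j
    "charger_cost": {"quick": 3_000, "fast": 25_000},   # f_j1, f_j2
    "R": {"quick": 4, "fast": 1},               # hours to fully recharge an EV
    "p": {"quick": 24 // 4, "fast": 24 // 1},   # p_k = 24 / R_k
    "rho": {                                    # minimum charger shares per zone
        "commercial":  {"quick": 0.20, "fast": 0.40},
        "residential": {"quick": 0.50, "fast": 0.20},
        "industrial":  {"quick": 0.25, "fast": 0.25},
    },
}
```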
Note that, in both MILP models, the two components in the respective objective function can take very different values, differing even by orders of magnitude. In our experiments we scaled the two components to make them comparable in value.
We initially considered all possible combinations of the values mentioned above for \(I\), \(J\), and \(u_{j}\). Subsequently, we ruled out each instance that turned out to be infeasible for both MILP models. This situation happened especially for the largest numbers of demand nodes, the smallest numbers of potential stations, and the smallest values of parameter \(u_{j}\). In such cases, the maximum charging capacity, obtained by opening all potential stations and deploying \(u_{j}\) chargers in each station \(j\), turned out not to be sufficient to serve the total charging demand. Altogether, we analyzed 67 instances.
### A comparison between COR and SEC instances
To illustrate the solutions obtained by the two MILP models on the COR and SEC instances, we discuss the results obtained on two small instances. In both instances, the number \(I\) of demand nodes is equal to 21, equally divided among the three zones. The number \(J\) of potential stations is 5. We assumed that the planning horizon comprises 8 time periods, and that the demand profile in each zone is the one depicted in Figure 4. These profiles are the same for both the COR and SEC instance.
An optimal solution produced by the MP-CFL model for the COR instance is depicted in Figure 5, where demand nodes are colored circles (blue for the commercial zone, orange for the residential zone, grey for the industrial zone) and potential locations are black triangles. Moreover, the color of the edge connecting a demand node to a black triangle represents the fraction of the charging demand assigned to the station thereby opened (black = 100% of the demand, yellow 75%, red 66%, blue 50%, green 33% and gray 25%). Note that Figure 5 shows the assignments concerning only the significant time periods. In other words, the assignments in time periods 3 and 8 are not reported, the former since there is no demand in that time period, the latter because it is identical to time period 7. Figure 6 displays an optimal solution to the SP-CFL model for the same instance.
The optimal solutions found by the two MILP models for the SEC instance are shown in Figures 7 and 8.
Comparing the optimal solutions for the COR instance obtained by the MP-CFL and the SP-CFL models (see Figures 5 and 6, respectively), one can notice that in the former all the potential stations are open and the charging demand is assigned, in the majority of cases, to the nearest station. The solution found by the SP-CFL model opens only three stations. This small example highlights the limits of the latter model: it neglects that the charging demand is concentrated in a few peak time periods and, consequently, underestimates the charging need in those periods. In fact, most of the demand is assigned to the charging station located in the central position (coordinates (0,0)), but the chargers deployed there are not sufficient to serve all the EVs during the peak hours. Due to the lower number of stations opened (3 against 5) and a number of chargers approximately 40% lower (30 against 52), 17.04% of the customers cannot be served by the solution to SP-CFL.
Similar conclusions can be drawn from the optimal solutions for the SEC instance produced by the MP-CFL and the SP-CFL models (see Figures 7 and 8, respectively). As expected, there is no remarkable difference between the computational times on the COR and SEC instances. To keep the paper within a reasonable length, we conducted the extensive experiments on the COR instances only.
Figure 4: Illustrative example: Basic hourly demand profile in each zone.
### Computational results
This section is devoted to the illustration and discussion of the computational results. Before entering into the details, we illustrate the solutions produced by the two MILP models for instance 200_30_30 and \(\lambda=0.50\). Figure 9 depicts, for each time period, the charging capacity installed and the demand satisfied by an optimal solution to model MP-CFL for instance 200_30_30. For each time period (vertical axis), the tornado diagram shows as bordered bars the number of chargers of each type deployed: the red bordered bars (left) are the fast chargers, whereas the black bordered bars (right) are the quick chargers. The solid bars represent the assignment of the charging demand. For each time period and type of charger, the bar indicates the total number of chargers assigned to EVs. Recall that quick chargers need multiple time periods to fully charge an EV. Hence, an EV assigned to a quick charger will use it for multiple consecutive time periods. From Figure 9, one can notice that the demand assigned to each type of charger in each time period does not violate the charging capacity deployed. Note also that in several time periods the demand approaches the installed capacity, and the two quantities are sometimes even equal (see the fast chargers in time periods 12 through 16).
Figure 6: COR instance: An optimal solution to the SP-CFL model.
The limits of the solution produced by the SP-CFL model are evident from Figure 10. In several time periods, the solution found by the SP-CFL model assigns more EVs than there are chargers actually available. For the fast chargers, this happens in time periods 8, 9, and 12 through 22. For the quick chargers, this occurs in time periods 11 through 15. We observed this outcome for the majority of the instances tested.
Figure 8: SEC instance: An optimal solution to the SP-CFL model.
Figure 9: An optimal solution to the MP-CFL model for instance 200_30_30: Charging capacity installed (bordered bars) and demand assigned (solid bars).
We now analyze more thoroughly the solutions produced by the two MILP models. To gain some insights about the two components of the objective functions, each instance is solved by each model for several values of the trade-off parameter \(\lambda\). We tested the following values: \(\lambda=0.0001\) (maximum weight on the minimization of the total opening and installing costs), 0.25, 0.5, 0.75, and 0.9999 (maximum weight on the minimization of the average distance traveled by the EVs).
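To make the role of \(\lambda\) concrete, the following is a minimal sketch of a weighted-sum, multi-period capacitated facility-location model written in Python with PuLP. All names and data here (demand profile, costs, capacities) are illustrative placeholders rather than the paper's instances, and the sketch omits features of the full MP-CFL formulation, such as multiple charger types and the multi-period occupancy of quick chargers.

```python
import pulp

# Hypothetical toy data (not the paper's instances).
I, J, T = range(4), range(3), range(24)          # demand nodes, sites, hourly periods
d = {(i, t): 2 for i in I for t in T}            # demand profile d_i^t
dist = {(i, j): abs(i - j) + 1 for i in I for j in J}
f_open, c_charger, cap, lam = 100.0, 10.0, 5, 0.5

m = pulp.LpProblem("MP_CFL_sketch", pulp.LpMinimize)
y = pulp.LpVariable.dicts("open", J, cat="Binary")                # station open?
z = pulp.LpVariable.dicts("chargers", J, lowBound=0, cat="Integer")
x = pulp.LpVariable.dicts("assign", [(i, j, t) for i in I for j in J for t in T],
                          lowBound=0)                             # demand served

cost = pulp.lpSum(f_open * y[j] + c_charger * z[j] for j in J)
avg_dist = pulp.lpSum(dist[i, j] * x[i, j, t]
                      for i in I for j in J for t in T) / sum(d.values())
m += (1 - lam) * cost + lam * avg_dist                            # weighted-sum objective

for i in I:
    for t in T:
        m += pulp.lpSum(x[i, j, t] for j in J) == d[i, t]         # serve all demand
for j in J:
    m += z[j] <= 10_000 * y[j]                                    # chargers only if open
    for t in T:
        m += pulp.lpSum(x[i, j, t] for i in I) <= cap * z[j]      # per-period capacity
m.solve(pulp.PULP_CBC_CMD(msg=False))
print(pulp.LpStatus[m.status], pulp.value(m.objective))
```

Setting \(\lambda\) close to 0 in this sketch recovers a pure cost-minimization, while \(\lambda\) close to 1 minimizes the average distance, mirroring the extreme values tested above.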
Table 2 provides, in the first three groups of columns, a summary of the charging capacity deployed by the solutions found by the two MILP models. For each group of instances and each model, Table 2 shows the average number of stations open (columns with header _"Stations"_), as well as the average number of quick and fast chargers installed. For each value of \(\lambda\), we reported in bold the average value of each of the former statistics for the MP-CFL model, along with the average deviation from the latter value for the SP-CFL model. The last group of three columns provides some statistics about the solution of the SP-CFL model. In fact, the statistics refer to a modified solution obtained as follows. The deployed capacity remains unchanged. However, as the solution assigns the demand to open stations that may be overloaded in some peak periods of time, we modified the assignment of the demand to the stations with the goal of increasing the percentage of demand satisfied by the charging capacity deployed by the solution to the SP-CFL model.
The procedure to modify the solution to the SP-CFL model iteratively considers one time period at a time, from 1 to \(T\), and, for a given time period \(t\), examines each demand node \(i\), from 1 to \(I\). The procedure checks whether the demand \(d_{i}^{t}\) of node \(i\) can be served in time period \(t\) according to the assignment indicated by the values of variables \(x_{ijk}\). In this context, being served means that the number of vacant chargers of type \(k\) in station \(j\) is greater than or equal to \(d_{i}^{t}\cdot x_{ijk}\).
If such demand cannot be completely served, the unserved demand is reallocated among the vacant chargers of any type different from \(k\) available at the same station \(j\), if any. Then, the procedure attempts to reallocate the remaining unserved demand among the other stations. If there are vacant chargers among multiple stations, priority is given to the one nearest to \(j\). If at the station there
Figure 10: An optimal solution to the SP-CFL model for instance 200_30_30: Charging capacity installed (bordered bars) and demand assigned (solid bars).
\begin{table}
\begin{tabular}{c r|r r r|r r|r r||r r} \hline & & \multicolumn{2}{c|}{**Stations**} & \multicolumn{2}{c|}{**Quick**} & \multicolumn{2}{c||}{**Fast**} & \multicolumn{3}{c}{**SP-CFL**} \\ \(\boldsymbol{\lambda}\) & \(\boldsymbol{I}\) & MP-CFL & SP-CFL & MP-CFL & SP-CFL & MP-CFL & SP-CFL & **Reall\%** & **Lost\%** & **Max Lost\%** \\ \hline \multirow{6}{*}{**0.0001**} & 50 & 4.33 & 2.67 & 53.33 & 33.00 & 23.33 & 15.00 & 6.37\% & 21.00\% & 60.23\% \\ & 100 & 8.20 & 5.00 & 101.07 & 60.53 & 45.87 & 30.40 & 9.82\% & 21.07\% & 58.33\% \\ & 150 & 10.87 & 7.53 & 141.40 & 91.93 & 60.73 & 45.93 & 10.95\% & 20.88\% & 58.03\% \\ & 200 & 14.64 & 9.71 & 186.00 & 126.50 & 94.14 & 61.00 & 10.41\% & 21.10\% & 58.60\% \\ & 250 & 17.75 & 12.00 & 239.42 & 153.93 & 115.08 & 77.36 & 11.24\% & 21.28\% & 58.19\% \\ & 500 & 31.25 & 21.64 & 418.56 & 300.18 & 198.78 & 154.82 & 13.24\% & 21.82\% & 55.89\% \\ \hline
**Average** & & **12.90** & **-28.93\%** & **171.01** & **-30.32\%** & **80.46** & **-25.87\%** & **10.19\%** & **21.16\%** & **58.32\%** \\ \hline \multirow{6}{*}{**0.25**} & 50 & 4.80 & 3.80 & 55.47 & 35.60 & 22.80 & 14.13 & 12.23\% & 21.15\% & 55.90\% \\ & 100 & 8.13 & 5.33 & 95.00 & 61.73 & 47.07 & 29.93 & 12.12\% & 20.90\% & 58.43\% \\ & 150 & 11.07 & 7.73 & 137.00 & 90.13 & 68.57 & 46.07 & 11.66\% & 20.98\% & 57.90\% \\ & 200 & 14.14 & 9.79 & 170.14 & 120.64 & 97.93 & 62.14 & 12.17\% & 21.11\% & 57.73\% \\ & 250 & 17.50 & 11.79 & 227.58 & 143.21 & 117.75 & 79.79 & 12.54\% & 21.35\% & 54.93\% \\ & 500 & 30.38 & 21.27 & 393.44 & 287.27 & 229.75 & 159.00 & 14.56\% & 21.79\% & 54.95\% \\ \hline
**Average** & & **12.82** & **-26.74\%** & **162.39** & **-29.14\%** & **85.00** & **-28.74\%** & **12.46\%** & **21.19\%** & **56.73\%** \\ \hline \multirow{6}{*}{**0.50**} & 50 & 6.93 & 6.60 & 62.67 & 36.60 & 23.20 & 14.40 & 18.38\% & 19.64\% & 54.65\% \\ & 100 & 8.67 & 7.60 & 97.73 & 73.67 & 46.20 & 27.00 & 14.02\% & 20.56\% & 62.05\% \\ & 150 & 11.07 & 8.80 & 131.86 & 97.87 & 69.93 & 44.53 & 13.55\% & 20.58\% & 59.55\% \\ & 200 & 14.21 & 10.71 & 166.71 & 126.29 & 98.86 & 60.86 & 13.93\% & 20.82\% & 58.03\% \\ & 250 & 17.75 & 12.50 & 219.67 & 150.29 & 119.58 & 78.00 & 14.41\% & 21.06\% & 57.80\% \\ & 500 & 30.88 & 21.55 & 441.50 & 285.73 & 230.50 & 158.82 & 16.00\% & 21.69\% & 54.80\% \\ \hline
**Average** & & **13.44** & **-19.64\%** & **163.51** & **-26.20\%** & **85.68** & **-30.81\%** & **14.86\%** & **20.68\%** & **57.95\%** \\ \hline \multirow{6}{*}{**0.75**} & 50 & 12.13 & 12.20 & 78.60 & 37.47 & 31.07 & 15.20 & 26.82\% & 17.09\% & 51.54\% \\ & 100 & 12.20 & 11.40 & 120.60 & 70.27 & 50.20 & 28.53 & 19.83\% & 19.71\% & 60.30\% \\ & 150 & 13.64 & 12.80 & 152.93 & 103.73 & 73.14 & 43.00 & 18.00\% & 20.06\% & 60.58\% \\ & 200 & 15.21 & 13.50 & 173.36 & 137.07 & 97.14 & 58.14 & 16.34\% & 20.59\% & 59.90\% \\ & 250 & 18.00 & 14.79 & 223.25 & 156.71 & 120.58 & 76.43 & 16.13\% & 20.88\% & 58.69\% \\ & 500 & 30.63 & 21.91 & 428.25 & 285.36 & 233.75 & 157.91 & 17.78\% & 21.66\% & 54.67\% \\ \hline
**Average** & & **15.77** & **-10.69\%** & **175.14** & **-29.15\%** & **88.72** & **-33.95\%** & **19.02\%** & **19.90\%** & **57.71\%** \\ \hline \multirow{6}{*}{**0.9999**} & 50 & 21.20 & 21.20 & 96.87 & 40.53 & 41.27 & 17.00 & 31.11\% & 12.33\% & 47.73\% \\ & 100 & 27.20 & 27.20 & 160.20 & 74.67 & 77.60 & 31.40 & 26.57\% & 16.04\% & 52.02\% \\ \cline{1-1} & 150 & 29.71 & 28.40 & 212.93 & 104.33 & 113.86 & 46.87 & 24.51\% & 16.62\% & 57.10\% \\ \cline{1-1} & 200 & 30.93 & 30.07 & 245.50 & 141.43 & 146.00 & 61.86 & 23.28\% & 17.41\% & 57.54\% \\ \cline{1-1} & 250 & 33.58 & 30.79 & 276.00 & 165.29 & 158.29 & 79.43 & 22.05\% & 17.92\% & 57.05\% \\ \cline{1-1} & 500 & 38.75 & 34.55 & 429.38 & 285.45 & 375.25 & 165.27 & 19.59\% & 19.90\% & 52.74\% \\ \hline \multicolumn{2}{l|}{**Average**} & **29.33** & **-3.25\%** & **218.95** & **-41.67\%** & **132.99** & **-53.23\%** & **24.80\%** & **16.53\%** & **54.02\%** \\ \hline \hline \end{tabular}
\end{table}
Table 2: MP-CFL vs. SP-CFL models: A summary of the charging capacity deployed and of the demand reallocated and lost.
are vacant chargers of multiple types, priority is given to the type that has the largest number of vacant units. Let \(0\leq\bar{d}_{i}^{t}\leq d_{i}^{t}\) be the demand arising in node \(i\) at time period \(t\) that the procedure has reallocated. Eventually, the procedure computes the fraction of the total demand that has been reallocated, in percentage, producing statistic _"Reall%"_. For each instance, the fraction of the total demand reallocated is computed as \(100\cdot\frac{\sum_{t\in T}\sum_{i\in I}\bar{d}_{i}^{t}}{\sum_{t\in T}\sum_{i\in I}d_{i}^{t}}\).
If, after the reallocation procedure, a portion of the demand is not served, the fraction of the total demand that remains unserved is computed, in percentage, producing statistic _"Lost%"_. For each instance, the latter is computed as \(100\cdot\frac{\sum_{t\in T}\sum_{i\in I}\hat{d}_{i}^{t}}{\sum_{t\in T}\sum_{i\in I}d_{i}^{t}}\), where \(0\leq\hat{d}_{i}^{t}\leq d_{i}^{t}\) is the demand arising in node \(i\) at time period \(t\) that is not served (we use \(\hat{d}_{i}^{t}\) to distinguish the unserved demand from the reallocated demand \(\bar{d}_{i}^{t}\)). Finally, the procedure computes statistic _"Max Lost%"_ as the maximum fraction of demand not served across all time periods. The latter is computed for each instance as \(100\cdot\max\limits_{t\in T}\left\{\frac{\sum_{i\in I}\hat{d}_{i}^{t}}{\sum_{i\in I}d_{i}^{t}}\right\}\).
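For concreteness, the reallocation procedure can be sketched in Python as follows. The data layout is a hypothetical simplification (each node's demand is sent to a single station/charger-type pair, and the multi-period occupancy of quick chargers is ignored), so the sketch illustrates the priority rules rather than reproducing the exact post-processing routine.

```python
def postprocess_sp_cfl(demand, assign, chargers, dist):
    """Greedy per-period reallocation of an SP-CFL assignment.

    demand[t][i]   -- demand arising at node i in period t
    assign[i]      -- (j, k): station / charger type serving node i
    chargers[j][k] -- number of type-k chargers installed at station j
    dist[j][j2]    -- inter-station distances, with dist[j][j] == 0
    Returns (Reall%, Lost%, Max Lost%).
    """
    T, I = len(demand), len(demand[0])
    total = sum(sum(row) for row in demand)
    realloc = lost = max_lost = 0.0
    for t in range(T):
        # all chargers are assumed vacant at the start of each period
        vacant = {(j, k): n for j, row in enumerate(chargers) for k, n in enumerate(row)}
        lost_t = 0.0
        for i in range(I):
            j, k = assign[i]
            served = min(demand[t][i], vacant[(j, k)])
            vacant[(j, k)] -= served
            rest = demand[t][i] - served
            # priority: same station first (distance 0), largest vacancy first;
            # then the other stations from nearest to farthest
            for p in sorted(vacant, key=lambda q: (dist[j][q[0]], -vacant[q])):
                take = min(rest, vacant[p])
                vacant[p] -= take
                realloc += take
                rest -= take
                if rest == 0:
                    break
            lost_t += rest
        lost += lost_t
        max_lost = max(max_lost, lost_t / max(sum(demand[t]), 1))
    return 100 * realloc / total, 100 * lost / total, 100 * max_lost
```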
From Table 2 we can gain the following insights:
* the charging capacity deployed in the solutions to the SP-CFL model is significantly smaller than the capacity installed according to the MP-CFL model, both in terms of stations open and chargers installed (see statistics "Stations", "Quick", and "Fast");
* reducing the weight of the opening and installing costs (i.e., increasing the value of \(\lambda\)), the average deviation between the solutions to the two models in terms of stations open decreases steadily, from -28.93% for \(\lambda=0.0001\) to -3.25% for \(\lambda=0.9999\). Nevertheless, the average number of chargers installed (of each type) in the solutions found by the SP-CFL model is always remarkably smaller compared to those produced by the MP-CFL model;
* given a value of \(\lambda\), for both models the greater the number of demand nodes, the greater the number of stations open and chargers installed (see statistics "Stations", "Quick", and "Fast");
* the larger the value of \(\lambda\), the larger the values of "Stations", "Quick", and "Fast". This is an expected outcome, as more importance is given to the average distance term in the objective functions. In other words, the reduction of the average distance traveled can only be achieved by increasing the number of stations opened and chargers deployed;
* due to the cheaper installation cost, both models install a larger number of quick chargers than fast chargers.
Before entering into the details of the statistics computed to measure the limits of the SP-CFL model, it is worth pointing out that in every solution found by the latter model, a part of the demand was reallocated, and a part was lost. The main insights we can gain from the three rightmost columns of Table 2 are the following:
* "Reall%" takes, on average, large values. It ranges from 6.37% (see \(\lambda=0.0001\) and \(I=50\)) to 31.11% (see \(\lambda=0.9999\) and \(I=50\));
* "Lost%" takes, on average, large values as well. It ranges from 12.33% (see \(\lambda=0.9999\) and \(I=50\)) to 21.82% (see \(\lambda=0.0001\) and \(I=500\));
* "Max Lost%" takes, on average, extremely large values, always larger than 47%.
Figure 11: SP-CFL model: Box-and-whisker plots showing the distribution of the demand reallocated and lost (\(I=200\)).
The statistics confirm that the SP-CFL model is not capable of capturing the characteristics of the problem and tends to underestimate the charging capacity to deploy.
Further insights that can be obtained from Table 2 on the limits of the SP-CFL model are as follows:
* for values of \(\lambda\) smaller than or equal to 0.25, the average value of "Reall%" tends to increase with the number of demand nodes;
* for values of \(\lambda\) greater than or equal to 0.75, the average value of "Reall%" tends to decrease with the number of demand nodes;
* the larger the value of \(\lambda\), the larger the average value of "Reall%", and the smaller tends to be the value of "Lost%". This behavior can be explained by observing that, as the value of \(\lambda\) increases, the number of stations opened and chargers installed increases as well, making it easier to find vacant chargers and thereby reducing the unserved demand;
* the average value of "Max Lost%" slightly decreases as the value of \(\lambda\) increases.
The box-and-whisker plots depicted in Figure 11 show the distribution of the values of the three statistics computed to determine the reallocated demand and the unserved demand, for all the instances with \(I=200\) solved for different values of \(\lambda\). The box-and-whisker plots confirm the insights previously drawn. When the main term in the objective function of the SP-CFL model is the opening and installing cost - i.e., for small values of \(\lambda\) - the charging capacity installed is small and little can be done to reallocate the unserved demand. As a consequence, large percentages of the charging demand are unserved. By increasing the weight given to the average distance traveled - i.e., for large values of \(\lambda\) - the charging capacity installed increases. Consequently, the percentage of the demand that can be reallocated increases, and, thereby, the percentage of the demand that is unserved becomes smaller. Nevertheless, the latter percentages always remain quite large (the values are always greater than 16%, and often greater than 20%). The performance is even worse if we analyze the distribution of "Max Lost%". Its average value ranges from approximately 56% to roughly 60% (see the dotted lines inside the boxes). Similar conclusions can be drawn by observing the results obtained when solving the instances with a different number of demand nodes. For the sake of readability, such results are not reported here.
The results discussed above clearly show that the solutions found by the SP-CFL model, if implemented, would lead to a very poor quality of service provided to the EV drivers. Moreover, the results on demand reallocation imply that the objective function of SP-CFL would underestimate the total travel distance covered by the EV drivers to reach a free charging station.
The following analysis is focused only on the solutions of the MP-CFL model. Figure 12 illustrates the distributions of the values of the average distance traveled (first term in the objective function) and the opening and installing cost (second term), for all the instances with \(I=200\) solved for different values of \(\lambda\).
From Figure 12, we can draw the following main insights:
* as expected, the larger the value of \(\lambda\), the smaller the average distance traveled by an EV to reach the assigned charger;
* except for its largest value, increasing the value of \(\lambda\) produces only slight increases in the total cost.
In fact, we expected a sharper increase in the total cost when less importance is given to the second term of the objective function. On the contrary, and neglecting the extreme case with \(\lambda=0.9999\), increasing the value of \(\lambda\) allows the MP-CFL model to significantly improve the average traveling distance at the cost of only a small deterioration of the total opening and installing cost.
We conclude our analysis by considering the computational burden required to solve each MILP model. Recall that each instance is solved with a time limit of 3,600 seconds. Table 3 summarizes the computational performance of the two MILP models. For each value of \(\lambda\), the instances are clustered in groups according to the number of potential locations \(J\). For each group of instances and each model, Table 3 provides the average CPU time (in seconds) spent to find the optimal (or best) solution (columns with header _"CPU Time (secs.)"_), the average optimality gap (_"Gap%"_) and the worst optimality gap (_"Max Gap%"_).
The main insights that we can gain from Table 3 are as follows:
* the solution to the MP-CFL model is, in general, more computationally expensive compared to the SP-CFL model (see the average values of "CPU Time (secs.)", and the average values of "Gap%");
* the optimality gaps, for both models, are on average very small. In the majority of the cases, the solver found an optimal solution, or a solution very close to the optimum;
* the worst gaps, for both models, are also very small. In only a few instances, statistic "Max Gap%" took a value greater than 1%;
* as expected, for a given value of \(\lambda\), computing times for both models increase with the number of potential locations;
Figure 12: MP-CFL model: Box-and-whisker plots showing the distribution of the average distance and the cost (\(I=200\)).
* the computational burden required to solve each MILP model decreases as the value of \(\lambda\) increases.
In summary, while the MP-CFL model requires, on average, more computational time than the SP-CFL model, the additional time it needs is marginal.
## 6 Conclusions
In this paper, we studied the role of the temporal and spatial distributions of charging demand in determining an optimal location of charging stations for electric vehicles in an urban setting. This is an application context where the daily demand is well known to be very dynamic and concentrated at some peak hours, and where the demand pattern is known to depend also upon the city zone.
To highlight the need to explicitly consider the daily demand patterns, we presented a multi-
\begin{table}
\begin{tabular}{c c|c c|c c|c c} \hline & & \multicolumn{2}{c|}{**CPU Time (secs.)**} & \multicolumn{2}{c|}{**Gap\%**} & \multicolumn{2}{c}{**Max Gap\%**} \\ \(\boldsymbol{\lambda}\) & \(\boldsymbol{J}\) & MP-CFL & SP-CFL & MP-CFL & SP-CFL & MP-CFL & SP-CFL \\ \hline & 10 & 1,456.48 & 820.60 & 0.13\% & 0.09\% & 0.45\% & 0.76\% \\ & 20 & 2,775.72 & 2,404.48 & 0.61\% & 0.46\% & 1.85\% & 1.87\% \\
**0.0001** & 30 & 3,308.19 & 2,759.82 & 1.03\% & 1.08\% & 2.90\% & 3.69\% \\ & 40 & 3,515.42 & 3,142.17 & 0.94\% & 1.23\% & 2.90\% & 3.69\% \\ & 50 & 3,513.11 & 3,330.17 & 1.08\% & 1.43\% & 2.90\% & 3.69\% \\ \hline
**Average** & & **3,037.11** & **2,568.85** & **0.81\%** & **0.90\%** & & \\ \hline & 10 & 660.83 & 259.72 & 0.03\% & 0.01\% & 0.17\% & 0.11\% \\ & 20 & 2,133.02 & 922.11 & 0.12\% & 0.04\% & 0.60\% & 0.37\% \\
**0.25** & 30 & 2,923.03 & 1,320.37 & 0.31\% & 0.13\% & 0.92\% & 0.82\% \\ & 40 & 3,156.36 & 1,790.20 & 0.45\% & 0.19\% & 1.19\% & 1.07\% \\ & 50 & 3,406.24 & 2,120.10 & 0.55\% & 0.18\% & 1.77\% & 0.84\% \\ \hline
**Average** & & **2,614.44** & **1,335.04** & **0.32\%** & **0.11\%** & & \\ \hline & 10 & 263.90 & 2.43 & 0.00\% & 0.00\% & 0.00\% & 0.00\% \\ & 20 & 1,135.25 & 338.01 & 0.01\% & 0.01\% & 0.06\% & 0.17\% \\
**0.50** & 30 & 2,544.90 & 1,334.12 & 0.19\% & 0.07\% & 0.67\% & 0.66\% \\ & 40 & 2,739.82 & 2,051.00 & 0.34\% & 0.09\% & 1.06\% & 0.58\% \\ & 50 & 2,978.22 & 2,145.28 & 0.97\% & 0.10\% & 6.85\% & 0.49\% \\ \hline
**Average** & & **2,094.62** & **1,238.01** & **0.34\%** & **0.06\%** & & \\ \hline & 10 & 6.45 & 2.51 & 0.00\% & 0.00\% & 0.00\% & 0.00\% \\ & 20 & 376.49 & 1,183.04 & 0.00\% & 0.01\% & 0.02\% & 0.13\% \\
**0.75** & 30 & 1,678.43 & 1,984.19 & 0.04\% & 0.05\% & 0.41\% & 0.22\% \\ & 40 & 2,449.72 & 2,298.07 & 0.06\% & 0.07\% & 0.37\% & 0.24\% \\ & 50 & 2,926.84 & 2,312.06 & 0.40\% & 0.06\% & 4.28\% & 0.22\% \\ \hline
**Average** & & **1,648.46** & **1,629.29** & **0.11\%** & **0.04\%** & & \\ \hline & 10 & 2.80 & 1.97 & 0.00\% & 0.00\% & 0.00\% & 0.00\% \\ & 20 & 966.18 & 1,488.73 & 0.00\% & 0.00\% & 0.00\% & 0.00\% \\
**0.9999** & 30 & 1,575.65 & 2,581.28 & 0.00\% & 0.00\% & 0.00\% & 0.00\% \\ & 40 & 1,864.13 & 3,243.86 & 0.00\% & 0.00\% & 0.00\% & 0.00\% \\ & 50 & 2,569.15 & 2,957.01 & 0.00\% & 0.00\% & 0.00\% & 0.00\% \\ \hline
**Average** & & **1,528.78** & **2,152.78** & **0.00\%** & **0.00\%** & & \\ \hline \end{tabular}
\end{table}
Table 3: MP-CFL vs. SP-CFL models: A summary of CPU times and optimality gaps.
period optimization model that captures the variability of the demand over time, and compared it with a single-period optimization model. By means of a worst-case analysis, we theoretically proved that the single-period model may produce solutions where a large portion of the demand cannot be served. Extensive computational experiments confirm the limits of the single-period model. The goal of the optimization models is to balance two objectives: the total cost of deploying the infrastructure and the average distance traveled by the customers to reach a charging station.
The limits pointed out for the single-period model go beyond the specific application considered in this paper, and suggest the importance of incorporating time-dependency in location decisions when the demand fluctuations are remarkable during the planning horizon.
Future developments of the research may concern the objective function. Since the two objectives considered are not homogeneous, a thorough analysis of the trade-off between the infrastructure cost and the drivers' traveling distance may be of interest to a decision-maker. Moreover, we expect that replacing the average traveling distance with some equity measure may produce solutions that are more satisfactory for the customers, at the cost of a small increase in the infrastructure cost. Finally, to solve larger instances, in particular when an equity measure is considered, a heuristic approach would be worth studying.
### Acknowledgements
This study has been made in the framework of the MoSoRe@UniBS (Infrastrutture e servizi per la Mobilita Sostenibile e Resiliente) Project 2020-2022 of Lombardy Region, Italy (Call-Hub ID 1180965; bit.ly/2Xh2Nfr, [https://ricerca2.unibs.it/?page_id=8548](https://ricerca2.unibs.it/?page_id=8548)).
|
2305.17184 | Monte Carlo Radiation Transport for Astrophysical Transients Powered by
Circumstellar Interaction | In this paper, we introduce \texttt{SuperLite}, an open-source Monte Carlo
radiation transport code designed to produce synthetic spectra for
astrophysical transient phenomena affected by circumstellar interaction.
\texttt{SuperLite} utilizes Monte Carlo methods for semi-implicit,
semi-relativistic radiation transport in high-velocity shocked outflows,
employing multi-group structured opacity calculations. The code enables rapid
post-processing of hydrodynamic profiles to generate high-quality spectra that
can be compared with observations of transient events, including superluminous
supernovae, pulsational pair-instability supernovae, and other peculiar
transients. We present the methods employed in \texttt{SuperLite} and compare
the code's performance to that of other radiative transport codes, such as
\texttt{SuperNu} and CMFGEN. We show that \texttt{SuperLite} has successfully
passed standard Monte Carlo radiation transport tests and can reproduce spectra
of typical supernovae of Type Ia, Type IIP and Type IIn. | Gururaj A. Wagle, Emmanouil Chatzopoulos, Ryan Wollaeger, Christopher J. Fontes | 2023-05-26T18:17:53Z | http://arxiv.org/abs/2305.17184v1 | # Monte Carlo Radiation Transport for Astrophysical Transients Powered by Circumstellar Interaction
###### Abstract
In this paper, we introduce SuperLite, an open-source Monte Carlo radiation transport code designed to produce synthetic spectra for astrophysical transient phenomena affected by circumstellar interaction. SuperLite utilizes Monte Carlo methods for semi-implicit, semi-relativistic radiation transport in high-velocity shocked outflows, employing multi-group structured opacity calculations. The code enables rapid post-processing of hydrodynamic profiles to generate high-quality spectra that can be compared with observations of transient events, including superluminous supernovae, pulsational pair-instability supernovae, and other peculiar transients. We present the methods employed in SuperLite and compare the code's performance to that of other radiative transport codes, such as SuperNu and CMFGEN. We show that SuperLite has successfully passed standard Monte Carlo radiation transport tests and can reproduce spectra of typical supernovae of Type Ia, Type IIP and Type IIn.
methods: numerical -- radiative transfer -- supernovae: general -- stars: evolution -- circumstellar matter +
Footnote †: journal: APJ
0000-0002-4002-3882]Gururraj Wagle
0000-0002-3880-0888]Emmanouil Chatzopoulos
0000-0002-4882-7880]Ryan Wollaeger
0000-0002-0780-0888]Christopher J. Fontes
## 1 Introduction
Classical supernovae (SNe) are broadly divided into Type I, Type II and their sub-types based on their observed light curves and spectra following the explosion (see Filippenko, 1997, for a review of the SN classification system). Unambiguous identification of the progenitor star is not possible for most of these events due to observational limitations. However, significant progress has been made during the past several decades in understanding the mechanisms that drive the formation of these explosive events through theoretical modeling and the supporting observational evidence (see, e.g., Branch & Wheeler, 2017; Burrows & Vartanyan, 2021; Hillebrandt et al., 2013). In the case of Type Ia supernovae (SNe Ia), the pre-explosion progenitor star has not been successfully observed in the archival images, with only one exception in the case of the spectroscopically abnormal SN 2012Z (Wright et al., 2016). The consensus about the progenitor of a SN Ia being a white dwarf is based on the observations of early light curves that imply a compact star. The other widely accepted theory is that the progenitor white dwarf is part of a binary system, and it either accretes mass from its non-degenerate companion or merges with a white dwarf companion, which leads to the explosive core-carbon burning that results in a SN Ia. However, the evidence for the nature of such a companion star is also not well established through observations (Branch & Wheeler, 2017). There are several theories proposed to explain the explosion mechanism, ranging from pure detonation to pure deflagration to deflagration-to-detonation models (Hillebrandt et al., 2013).
Even in the case of more commonly observed core-collapse supernovae (CCSNe or SNe Type II), the pre-explosion archival observations are limited to nearby events (\(d\leq 30\) Mpc, Smartt, 2009; Van Dyk et al.
2023). The SNe II result from the gravitational collapse of the core of massive stars (M \(>\) 8M\({}_{\odot}\) on the zero-age main-sequence, ZAMS, at solar metallicity, Woosley et al., 2002). Bethe and Wilson (1985) proposed a _delayed neutrino heating_ model in competition with the then-favored _core bounce-shock_ model of Colgate and White (1966) to explain the CCSN mechanism. The neutrino-driven model of Bethe and Wilson was observationally confirmed when SN 1987A, identified as a SN II, exploded in the Large Magellanic Cloud (LMC) at a distance of less than 50 kpc (Arnett et al., 1989). The Kamiokande-II (Hirata et al., 1987) and Irvine-Michigan-Brookhaven (Bionta et al., 1987) detectors and the Baksan Scintillator Telescope (Alexeyev et al., 1988) observed the neutrino flux from the location of SN 1987A, which agreed with that predicted by the theory. Similar to SNe II, the theory also predicts that a flux of neutrinos can be observed from SNe Ia; however, it is about four orders of magnitude smaller than for SNe II. Theoretical calculations done for the deflagration-to-detonation model show that a SN Ia at a distance of about 10 kpc will be barely observable by the largest current and next-generation neutrino detectors (Wright et al., 2016). Thus, to make such an observation, a SN Ia has to explode at a close distance.
In spite of the advancements in our knowledge over the past several decades, there still exists a gap in our understanding of the characteristics of progenitor stars and of the differences in the observed characteristics of SNe of the same type that result from these stars. For example, the archival observations of the pre-explosion sites of CCSNe indicate that the progenitor stars of SNe II are red supergiant stars (RSGs) that retained large hydrogen envelopes at the time of their explosion (Smartt et al., 2009; Smartt, 2015). On the other hand, the pre-explosion archival images indicate that SN 1987A resulted from the explosion of a blue supergiant star (BSG) that retained a substantial amount of hydrogen in its envelope (Arnett et al., 1989). Arnett (1991) originally proposed a model with a ZAMS mass of \(\sim\)20 M\({}_{\odot}\), metallicity one-fourth that of the Sun, and no mass-loss to explain a BSG progenitor star. Woosley et al. (1988) and Woosley (1988) showed that the same model star with no convective overshoot or semi-convection remains a RSG through helium burning and later becomes a BSG to explode as a SN II. Alternatively, the more recent binary merger models explain most of the observed properties of the progenitor of SN 1987A (Menon and Heger, 2017). This demonstrates that the properties of the progenitor stars and the characteristics of the resulting SN depend greatly upon the choices of the initial conditions on the mass, metallicity and chemical composition, internal mixing, stellar rotation, mass loss in stellar winds, binarity, etc. (some of these properties are explored in the four-article series, Wagle et al., 2019; Wagle and Ray, 2020; Wagle et al., 2020; Palani Balaji et al., 2022).
In some cases of SN explosions, there exist observed properties that are evidently the result of an interaction between the supernova outflow and the circumstellar material (CSM) that surrounds the progenitor star, formed by mass-loss prior to its explosion. In the absence of such interactions, the light curves are primarily powered by the initial shock energy deposited at the time of explosion and by the radioactive decay of \({}^{56}\)Ni or \({}^{56}\)Co. However, in the presence of strong SN ejecta-circumstellar material interaction (hereafter, CSI), additional SN luminosity is displayed, ranging from the ultra-violet (UV) or X-ray emission observed during the early phases to the radio emission observed during the late times. The nature of this emission depends upon the mass and location of the CSM and the properties and rate of expansion of the supernova ejecta, which in turn depend on the structure and evolution of the progenitor star. The CSI gives rise to a strong and fast shock wave in the CSM and a reverse shock in the SN ejecta. In such situations, the radiation leaks through the shocked region as the time-scale of photon diffusion is much shorter than the shock-crossing time, and the assumptions of local thermodynamic equilibrium (LTE) are no longer valid. The inverse-Compton scattering of photons by the fast electrons in the shocked CSM produces the UV and X-ray emission. In addition, the free-free radiation from both the forward- and reverse-shocked regions produces X-ray emission. The atoms in the high-density _cool dense shell_ (CDS) formed by the compressed shocked ejecta recombine to form the narrow-width emission lines observed in SNe of Type IIn ("n" stands for narrow), especially the Balmer series line H\(\alpha\). The wings of these emission lines can be broadened by multiple electron scattering. (See Branch and Wheeler, 2017; Chevalier and Fransson, 1994; Chevalier et al., 2006; Chevalier and Fransson, 2017; Dessart et al., 2016; Dessart and Hillier, 2022, for further reading.)
In addition to classical SNe IIn, a new class of SNe that might indicate the presence of strong CSI has been discovered over the last two decades (Quimby et al., 2011; Quimby, 2012; Gal-Yam, 2012). These SNe exhibit luminosities an order of magnitude higher than those of their classical counterparts. These aptly named superluminous supernovae (SLSNe) have observed properties, such as light curves (LCs) and spectra, that cannot be solely attributed to any of the standard explosion mechanisms outlined above. The SLSNe are broadly classified into two classes - SLSN-I (hydrogen-poor) and
SLSN-II (hydrogen-rich) - to which most of the observed SLSNe belong. There is also a class of radioactively powered SLSNe (SLSN-R), which are less common, but better understood (Gal-Yam, 2012). SN 2007bi is the first well-observed SLSN-R. The enormous luminosity of this SLSN implies a large amount of radioactive nickel (\(>3\)M\({}_{\odot}\), Gal-Yam et al., 2009), as expected from a full-fledged pair-instability supernova (PISN) model for very massive stars with initial mass well in excess of 100 M\({}_{\odot}\)(Rakavy & Shaviv, 1967; El Eid et al., 1983; Kasen et al., 2011; Chatzopoulos & Wheeler, 2012). Among the plausible scenarios, one mechanism that could explain the observed luminosities of these SLSNe is CSI (Moriya et al., 2018). The evidence of CSI has been observed in the case of the luminous SN 2017hcc, which exhibited narrow emission lines in its spectra like a classical SN IIn. The late-time multi-wavelength observations of this SN show evidence of CSI (Chandra et al., 2022). In rare cases, luminous SNe are observed to have SN Ia-like features before exhibiting SN IIn trends, which could be attributed to a WD exploding into dense CSM (e.g., SN 2018evt, Yang et al., 2023).
Understanding the nature of the episodic mass-loss associated with SNe and SLSNe is a challenge. Stars lose mass throughout their lives in the form of radiation-driven winds from their surface (de Jager et al., 1988; Vink et al., 2001). The rate at which the mass is lost depends on the mass and luminosity of the star. The mass-loss is relatively unimportant in the pre-SN evolution of the low- and intermediate-mass stars (\(\leq\)10 M\({}_{\odot}\)) until the final stages of evolution. However, in higher-mass stars (\(>\)20 M\({}_{\odot}\)), a significant amount of the stellar mass can be taken away by the mass lost during stellar eruptions throughout their life. Therefore, the mass-loss is important for massive stars in determining the type of the resulting SN explosion. The CSM created by the mass-loss can come from both radiation-driven winds and episodic eruptions (Smith, 2014). Mass-loss rates in the range of 10\({}^{-4}\) to several M\({}_{\odot}\) yr\({}^{-1}\), resulting in total CSM masses of \(\sim\)0.1 to tens of M\({}_{\odot}\), have been inferred from observations (Branch & Wheeler, 2017). The late-phase nuclear burning (especially Ne/O core-burning) in massive stars can lead to convectively-driven hydrodynamic waves (g-modes). Gravity waves and acoustic modes are excited at the interface of convective and radiative zones. These waves deposit super-Eddington heat near the surface, which can drive an appreciable mass-loss under certain conditions (Quataert & Shiode, 2012; Fuller, 2017; Fuller & Ro, 2018). Other mechanisms that can drive high mass loss involve binary interactions leading to common envelope ejection and binary mergers (Bodenheimer & Taam, 1984; Taam & Bodenheimer, 1989, 1991).
There are several existing and upcoming astrophysical transient search projects, such as the James Webb Space Telescope (JWST) and the Wide Field Infrared Survey Telescope (WFIRST), that will most likely capture primordial transient events in the early universe, including the SN explosions of the first stars. In addition, facilities such as the Zwicky Transient Facility (ZTF, Bellm et al., 2019) and the Vera Rubin Observatory (formerly known as LSST; Large Synoptic Survey Telescope, Ivezic et al., 2008) will be discovering up to a million new transient events every night.
The prevalence of CSI in many astrophysical transients necessitates a radiation transport (RT) code that can simulate the emitted spectrum over the life of the transient under a variety of initial progenitor configurations. While a few codes that include the necessary assumptions to tackle the CSI regime exist, most of them are not fully accessible to the public and lack several components of physics that are important in strongly-shocked explosive outflows. In this paper, we introduce SuperLite, an open-source Monte Carlo Radiation Transport (MCRT) code that can be used to model interacting SN and transient spectra for easy comparison to observations.
In section 2, we summarize the capabilities of other numerical frameworks that have been used to model interacting SN (Type IIn) spectra. In section 3, we describe the numerical methods adopted in SuperLite and the modifications made to its parent code SuperNu to enable it to post-process non-homologous SN ejecta. In section 4, we present results to verify that the code works as expected in the standard test case scenarios. In section 5, we present the spectra produced by SuperLite and compare them with other codes for several different SN types, including homologous and non-homologous velocity profiles. Finally, in section 6, we discuss the results of the SuperLite code development along with the upcoming enhancements to the code.
## 2 Review of SN Radiation Transport Codes
There are two distinct approaches to solving the equations of radiation transport - the deterministic approach and the Monte Carlo approach. Both approaches have their advantages and disadvantages (see Castor, 2004, for further reading). In the deterministic approach, a full or partial set of radiative transport equations is solved by discretizing them into a coupled system of algebraic equations. These methods can prove to be computationally very inefficient for large systems and have limited parallel scalability (Abdikamalov et al., 2012).
In the Monte-Carlo (MC) methods, the radiation transport equations are not solved directly as is done with the deterministic methods. Instead, the trajectories of a number of packets of particles such as photons or neutrinos (referred to as MC particles or MC packets or MCP) are calculated stochastically using probability distribution functions (PDF) and pseudo-random numbers. Each MC particle represents a number of physical particles. Therefore, the number of MC particles required for calculations is much less than the number of physical particles they represent. The individual MC particles propagate seemingly randomly, but the ensemble of MC particles can provide an accurate representation of radiative transport process and the evolution of the radiation field. The biggest disadvantage of using MC methods is the stochastic fluctuations in the results of the MC calculations, due to its probabilistic nature. The MC methods suffer from MC noise, which roughly scales with the number of MC particles as \(N^{-1/2}\)(Abdikamalov et al., 2012). Thus one must use a large number of MC particles for better statistics, and hence, a better signal-to-noise ratio (SNR).
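The \(N^{-1/2}\) scaling is easy to demonstrate with a generic MC estimator; the short NumPy sketch below (a hit-or-miss estimate of \(\pi/4\), unrelated to any specific transport code) shows the absolute error tracking \(N^{-1/2}\) as the sample count grows.

```python
import numpy as np

rng = np.random.default_rng(42)
exact = np.pi / 4.0                    # fraction of the unit square inside the quarter disc
for n in (10**3, 10**4, 10**5, 10**6):
    pts = rng.random((n, 2))           # n uniform points in [0, 1)^2
    est = np.mean(pts[:, 0]**2 + pts[:, 1]**2 <= 1.0)
    print(f"N = {n:>7d}   |error| = {abs(est - exact):.5f}   N^(-1/2) = {n**-0.5:.5f}")
```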
Nevertheless, the MC particles can be decoupled and propagated independently, similar to the physical photons they represent. The number of MC particles is constrained by the computer memory and processing power, but with modern high-performance computing systems this problem can be easily overcome by parallelizing the code. The computational cost increases further in regions of high optical thickness. Special techniques need to be applied in the diffusion regime to overcome this issue (see Noebauer & Sim, 2019, for a review of advanced MC techniques). A significant advantage of MC methods is that the scattering processes are easy to implement, compared to the deterministic methods discussed above. The propagation direction of the scattered particle can be chosen from a PDF constructed from the scattering kernels (Abdikamalov et al., 2012). In addition, MC methods are less susceptible to numerical errors and are easy to extend to multidimensional time-dependent problems (Noebauer & Sim, 2019; Kasen et al., 2006). Therefore, the MC methods can be more advantageous for large and complicated systems, such as explosive and interacting outflows.
The numerical frameworks that simulate the evolution or the radiation properties of the gas ejected during a SN explosion rely on several simplifying approximations; the most common being the assumption of homologous expansion of the ejected material (\(v\propto r\)), the LTE conditions, and the Sobolev approximation for line opacity (Sobolev, 1960). (These approximations are discussed further in section 3). For more common SN explosion types, such as SN Ia and SN IIP, these approximations yield LCs and spectra that are in good agreement with observations (Branch & Wheeler, 2017). In the case of the CSI, most of these assumptions break down; more specifically that of homologous expansion of the ejecta and LTE. The Sobolev approximation also breaks down as a result of the non-homologous expansion of the shocked SN ejecta and CSM, since the line interaction surfaces are not parallel planes anymore (Rybicki & Hummer, 1978). To reliably produce the outcome in such cases, the hydrodynamic step has to be solved and the quantities need to be evaluated at each time step before the transport step is performed (Roth & Kasen, 2015; Noebauer & Sim, 2019).
There are codes that use numerical approaches to solve for the time-dependent non-equilibrium radiation-hydrodynamic (RHD) evolution of the SN ejecta, e.g., STELLA (Blinnikov & Bartunov, 1993; Blinnikov et al., 1998; Blinnikov & Sorokina, 2004; Blinnikov et al., 2006), which uses a radiation intensity moments scheme, or The SuperNova Explosion Code (SNEC, Morozova et al., 2015), which uses the flux-limited diffusion approximation. Both SNEC and STELLA are 1-D, multi-group RHD codes that evolve the radiation field to predict the bolometric SN light curve. They calculate the ionization and excitation level populations of a limited number of species in LTE conditions. SNEC imposes the same radiation and matter temperatures, while STELLA does not treat radiation in equilibrium with matter. Codes such as SEDONA (Kasen et al., 2006) use MCRT methods to produce SN light curves, spectra, and polarization. SEDONA is a multidimensional, multi-frequency code that uses the expansion opacity approximation. It also assumes that the ejecta are in homologous expansion. Roth & Kasen (2015) have coupled the MCRT code to 1-D, non-relativistic RHD solvers. These codes rely on simplifications, such as a single grey opacity or a limited number of groups, to speed up the calculations. They are useful in providing the general emission properties and model light curves, but they cannot predict a resolved spectrum that includes the effects of line emissivity and opacity. Some of the other popular publicly available RHD codes include HERACLES(Gonzalez et al., 2007), FLASH (Dubey et al., 2012; Fryxell et al., 2000), and CASTRO(Zhang et al., 2011, 2013).
There exist codes that predict a synthetic spectrum by solving the radiation transfer equations, such as CMFGEN(Hillier & Miller, 1998; Hillier & Dessart, 2012) and SYN++(Thomas et al., 2011), or by using MC techniques, as in the case of TARDIS(Kerzendorf & Sim, 2014), SuperNu(Wollaeger et al., 2013; Wollaeger & van Rossum, 2014), and PHOENIX(Hauschildt, 1992; Hauschildt & Baron,
1999, 2004; van Rossum, 2012). TARDIS and CMFGEN are 1-D codes that post-process the hydrodynamic profiles, and they are computationally inexpensive compared to the RHD codes that involve time-dependent calculations. SYN++ is also a 1-D code; it is a modern C++ version of the parametrized spectral synthesis code SYNOW(_Synthesis Now_, Branch et al., 2009; Fisher, 2000) that can be used for rapid analysis of SN spectra. SuperNu is a time-evolution code that advances the radiation field in each iteration. TARDIS, SYN++, and SuperNu all assume homologous expansion of the ejecta while implementing the radiation transport and for opacity and Doppler-shift calculations. A comparison of RT codes, performed by a collaboration of ten groups around the world that develop existing RT codes, is discussed in great detail by Blondin et al. (comprehensive supernova radiative-transfer code-comparison initiative; StaNdaRT, 2022). The TARDIS code assumes that the effective photosphere is external to the volume in which a majority of the luminosity is generated, which only holds true for early epochs of SNe Ia (Kerzendorf & Sim, 2014).
Table 1 summarizes the capabilities of some of the most frequently cited codes that have been used to model SN spectra and of our new, open-source code SuperLite that we introduce in this paper.
## 3 Numerical Methods
SuperLite is a multigroup radiative transport code that uses Implicit Monte Carlo (IMC, Fleck & Cummings, 1971) and discrete diffusion Monte Carlo (DDMC, Densmore et al., 2007, 2008, 2012) methods to model radiation transport processes in explosive and interacting outflows. IMC solves the radiation transport equations semi-implicitly by treating absorption and emission as instantaneous effective scattering, as explained in Fleck & Cummings (1971). The DDMC method is used to optimize IMC where the local cell optical depth is higher than a user-specified value, speeding up the calculations by replacing several low mean-free-path scattering events with single diffusion events (see, for instance, Abdikamalov et al., 2012). SuperLite is developed by significantly modifying the "parent" SuperNu code and by relaxing some of its original assumptions that do not hold for outflows affected by shocked regions due to CSI. SuperLite is a time-independent code that uses an Eulerian grid for the MCRT simulation. The MC particle properties are mapped into the lab frame and advanced by the lab-frame form of Equation (1) (see, for instance, Wollaeger et al., 2013, equation 1). The co-moving frame energy of the particle is conserved during each interaction. The opacity calculations are performed in the co-moving frame, by discretizing the opacity into groups via direct integration over co-moving wavelengths. As input to a SuperLite simulation, the spatial coordinates and the velocity, temperature, and mass/density profiles at any given time since the SN explosion are derived from a hydrodynamic or RHD simulation performed with an external code (such as STELLA). The outflow structures obtained from RHD simulations have the benefit of possessing implicit time-dependence in the temperature and material profiles. The SuperLite simulation itself is done in steady state and is therefore time-independent. SuperLite assumes that the information about the radiation field is fully taken into account by the radiation temperature structure predicted by the RHD profiles that are used as input. Hence, the radiation energy deposited into the spatial grid cells due to the radioactive decays of \({}^{56}\)Ni or \({}^{56}\)Co is not re-added. In effect, SuperLite is a two-temperature code. The radiation temperature, \(T_{r}\), is assumed to be equal to the electron temperature, \(T_{e}\), if \(T_{r}\) is not known a priori.
### Nonhomologous expansion
In the absence of CSI, the SN ejecta outflows are in a state of near free-expansion after the initial shock-breakout phase. Hence, at any given time after the explosion, the radial distance of a particular layer of the ejecta from the center of the expansion can be found if the velocity profile at any other time during the expansion is known. In other words, the radial distance of a layer of SN ejecta at any given time \(t\) since the explosion, \(r(t)\), is given by the simple relationship \(r(t)=vt\). This relationship also simplifies other quantities: the divergence of the velocity at time \(t\) becomes \(\vec{\nabla}\cdot\vec{v}(t)=3/t\), and the density decreases with time as \(\rho(t)\propto t^{-3}\). This is the standard assumption of homology, which is hard-wired into many SN spectral synthesis codes, as discussed in section 2. In the case of shocked ejecta due to CSI, the assumption of homologous expansion fails, as the velocity profile is nonhomologous in the entire region between the forward and reverse shocks. In SuperLite, the equations of radiation transport are used in their original form without assuming homologous expansion, as discussed in sections 3.2 and 3.4.
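As a quick check of the relations quoted above, for a spherically symmetric homologous flow \(v(r,t)=r/t\),

\[\vec{\nabla}\cdot\vec{v}=\frac{1}{r^{2}}\frac{\partial}{\partial r}\left(r^{2}\,\frac{r}{t}\right)=\frac{3}{t}\,,\qquad\frac{D\rho}{Dt}=-\rho\,\vec{\nabla}\cdot\vec{v}=-\frac{3\rho}{t}\quad\Rightarrow\quad\rho(t)\propto t^{-3}\,,\]

where the second relation follows from the continuity equation. Both shortcuts are lost once a shock renders the velocity profile nonhomologous.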
### Steady-state approximation
The Implicit Monte Carlo (IMC) equation in comoving frame, with assumptions for semi-relativistic simplifications from Castor (2004); Wollaeger et al. (2013), is written as:
\[\frac{1}{c}\frac{DI_{\nu}}{Dt}+\hat{\Omega}\cdot\nabla I_{\nu}-\frac{ \hat{\Omega}\cdot\nabla\vec{v}\cdot\hat{\Omega}}{c}\nu\frac{\partial I_{\nu}}{ \partial\nu}+\frac{3\hat{\Omega}\cdot\nabla\vec{v}\cdot\hat{\Omega}}{c}I_{\nu}-\] \[\frac{1}{c}\hat{\Omega}\cdot\nabla\vec{v}\cdot\left(\mathbf{I}- \hat{\Omega}\hat{\Omega}\right)\cdot\nabla_{\hat{\Omega}}I_{\nu}+\sigma I_{\nu}\] \[=\frac{1}{4\pi}\frac{\sigma_{e}b}{\sigma_{P,e}}\left(1-f\right) \int\int\sigma^{\prime}I^{\prime}_{\nu}d\nu^{\prime}d\Omega^{\prime}+\frac{1}{4 \pi}f\sigma bacT^{4}\] \[+\frac{1}{4\pi}\frac{\sigma_{e}b}{\sigma_{P,e}}(1-f)\hat{Q}\,, \tag{1}\]
where \(c\) is the speed of light, \(a\) is the radiation constant, \(t\) is time, \(I_{\nu}\) is radiation intensity, \(\hat{\Omega}\) is photon direction, \(\nu\) is photon frequency, \(\sigma\) is opacity (assumed to be only absorption here), \(\sigma_{e}\) is the emission opacity, \(\sigma_{P,e}\) is the Planck mean emission opacity, \(\hat{Q}\) is a source term (for instance, a radioactive decay energy rate), \(b\) is the normalized Planck function given by
\[b(\nu)=\frac{15}{\pi^{4}}\frac{h}{kT}\frac{(h\nu/kT)^{3}}{e^{h\nu/kT}-1}, \tag{2}\]
where \(T\) is the temperature, and \(f\) is the Fleck factor (Fleck & Cummings, 1971), given by
\[f=\frac{1}{1+4aT^{3}c\sigma_{P,e}\Delta t/C_{V}}, \tag{3}\]
with \(C_{V}\) being the material heat capacity at constant volume. Here \(\Delta t\) is the physical time-step size, which can be arbitrarily chosen as noted by Kerzendorf & Sim (2014). The terms on the left-hand side of equation (1) from left to right are the Lagrangian time derivative, spatial streaming operator, Doppler shift, adiabatic effect, directional aberration, and intensity attenuation from absorption, respectively (see, for instance Castor, 2004). The terms on the right-hand side are the effective scattering source (absorption instantly followed by emission), thermal emission (reduced by the amount treated with effective scattering), and any sources of energy in ejecta (again, radioactive energy), respectively (Fleck & Cummings, 1971).
In the steady state approximation, we can modify equation (1) by neglecting the local time derivative and setting \(\Delta t\rightarrow\infty\), while preserving \(O(v/c)\) effects. For infinite time-step size,
\[\lim_{\Delta t\rightarrow\infty}f=0. \tag{4}\]
Under these assumptions, equation (1) becomes
\begin{table}
\begin{tabular}{l c c c c c c c c} \hline \hline \multicolumn{1}{c}{ Code} & RT & post- & Homologous & Ionization \& & Line & Geometry & Parallel & Open \\ & Method & processing & Expansion & Excitation & Opacity, \(\kappa_{\nu}\) & & Processing & Source \\ \multicolumn{1}{c}{(1)} & (2) & (3) & (4) & (5) & (6) & (7) & (8) & (9) \\ \hline SuperLite & MC & Yes & No & LTE(T\({}_{e}\)) & CMF \(\kappa_{\nu}\) & 1-D & MPI/OpenMP & Yes \\ SuperNu & MC & No & Yes & LTE(T\({}_{e}\)) & CMF \(\kappa_{\nu}\) & multi-D & MPI/OpenMP & Yes \\ CMFGEN & RTE - CMF & Yes & No & _dn/dt_ & CMF \(\kappa_{\nu}\) & 1-D & – & Partly \\ TARDIS & MC & Yes & Yes & dilute-LTE(T\({}_{r}\)) & Sobolev & 1-D & – & Partly \\ SEDONA & MC & No & Yes & LTE(T\({}_{e}\)) & Expansion & multi-D & MPI/OpenMP & Partly \\ STELLA & RH-MG & No & No & LTE(T\({}_{e}\)) & Expansion & 1-D & – & Partly \\ SYN++ & RTE - CMF & Yes & Yes & LTE(T\({}_{x}\)) & Sobolev & 1-D & MPI/OpenMP & Yes \\ \hline \end{tabular} Note. – Column headings: (1) Name of the code. (2) The numerical method used to solve the radiative transfer equation – MC: Monte Carlo, RTE-CMF: Radiation Transfer equation, Co-moving Frame, RH-MG: multi-group radiation hydrodynamics. (3) If the code post-processes the profiles or takes time-steps and evolves the profiles. (4) Whether or not the ejecta are assumed to be in homologous expansion (\(r=vt\)) in the code. (5) The method used to calculate ionization and excitation level populations. In LTE treatment, the Boltzmann excitation formula is used for excitation level populations and the Saha-Boltzmann equation is used for ionization stage populations, setting the temperature to either the electron temperature (T\({}_{e}\)) or the radiation temperature (T\({}_{r}\)). SYN++ uses a user-specified temperature for local thermodynamic equilibrium (LTE). TARDIS uses a dilute-LTE treatment as an approximation to non-LTE (NLTE) by using a dilution factor \(W\) to scale the excitation level population. In NLTE, the rate equations are solved using the time-dependence of level populations (\(dn/dt\)). In a steady-state case, \(dn/dt\)=0. Currently, we are testing the NLTE for bound-bound opacity for hydrogen lines in SuperLite, which will be part of our future publications. (6) Line opacity calculations can be performed line by line explicitly in the co-moving frame (CMF) or by using the Sobolev approximation or by using an approximate frequency-dependent ‘expansion’ opacity. (7) Available geometry. The 2-D and 3-D geometries have not been completely implemented and tested for SuperLite; hence, they are not part of the current version. However, the code is easily extendible to higher dimensions, and it will be part of our future publications. (8) The optional Message-Passing Interface (MPI) and OpenMP framework for parallel processing to speed up the calculations. (9) All of the codes presented here have a public version available, but some codes do not include all of the methods and physics implemented in the code in their public versions. For example, the nonhomology and/or the NLTE applications for SEDONA, TARDIS, and CMFGEN are present in the literature, but the code is not publicly available in its entirety.
\end{table}
Table 1: Physics and approximations in some commonly used SN Radiation Transport codes.
\[\frac{\vec{v}}{c}\cdot\nabla I_{\nu}+\hat{\Omega}\cdot\nabla I_{\nu}- \frac{\hat{\Omega}\cdot\nabla\vec{v}\cdot\hat{\Omega}}{c}\nu\frac{\partial I_{ \nu}}{\partial\nu}+\frac{3\hat{\Omega}\cdot\nabla\vec{v}\cdot\hat{\Omega}}{c}I_ {\nu}-\] \[\frac{1}{c}\hat{\Omega}\cdot\nabla\vec{v}\cdot\left(\mathbf{I}- \hat{\Omega}\hat{\Omega}\right)\cdot\nabla_{\hat{\Omega}}I_{\nu}+\sigma I_{\nu}\] \[=\frac{1}{4\pi}\frac{\sigma_{e}b}{\sigma_{P,e}}\int\int\sigma^{ \prime}I^{\prime}_{\nu}d\nu^{\prime}d\Omega^{\prime}+\frac{1}{4\pi}\frac{ \sigma_{e}b}{\sigma_{P,e}}\hat{Q}\quad. \tag{5}\]
The right-hand side of equation (5) implies that all photon collisions result in effective scattering, i.e., no MC packets are absorbed, hence conserving the co-moving frame energy of the packets as mentioned before. Each scattered particle is emitted with a new propagation direction. Effectively, all MC particles escape the domain as flux particles. For each particle that enters the inner boundary of the domain, a new particle with equal energy is generated with a random outward propagation direction at the inner boundary to conserve co-moving energy. For IMC, with these additional assumptions, the treatment of most of the terms remains the same; particle Lorentz transforms take care of \(O(v/c)\) terms in the equation. The updated DDMC treatment is discussed in section 3.4.
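A small numerical illustration of the steady-state limit in equations (3)-(4): the sketch below evaluates the Fleck factor for growing time-step sizes under made-up cgs conditions (the temperature, Planck-mean emission opacity, and heat capacity are placeholders, not SuperLite values), showing \(f\to 0\) as \(\Delta t\to\infty\).

```python
def fleck_factor(T, sigma_p, dt, c_v, a_rad=7.5657e-15, c=2.9979e10):
    """Fleck factor f = 1 / (1 + 4 a T^3 c sigma_P dt / C_V), eq. (3), in cgs units."""
    return 1.0 / (1.0 + 4.0 * a_rad * T**3 * c * sigma_p * dt / c_v)

# Illustrative conditions: T = 1e4 K, sigma_P = 0.1 cm^-1, C_V = 1e8 erg cm^-3 K^-1.
for dt in (1e-2, 1e0, 1e2, 1e4, 1e8):
    print(f"dt = {dt:8.0e} s  ->  f = {fleck_factor(1e4, 0.1, dt, 1e8):.3e}")
```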
### Sourcing of MC particles
The MC particles are sourced in each spatial cell according to the amount of radiation energy contained within the cell, based on the input profile. As mentioned earlier, it is assumed that the energy deposition by radioactive decay has already been taken into account in the radiation field by the _hydrodynamics_ code from which the profile is imported. Thus, the radioactive decay term is not separately added to the energy source for the particles.
In the TARDIS code (Kerzendorf & Sim, 2014, their Equation 10), particles are sampled on an inner surface with radius \(r_{i}\) and temperature \(T_{i}\), and the energy per particle, for \(N\) particles, is
\[E=\frac{4\pi r_{i}^{2}acT_{i}^{4}}{4N}\Delta t. \tag{6}\]
Assuming the radiation temperature is known, similar to equation (6), particles can be sourced in each spatial cell with energy \(E_{j}\) at a surface with cell-centered radius \(r_{j}\) given as,
\[E_{j}=\frac{4\pi r_{j}^{2}aT_{r,j}^{4}V_{j}}{4N_{j}}\tilde{\Delta}t\,, \tag{7}\]
where \(T_{r,j}\) is the radiation temperature in cell \(j\). Here, we have used \(\tilde{\Delta}t\) to indicate an individual particle's total propagation time, to distinguish it from the simulation time-step size \(\Delta t\), which is taken to be infinite. The bolometric luminosity is computed as:
\[L=\frac{1}{\tilde{\Delta}t}\sum_{p}^{N_{p}}E_{p}, \tag{8}\]
where \(p\) is an index for escaped particles, \(N_{p}\) is the total number of escaped particles, and \(E_{p}\) is the energy weight at escape. Initializing particle weights with equation (7) and using the particle initialization sampling procedures described for each cell "surface", the result for \(L\) is independent of \(\tilde{\Delta}t\). However, this is only true under the assumption that particle propagation is independent of \(\tilde{\Delta}t\) as well. To make SuperLite truly time-independent, we use the "luminosity" weights \(L_{j}=E_{j}/\tilde{\Delta}t\) instead of equation (7) to instantiate the MC particles in each cell. This effectively eliminates the choice of an arbitrary time-step size, which should not affect the results of the simulation. To ensure that the total output luminosity is equal to the user-input bolometric luminosity, the sum of the luminosity weights of all of the MC particles in all of the spatial cells is set to be equal to the user-input bolometric luminosity. As there is no loss of energy, the total output luminosity predicted by SuperLite remains unchanged.
The initial frequency of the particles is sampled with the normalized blackbody distribution given by equation (2). The particles are evenly distributed in the zone with the initial direction cosine \(\mu=\sqrt{r}\), where \(r\in(0,1]\) is a random number uniformly distributed in the interval from 0 to 1.
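The initial sampling can be sketched as follows. The series method used below for drawing \(x=h\nu/kT\) from the normalized Planck distribution of equation (2) is a standard MC technique, though we do not claim it is the exact sampler implemented in SuperLite, and the cell temperature is an arbitrary placeholder.

```python
import numpy as np

rng = np.random.default_rng(7)
H_OVER_K = 4.7992e-11  # h/k in K s

def sample_planck_x(rng):
    """Draw x = h*nu/kT from b(x) ~ x^3/(e^x - 1): pick series term l with
    probability (90/pi^4)/l^4, then x is a Gamma(4, 1/l) deviate."""
    target = rng.random() * np.pi**4 / 90.0
    l, cum = 1, 1.0
    while cum < target:
        l += 1
        cum += 1.0 / l**4
    return -np.log(rng.random(4)).sum() / l

T_cell = 1.0e4                                  # K, illustrative cell temperature
nu = sample_planck_x(rng) * T_cell / H_OVER_K   # initial comoving frequency [Hz]
mu = np.sqrt(rng.random())                      # outward-peaked direction cosine
```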
### Particle propagation and Doppler correction
The particle propagation is tracked through the transport (IMC) and diffusion (DDMC) steps. In the IMC step, the distance traveled by the packet, \(d_{\rm p}\), is
\[d_{\rm p}=\min(d_{B},d_{thm},d_{col},d_{dop})\,, \tag{9}\]
where \(d_{B}\) is the distance to the nearest cell boundary, \(d_{thm}\) is the Thomson scattering distance, \(d_{col}\) is the distance to effective scattering resulting from a collision between an MC particle and an atom or an ion, and \(d_{dop}\) is the distance to the Doppler shift to the adjacent upper or lower group, based on the divergence of velocity in that cell. As stated earlier, all collisions result in effective scattering. As the particles propagate, the position, frequency, and direction of propagation of the particle are updated. In 1-D spherical geometry, the particle's radial coordinate, \(r_{p}\), and the component of the direction of propagation along the radial direction, \(\mu_{p}=\hat{r}_{p}\cdot\hat{\Omega}_{p}\), are updated as follows:
\[\begin{split} r_{p}^{\prime}=\sqrt{r_{p}^{2}+d_{p}^{2}+2\mu_{p}r _{p}d_{p}}\\ \mu_{p}^{\prime}=(\mu_{p}r_{p}+d_{p})/r_{p}^{\prime}\,.\end{split} \tag{10}\]
Here, \(r_{p}^{\prime}\) and \(\mu_{p}^{\prime}\) are the particle's updated radial coordinate and the component of the propagation direction along the radial coordinate, respectively.
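A minimal sketch of one IMC tracking step per equations (9)-(10) follows (Python; illustrative names, not SuperLite's actual routines):

```python
import numpy as np

def propagate_step(r_p, mu_p, d_B, d_thm, d_col, d_dop):
    """Advance a particle by the limiting event distance (Eq. 9) and update
    its radius and radial direction cosine in 1-D spherical geometry (Eq. 10)."""
    d_p = min(d_B, d_thm, d_col, d_dop)                          # Eq. (9)
    r_new = np.sqrt(r_p**2 + d_p**2 + 2.0 * mu_p * r_p * d_p)    # Eq. (10)
    mu_new = (mu_p * r_p + d_p) / r_new
    return r_new, mu_new, d_p
```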
The distances in equation (9) are given by Wollaeger et al. (2013, spatial distance is used here instead of "velocity" distance)
\[d_{B}=\begin{cases}-((r_{i-1/2}^{2}-(1-\mu_{p}^{2})r_{p}^{2})^{1/2}+\mu_{p}r_{p}),\\ \quad\text{if }\mu_{p}<-\sqrt{1-(r_{i-1/2}/r_{p})^{2}}\\ ((r_{i+1/2}^{2}-(1-\mu_{p}^{2})r_{p}^{2})^{1/2}-\mu_{p}r_{p}),\\ \quad\text{otherwise}\end{cases} \tag{11}\]
where, \(r_{i-1/2}\) and \(r_{i+1/2}\) are the left and right cell boundaries for a cell with index "\(i\)", respectively (with \(i\in[1...n]\), \(n\) being the maximum number of spatial cells), and
\[d_{thm/col}=|\ln r|/\sigma_{thm/col}\,. \tag{12}\]
Here, the form of the equation remains the same for the Thomson scattering distance and the distance to collision event. \(\sigma_{thm}\) is the lab-frame Thomson scattering opacity and \(\sigma_{col}\) contains the lab-frame opacity contributions detailed in section 3.5.1. We generalize the formula for distance to Doppler shift, \(d_{dop}\), between groups to non-homologous flows. The equation to find this distance is derived from invariance of frequency in the \(v=0\) frame (lab frame),
\[\nu_{p}=\frac{\nu_{p}^{(0)}}{(1-\mu_{p}v(r_{p})/c)}=\frac{\nu_{g\pm 1/2}}{(1- \mu_{p}^{\prime}v(r_{p}^{\prime})/c)}\enspace, \tag{13}\]
where \(\mu_{p}^{\prime}\) and \(r_{p}^{\prime}\) are given by equation (10). We assume that within a spatial cell, the radial velocity is linear with respect to radius,
\[v(r_{p})=m_{i}(r_{p}-r_{i-1/2})+v_{i-1/2}=m_{i}r_{p}+\tilde{v}_{i}\enspace, \tag{14}\]
where \(m_{i}=(v_{i+1/2}-v_{i-1/2})/(r_{i+1/2}-r_{i-1/2})\) is the slope of velocity across the radial domain of cell \(i\) and \(\tilde{v}_{i}=v_{i-1/2}-m_{i}r_{i-1/2}\). We define a new inertial frame, the \(\tilde{v}_{i}\) frame, to evaluate \(d_{dop}\). In this frame, the invariance is
\[\tilde{\nu}_{p}=\frac{\nu_{p}^{(0)}}{(1-\tilde{\mu}_{p}(v(r_{p})-\tilde{v}_{i} )/c)}=\frac{\nu_{g\pm 1/2}}{(1-\tilde{\mu}_{p}^{\prime}(v(r_{p}^{\prime})- \tilde{v}_{i})/c)}\enspace, \tag{15}\]
where \(\tilde{\nu}_{p}\) and \(\tilde{\mu}_{p}\) are the frequency and radial direction component in the new frame of reference, relating to the comoving frame, to order \(v/c\), via
\[\tilde{\nu}_{p}=\nu_{p}^{(0)}(1+\mu_{p}^{(0)}(v(r_{p})-\tilde{v}_{i})/c) \enspace, \tag{16a}\] \[\tilde{\mu}_{p}=\frac{\mu_{p}^{(0)}+(v(r_{p})-\tilde{v}_{i})/c}{1+ \mu_{p}^{(0)}(v(r_{p})-\tilde{v}_{i})/c}\enspace. \tag{16b}\]
Similar to the lab frame, in the \(\tilde{v}_{i}\) frame \(\tilde{\mu}_{p}\) and \(\tilde{\nu}_{p}\) are invariant. Moreover, \(\tilde{\mu}_{p}\) and \(\tilde{\nu}_{p}\) can be found from the lab or comoving frame using Eqs. (16). Furthermore, as in the lab frame, relating the initial and final \(\tilde{v}_{i}\)-frame direction components, \(\tilde{\mu}_{p}^{\prime}r_{p}^{\prime}=\tilde{\mu}_{p}r_{p}+d_{dop}\), we find the distance to Doppler shift is
\[d_{dop}=\frac{c}{m_{i}}\left(1-\frac{\nu_{g\pm 1/2}}{\tilde{\nu}_{p }}\right)-\tilde{\mu}_{p}r_{p}\\ =\frac{c}{m_{i}}\left(1-\tilde{\mu}_{p}\frac{(v(r_{p})-\tilde{v}_ {i})}{c}-\frac{\nu_{g\pm 1/2}}{\tilde{\nu}_{p}}\right)\\ =\frac{c}{m_{i}\tilde{\nu}_{p}}\left(\nu_{p}^{(0)}-\nu_{g\pm 1/2} \right)\enspace. \tag{17}\]
We see that for homologous flow \(m_{i}=1/t\) and \(\tilde{v}_{i}=0\), hence the \(\tilde{v}_{i}\) frame becomes the lab frame and Eq. (17) becomes the distance to Doppler shift presented by Wollaeger et al. (2013). Importantly, this equation for Doppler shifting encodes redshifting based on the sign of the radial velocity gradient, \(m_{i}\), which can be seen by examining the right side of the third equality. Given \(\nu_{g+1/2}<\nu_{p}^{(0)}<\nu_{g-1/2}\) (decreasing group index implying increasing frequency), if \(m_{i}<0\), then \(\nu_{g\pm 1/2}=\nu_{g-1/2}\) to give \(d_{dop}>0\), corresponding to blue-shifting. Similarly, if \(m_{i}>0\), then \(\nu_{g\pm 1/2}=\nu_{g+1/2}\) to give \(d_{dop}>0\), corresponding to red-shifting. When \(m_{i}=0\), there is no radial velocity differential across the cell, so the distance is infinite, meaning the particle frequency does not shift toward either group boundary in the comoving frame.
Since \(\tilde{v}_{i}\) is artificial, it is possible that \(|v(r_{p})-\tilde{v}_{i}|\) violates the \(O(v/c)\) approximation, and it can even be greater than \(c\). In these instances, a direct numerical solution for \(d_{dop}\) can be obtained using Newton-Raphson iteration, where the quantity that is iteratively updated is \(d_{dop}\). We define the function \(\varphi\) as follows,
\[\varphi(s)=m_{i}\mu_{p}r_{p}-c\left(1-\frac{\nu_{g\pm 1/2}}{\nu_{p}}\right)+ \tilde{v}_{i}\mu_{p}^{\prime}(s)+m_{i}s\enspace. \tag{18}\]
By construction, \(\varphi(d_{dop})=0\). The derivative of Eq. (18) is
\[\frac{d\varphi}{ds}=\tilde{v}_{i}\frac{d\mu_{p}^{\prime}}{ds}+m_{i}=\tilde{v}_ {i}\frac{(1-(\mu_{p}^{\prime}(s))^{2})}{r_{p}^{\prime}(s)}+m_{i}\enspace, \tag{19}\]
where use has been made of a standard identity for the path-derivative of \(\mu_{p}(s)\). Introducing a convergence tolerance \(\varepsilon_{tol}\), the Newton-Raphson iteration proceeds as follows.
1. Estimate an initial value of \(s=s_{0}\).
2. While \(|s_{k}-s_{k-1}|>s_{k}\varepsilon_{tol}\), for iteration \(k\):
1. Evaluate \(r^{\prime}_{p}(s_{k})\) and \(\mu^{\prime}_{p}(s_{k})\) using Eqs (10) (replacing \(d_{p}\) with \(s_{k}\)).
2. Evaluate Eqs. (18) and (19) using \(s=s_{k}\), \(r^{\prime}_{p}(s_{k})\), and \(\mu^{\prime}_{p}(s_{k})\).
3. Calculate the next iteration value: \[s_{k+1}=s_{k}-\frac{\varphi(s_{k})}{\left.\dfrac{d\varphi}{ds}\right|_{s=s_{k}}}\] 4. Set \(k+1\to k\).
3. Set \(d_{dop}=s_{k}\), where \(s_{k}\) is the final iteration.
We adopt this method to calculate the Doppler distance in SuperLite.
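A minimal sketch of this solve follows (Python; the function implements equations (18)-(19) and the iteration above, with illustrative argument names):

```python
import numpy as np

def doppler_distance(r_p, mu_p, nu_term, m_i, v_tilde, s0, tol=1e-8, n_max=50):
    """Newton-Raphson solution of phi(s) = 0 for the Doppler distance.

    nu_term = c * (1 - nu_{g +/- 1/2} / nu_p) from Eq. (18); m_i is the cell
    velocity slope, v_tilde the frame-offset velocity, s0 the initial guess.
    """
    s = s0
    for _ in range(n_max):
        # trial position and direction for path length s (Eq. 10)
        r_s = np.sqrt(r_p**2 + s**2 + 2.0 * mu_p * r_p * s)
        mu_s = (mu_p * r_p + s) / r_s
        phi = m_i * mu_p * r_p - nu_term + v_tilde * mu_s + m_i * s   # Eq. (18)
        dphi = v_tilde * (1.0 - mu_s**2) / r_s + m_i                  # Eq. (19)
        s_new = s - phi / dphi
        if abs(s_new - s) <= abs(s_new) * tol:                         # step 2 test
            return s_new
        s = s_new
    return s
```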
For DDMC, the energy loss due to Doppler shift is determined by the time-step size, which is infinite (or undefined) as discussed in section 3.2. This choice would imply that the energy should be redshifted to 0, which is obviously incorrect. To best match the impact of detailed Lorentz transformations in the IMC portions, the DDMC particle's time-of-flight in a spatial cell can be used instead, with the energy loss being determined by (see, e.g., Wollaeger et al., 2013),
\[E^{\prime}=Ee^{-\nabla\cdot\vec{v}_{j}\,\delta t_{k}/3}, \tag{20}\]
where \(\delta t_{k}\) is the time spent by the particle in cell \(j\) on MC step \(k\). Note that the equation simplifies to the standard approach of Wollaeger & van Rossum (2014) when the flow is homologous, \(\nabla\cdot\vec{v_{j}}=3/t\) for all \(j\). To see this, if the time step \(\Delta t\) is re-imposed,
\[\sum_{k}\delta t_{k}=\Delta t, \tag{21}\]
so,
\[E_{p}^{(f)}=E_{p}^{(i)}\prod_{k}e^{-\delta t_{k}/t}=E_{p}^{(i)}e^{-\Delta t/t }\,, \tag{22}\]
where \(E_{p}^{(i)}\) and \(E_{p}^{(f)}\) are initial and end-of-time-step particle weight, respectively. For both IMC and DDMC, in steady state, non-Doppler adiabatic loss from expansion is assumed to have been taken into account by the pre-existing values of radiation temperature in each cell, \(T_{r,j}\).
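A one-line sketch of this cumulative loss (Python; illustrative, assuming the divergence and time-of-flight in each crossed cell are available):

```python
import numpy as np

def ddmc_doppler_loss(E_p, div_v, dt_cells):
    """Cumulative adiabatic Doppler loss over crossed cells (Eqs. 20-22).

    div_v : velocity divergence in each crossed cell [1/s];
    dt_cells : time of flight spent in each of those cells [s].
    """
    return E_p * np.exp(-np.dot(np.asarray(div_v), np.asarray(dt_cells)) / 3.0)
```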
The spatial leakage opacity formulation for DDMC follows Wollaeger & van Rossum (2014), where optically thick groups are regrouped (or collapsed) into single groups to optimize the simulations by minimizing the direct treatment of line-to-line effective scattering. However, we depart from previous DDMC formulations by developing a novel approach to Doppler shift from discontiguous groups in non-homologous flows. Integrating the diffusion equation over the subset of groups collapsed into DDMC, and assuming a Planck-function ansatz for the frequency dependence within the resulting DDMC group, we obtain the leakage opacity for leaking from the single DDMC group to one of several adjacent IMC groups via Doppler shift,
\[\sigma_{dop}=\begin{cases}\dfrac{\nabla\cdot\vec{v_{i}}}{3cb_{\partial_{D}}} \sum_{g\in\delta\mathcal{G}_{D}^{+}}(\nu b)|_{g+1/2}\;\;\text{if}\;\nabla \cdot\vec{v_{i}}>0\;\;,\\ \\ \dfrac{-\nabla\cdot\vec{v_{i}}}{3cb_{\partial_{D}}}\sum_{g\in \delta\mathcal{G}_{D}^{-}}(\nu b)|_{g-1/2}\;\;\text{if}\;\nabla\cdot\vec{v_{i} }<0\;\;,\end{cases} \tag{23}\]
where the subscript \(\mathcal{G}_{D}\) denotes the subset of groups collapsed into one group for DDMC (a subscript \(\mathcal{G}_{D}\) on \(b\) indicates integration over this subset), and \(\delta\mathcal{G}_{D}^{\pm}\) are the groups of the DDMC subset adjacent to IMC groups, either at the long (+) or short (-) wavelength side of the DDMC group. If Doppler shift is sampled in DDMC from the discrete probability distribution formed by equation (23) and the other standard leakage opacities, then the particular group edge in \(\delta\mathcal{G}_{D}^{\pm}\) from which the particle escapes is sampled from the discrete probability distribution formed by the values \((\nu b)|_{g\pm 1/2}\).
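The following Python sketch illustrates equation (23) and the edge sampling (our illustrative implementation, not SuperLite's; here \(b\) denotes the normalized Planck spectrum evaluated at the relevant group edges, and \(b_{D}\) its integral over the collapsed DDMC group):

```python
import numpy as np

def doppler_leakage(div_v, nu_edges, b_edges, b_D, rng, c=2.9979e10):
    """Doppler leakage opacity from a collapsed DDMC group (Eq. 23), and
    sampling of the IMC-adjacent edge through which the particle escapes.

    nu_edges, b_edges : frequencies and Planck-spectrum values at the group
    edges adjacent to IMC groups (long-wavelength side if div_v > 0,
    short-wavelength side otherwise); b_D : Planck integral over the group.
    """
    weights = nu_edges * b_edges                       # (nu b)|_{g +/- 1/2}
    sigma_dop = abs(div_v) * weights.sum() / (3.0 * c * b_D)
    edge = rng.choice(len(weights), p=weights / weights.sum())
    return sigma_dop, edge
```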
### Opacity and Emissivity Calculations
#### 3.5.1 Local Thermodynamic Equilibrium
Thermodynamic Equilibrium (TE) describes an equilibrium state of interaction between gas and the radiation field, i.e., a state where the radiation field is described by Planck's law and the level populations by the Saha-Boltzmann equations. Both of these are determined by the same state variable, the local temperature \(T\), and the equilibrium state is described as Local Thermodynamic Equilibrium (LTE). From a microscopic point of view, in a state of LTE all atomic processes are in detailed balance, i.e., the rate of each atomic process is balanced by the exact same rate of its inverse process. We follow the same method as in SuperNu to determine the ionization balance and excitation level populations in each cell based on the LTE conditions. The Saha ionization equation (see Hubeny & Mihalas, 2015) is:
\[\left(\frac{n_{0,j+1,k}}{n_{0,j,k}}\right)^{*}=\frac{2}{n_{e}}\left(\frac{2\pi m_{e}k_{B}T_{e}}{h^{2}}\right)^{3/2}\left(\frac{g_{0,j+1,k}}{g_{0,j,k}}\right)\exp\left(\frac{-\epsilon_{Ijk}}{k_{B}T_{e}}\right), \tag{24}\]
where, \(n_{0,j,k}\) is the number density of the ground state population of an ion with net charge \(j\) for a chemical
species \(k\). The asterisk '\(*\)' on the RHS of equation (24) implies LTE. \(n_{e}\) is the number density of the electrons, and \(g_{0,j,k}\) is the statistical weight of the ground state of stage \(j\) and \(\epsilon_{Ijk}\) is the ionization energy of stage \(j\). \(T_{e}\) is the electron temperature and the other symbols have their usual meanings.
The number density (occupation number, or population) of a particular excited state \(i\) in ionization stage \(j\) can be determined using the Boltzmann excitation equation, once the ground-state populations have been calculated, i.e.
\[n^{*}_{ijk}=C_{I}\left(\frac{n_{e}n^{*}_{0,j+1,k}}{T^{3/2}}\right)\left(\frac{g_{ijk}}{g_{0,j+1,k}}\right)\exp\left(\frac{\epsilon_{Ijk}-\epsilon_{ijk}}{k_{B}T}\right), \tag{25}\]
where, \(\epsilon_{ijk}\) is the energy of excited level \(i\), and
\[C_{I}=\frac{1}{2}\left(\frac{h^{2}}{2\pi m_{e}k_{B}}\right)^{3/2}\,. \tag{26}\]
By summing equation (25) over all levels within an ion stage, the total ion number density of successive ion stages can be obtained from the Saha-Boltzmann equation,
\[\left(\frac{N_{j,k}}{N_{j+1,k}}\right)^{*}=C_{I}\frac{n_{e}}{T^{3/2}}\left( \frac{U_{j,k}}{U_{j+1,k}}\right)^{*}\exp\left(\frac{\epsilon_{Ijk}}{k_{B}T} \right), \tag{27}\]
where, \(U^{*}_{j,k}\) is a LTE partition function. The following relationship between the ground state number density \(n^{*}_{0,j,k}\) and the total number density \(N_{jk}\) is also used:
\[N^{*}_{j,k}=\frac{n^{*}_{0,j,k}}{g_{0,j,k}}U^{*}_{j,k}\,. \tag{28}\]
Equation (27) is solved iteratively in SuperLite to determine the ionization populations and the electron density, until a desired accuracy is reached.
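A minimal sketch of such an iteration for a single species follows (Python; this simplified version assumes all free electrons come from the species itself, and the constant C_I is the Saha constant of equation (26) in cgs units):

```python
import numpy as np

def saha_iterate(N_tot, U, eps_ion, T, n_e0, tol=1e-8, n_max=200):
    """Iterate the Saha-Boltzmann equation (Eq. 27) for ion stages j = 0..J
    of one species, coupled to the electron density.

    U : LTE partition functions per stage (length J+1); eps_ion : ionization
    energies per stage [erg] (length J); n_e0 : initial electron density.
    """
    kB, C_I = 1.3807e-16, 2.07e-16   # Boltzmann const.; (h^2/2 pi m_e kB)^{3/2}/2
    n_e = n_e0
    for _ in range(n_max):
        # N_{j+1}/N_j, i.e. the inverse of the ratio in Eq. (27)
        ratio = (U[1:] / U[:-1]) / (C_I * n_e / T**1.5) * np.exp(-eps_ion / (kB * T))
        rel = np.concatenate(([1.0], np.cumprod(ratio)))   # stages relative to j = 0
        N_j = N_tot * rel / rel.sum()
        n_e_new = np.sum(np.arange(len(U)) * N_j)          # electrons from ionization
        if abs(n_e_new - n_e) <= tol * n_e_new:
            break
        n_e = 0.5 * (n_e + n_e_new)                        # damped update for stability
    return N_j, n_e_new
```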
The line opacity is calculated, under the assumption of LTE, for the transition from a lower level \(l\) to an upper level \(m\) within an ionization stage \(j\) of chemical species \(k\) (dropping the subscripts \(j\) and \(k\)) as
\[\kappa^{*}_{l\to m}=n^{*}_{l}\left(\frac{\pi e^{2}}{m_{e}c}f_{l\to m} \right)\phi_{\nu}\left[1-\exp\left(-\frac{\epsilon_{l\to m}}{k_{B}T} \right)\right], \tag{29}\]
where, \(f_{l\to m}\) is the oscillator strength and \(\epsilon_{l\to m}\) is the corresponding transition energy. The symbol \(\phi_{\nu}\) represents the line profile function, which is set to a delta function in SuperLite as in SuperNu. The factor in square brackets represents the effect of stimulated emission. The line emissivity is calculated under the LTE assumption from Kirchhoff's law:
\[\eta^{*}(\nu,T)=\kappa^{*}(\nu,T)\,B(\nu,T), \tag{30}\]
where, \(B(\nu,T)\) is the Planck function.
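The following Python sketch evaluates equations (29)-(30) for a single transition (illustrative names; the delta-function profile \(\phi_{\nu}\) is left to the caller, who deposits the line into its wavelength group):

```python
import numpy as np

def line_opacity_emissivity(n_l, f_lm, eps_lm, T, nu):
    """LTE line opacity with stimulated emission (Eq. 29) and the emissivity
    from Kirchhoff's law (Eq. 30), for lower-level density n_l [cm^-3],
    oscillator strength f_lm, transition energy eps_lm [erg] at frequency nu."""
    e_ch, m_e, c = 4.8032e-10, 9.1094e-28, 2.9979e10   # cgs constants
    h, kB = 6.6261e-27, 1.3807e-16
    kappa = n_l * (np.pi * e_ch**2 / (m_e * c)) * f_lm \
            * (1.0 - np.exp(-eps_lm / (kB * T)))        # per unit phi_nu
    B_nu = (2.0 * h * nu**3 / c**2) / np.expm1(h * nu / (kB * T))  # Planck function
    return kappa, kappa * B_nu
```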
In SuperLite, the opacities are computed under the LTE assumption using a subgroup structure, as explained in Wollaeger & van Rossum (2014). We include the atomic data for elements up to atomic number \(Z=30\). The opacity contributions include bound-bound (b-b), bound-free (b-f), and free-free (f-f) transitions, as well as the standard elastic Thomson scattering opacity. The line opacities (b-b opacities) are calculated using the oscillator strength data for each atomic species from the Kurucz line list1. The data used in SuperLite include hundreds of levels for each species with about 786,000 transitions in total. The b-f opacities are calculated using the analytic fit prescription described by Verner et al. (1996) and the f-f opacities are calculated using the Gaunt factors based on the work of Sutherland (1998). The total absorption opacity is the sum of the bound-bound, bound-free, and free-free contributions for each subgroup. The wavelength groups have logarithmic spacing, while the subgroups have uniform linear spacing, unless otherwise specified. It is also possible to include a custom grid for the groups. The opacity mixing control parameter \(\alpha_{\sigma}\in(0,1]\) introduced by Wollaeger & van Rossum (2014) is also included to linearly combine reciprocal ("Rosseland type") and direct averages of opacity. The Planck-averaged opacity is calculated using the total absorption opacity. The Rosseland mean opacity includes the scattering contribution as well. However, the Rosseland mean opacity is calculated only for output in the LTE version of the code.
Footnote 1: [http://kurucz.harvard.edu/atoms.html](http://kurucz.harvard.edu/atoms.html)
#### 3.5.2 Non-Local Thermodynamic Equilibrium
The assumption of statistical equilibrium holds true if the statistical timescale is much shorter than the timescale over which the radiation field changes. This is the case for regions with high optical depths, but by definition not for the regions that play a role in shaping the spectrum. As the radiation leaves these regions and reaches the observer, the assumption of LTE is violated. In non-LTE (NLTE) conditions, the detailed balance between excitation and de-excitation processes may be broken, leading to non-thermal populations of energy levels. To determine the level populations and ionization state of the gas, a complete set of rate equations has to be solved (Castor, 2004; Oelgoetz et al., 2007; Hubeny & Mihalas, 2015). In matrix form, we can write these equations as
\[\mathbf{A}\cdot\mathbf{n}=\mathbf{b} \tag{31}\]
Here, the matrix \(\mathbf{A}\), which is often referred to as the _rate matrix_, contains the _radiative rates_ (\(R\), for interactions between particles and photons), and the _collisional rates_ (\(C\), for interactions between two or more particles). The elements of matrix \(\mathbf{A}\) for levels \(i\) and \(j\) are then:
\[A_{ii}=\sum_{j\neq i}(R_{ij}+C_{ij}), \tag{32}\] \[A_{ij}=-(R_{ij}+C_{ij}),\quad\text{for }j\neq i\,, \tag{33}\]
\(\mathbf{n}\) is a vector of level populations, \(\mathbf{n}=(n_{1},n_{2},\dots,n_{NL})^{T}\), where \(NL\) is the number of levels, and \(\mathbf{b}\) is the vector of rates at which the level populations change, \(d\mathbf{n}/dt\). In steady state, \(d\mathbf{n}/dt=0\).
In SuperLite, we have added the NLTE treatment for calculations of excited level populations of hydrogen up to the principal quantum number \(n=10\). We use the _quasistatic_ approximation that the ionization balance can be decoupled from the excited-state population calculation. Furthermore, we assume that most of the population is in the ground state of the neutral and singly charged ion stages of hydrogen. The majority of the population flow between the two ion stages via ionization and recombination processes will occur through transitions that connect the ground states of the two ionization stages of hydrogen (rather than via processes that involve excited states), due to their massive populations relative to the excited-state populations. This dominant flow of population via ionization/recombination processes is, to a very good approximation, encapsulated within the prescribed ionization balance values defined by the LTE Saha ionization equation (24). Therefore, we do not consider the effect of ionization/recombination processes on the two ground states, but consider only the processes that affect the excited-state populations. We solve the rate matrix to find the excited-state populations of hydrogen, with the additional condition that the sum of the ground- plus excited-state populations equals the ionization state population, \(\sum n_{i}=N\). To reflect this constraint, the first row of \(\mathbf{A}\) is replaced with 1's and \(\mathbf{b}\) is set to \((N,0,0,...,0)\). To calculate the rate matrix we have included photoionization and radiative recombination rates, and electron-impact collisional excitation and de-excitation rates calculated with the Los Alamos National Laboratory (LANL) suite of atomic physics codes (Fontes et al., 2015). The b-b line opacity and line emissivity are calculated using equations (14.12a) and (14.12b) of Hubeny & Mihalas (2015). In our NLTE implementation, the b-b opacities for other elements are calculated under the LTE assumption, as described in section 3.5.1. The b-f and f-f opacities, and the Thomson scattering opacity, are also calculated using the same methods as described in section 3.5.1. The NLTE implementation is still in the testing phase, but some of the initial model results are displayed in section 5.
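A minimal sketch of this constrained solve (Python with NumPy; R and C are the radiative and collisional rate matrices of equations (32)-(33), and the names are illustrative rather than SuperLite's):

```python
import numpy as np

def solve_excited_states(R, C, N_ion):
    """Solve the steady-state rate equations A.n = b (Eq. 31) with the
    closure condition sum_i n_i = N_ion imposed on the first row."""
    T_rates = R + C
    A = -T_rates.copy()                                   # A_ij = -(R_ij + C_ij)
    np.fill_diagonal(A, T_rates.sum(axis=1) - np.diag(T_rates))  # Eq. (32)
    b = np.zeros(T_rates.shape[0])
    A[0, :] = 1.0                                         # replace first row with 1's
    b[0] = N_ion                                          # total stage population
    return np.linalg.solve(A, b)
```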
## 4 Code verification
### Line Transport in moving media - homologous and non-homologous cases
In this section, we present the test carried out to verify the SuperLite code using the transport of line radiation in a moving medium as described in Roth and Kasen (2015, section 7.2). For this test, we consider a uniform density sphere of minimum radius \(r_{min}=10^{14}\) cm and maximum radius \(r_{max}=10^{15}\) cm, with a uniform logarithmic grid of 100 spatial cells. The choice of density and temperature structures is not important for this problem. We choose a uniform density of \(\rho=10^{-11}\) g cm\({}^{-3}\) and a uniform temperature \(T_{e}=10^{3}\) K. In this test case, we have considered the velocity profile to be homologous. For cell "i", the velocity is given as \(v_{i}=v_{max}(r_{i}/r_{max})\,[\mathrm{cm/s}]\), where \(v_{max}=10^{8}\) cm/s is adopted. We instantiate MC packets at the inner boundary of the domain with intensity 1 erg/cm\({}^{2}\)/s/Hz/sr. (The choice of the intensity is arbitrary.) The MC packets are sampled with a uniform frequency distribution for this verification test. We use \(2^{19}\) MC packets for this test. A logarithmic frequency grid between the energies of 0.8 eV and 1.2 eV with 200 frequency groups is used. The line frequency of 1 eV is chosen so that the line is roughly in the middle of the frequency grid. The line opacity is calculated as
\[\kappa_{\nu}=\left(\frac{dv}{dr}\right)\left(\frac{\tau_{S}}{c}\right)\left(\frac{\nu_{c}}{d\nu_{g}}\right)=\left(\frac{dv}{dr}\right)\left(\frac{1}{c}\right)\left(\frac{\nu_{c}}{d\nu_{g}}\right)\,[\mathrm{cm^{-1}}]\,, \tag{34}\]
where, \(\tau_{S}\) is the Sobolev optical depth, which was chosen to be 1 (Roth and Kasen, 2015, equation 61). \(d\nu_{g}\) is the width of the frequency group containing the line centered at frequency \(\nu_{c}=1\) eV. The semi-analytic solution for the emerging line profile is given by equation 60 of Roth and Kasen. As illustrated in Figure 1a, the spectrum produced by SuperLite is in good agreement with the semi-analytic solution.
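For reference, the group-averaged line opacity of equation (34) in this test reduces to a one-liner (Python; illustrative):

```python
def sobolev_line_opacity(dv_dr, dnu_g, tau_S=1.0, nu_c=1.0, c=2.9979e10):
    """Line opacity for the test problem (Eq. 34): dv_dr [1/s], tau_S the
    Sobolev optical depth, nu_c the line frequency, and dnu_g the width of
    the group containing the line (same units as nu_c). Returns [1/cm]."""
    return dv_dr * (tau_S / c) * (nu_c / dnu_g)
```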
We also compare a version of the analytic solution to a non-homologous problem, where a wind profile with a constant velocity value of \(v=10^{5}\) cm/s is added to the homologous profile. The drop to \(10^{5}\) cm/s introduces a single spatial cell with a negative velocity gradient. We attempt to account for this in the semi-analytic solution by using the homologous intensity solution as an inner boundary source for the region with the negative velocity gradient. We test two variants of the non-homologous problem: one where the line in the negative
gradient cell is purely scattering, as in the homologous region, and one where the line in the negative gradient cell is purely absorbing. For the scattering variant, shown in Figure 1b, a slight enhancement in the emission and absorption peaks of the P-Cygni profile is observed for the simulated spectrum, which can be attributed to the part of the uncollided flux at slightly lower frequency than the absorption trough edge being blue-shifted back into resonance with the line. However, our analytic solution does not reflect this enhancement effect. For the absorption variant, the range of frequencies corresponding to the absorption trough happens to also be nearly equal to the range of lab-frame frequencies that resonate with the line in the negative gradient cell. The analytic solution in this case is to simply attenuate the intensity by \(e^{-\tau_{S}}\) over those frequency ranges. As expected, for both the semi-analytic solution and the SuperLite result, in Figure 1c, only the absorption trough is enhanced.
### Type Ia SN spectra
We verify the results of the snapshot post-processing with SuperLite by comparing them with the results of radiation transfer calculations for the time-dependent evolution of a SN Ia by the SuperNu code. In the case of SNe Ia, the SN ejecta homologously expands following the shock break-out from the surface of a progenitor white dwarf that exceeds the Chandrasekhar mass. We use the 1D spherically-symmetric w7 model of a carbon-oxygen white dwarf of about 1.37 M\({}_{\odot}\) at the time of explosion (Nomoto et al., 1984), which is included as a test case within the public version of the SuperNu code (Wollaeger and van Rossum, 2014) and has been used by several other authors (Kasen et al., 2006; Kromer and Sim, 2009; van Rossum, 2012). The input structure includes the density and abundance profile for several key isotopes up to \({}^{56}\)Ni on a velocity grid. The ground- and excited-level populations are calculated in SuperNu using the LTE assumptions discussed in section 3.5.1. The model simulations start at 2 days from the time of explosion. The outer boundary of the ejecta outflow has an expansion velocity of about 0.07 times the speed of light. The output spectrum is saved by SuperNu at the end of each time step. We adjusted the SuperNu code to enable the checkpointing of the SN ejecta profile at regular intervals as the time-dependent MC calculations advance. The profile checkpoints include mass, electron temperature, and mass fractions, which we used as inputs for the post-processing simulation with SuperLite.
For the purposes of this test, we chose to post-process a profile at 10 days since the time of explosion, near the time of peak luminosity. The radial coordinates are generated using the velocity grid with the relationship \(r_{i}=v_{i}t\), as the expansion in this case is homologous. Here, "\(i\)" is the cell index and "\(t\)" is the time since explosion.
The structure imported from SuperNu is shown in Figure 2. The bolometric luminosity predicted by SuperNu is used as an input to normalize the total input luminosity of the SuperLite simulation as described in section 3.3. The energy deposition due to radioactive decay processes is not included in SuperLite, as this energy is already accounted for in the SuperNu simulation. Similar to SuperNu, the opacity calculations are performed using the LTE assumptions. The electron temperature is used for both the sourcing of the MC packets and for the opacity calculations. The simulation input for SuperLite contains 64 spatial cells and 250 wavelength groups in the wavelength range of 1000 to 32000 A, both set to the same values as the SuperNu model simulation. About a million (\(2^{20}\)) MC packets were used for the SuperLite simulation, while for the SuperNu simulation about 16 million (\(2^{24}\)) initial and source MC packets each were used. SuperLite requires a smaller number of MC packets to obtain the same level of SNR. This is because, as a single-step, post-processing code with the steady-state assumption, all of the instantiated MC packets are processed through the ejecta. None of the MC packets are absorbed (destroyed), censused, or buffered for the next time step in SuperLite.
Figure 3 shows a comparison between the spectra predicted by SuperNu at the time step when the output profile was saved at 10 days after explosion with the spectra predicted by the LTE version of SuperLite in right panel, along with the light curve for the SN generated by SuperNu in the left panel. Within the uncertainty of the MC noise, the spectra produced by both codes are in good agreement. The SuperLite simulation was completed within five minutes on a typical desktop with parallel processing using 6 MPI ranks.
### Profile Truncation Depth
In Figure 4, we show the effect of the truncation depth into the ejecta profile on the calculation of spectra by SuperLite for the w7 model. As SuperLite is a post-processing code, it does not need the entire ejecta profile to compute the spectrum. Including the line-forming region is sufficient to predict the spectrum within the uncertainties of the MC calculations. To demonstrate this, we truncated the ejecta profile for the w7 model at different locations based on the optical depth (\(\tau\)) calculated using the Rosseland mean opacity, and simulated the spectrum using the SuperLite code for this model. The optical depth is
set to 0 at the location of the observer at infinity. Thus, it increases into the ejecta. The left panel of the figure shows \(\tau\) with the locations of \(\tau=100\), 50, 10, and 1 marked with vertical lines. The ejecta profile was truncated from the surface inward at these locations into the ejecta. The right panel of the figure shows a comparison of the simulations with different truncation depths. As seen in this figure, the spectrum predicted by the code remains mostly unaffected until about the truncation depth at \(\tau=10\) (green, dashed-dotted line). The spectrum for \(\tau=1\) (thin cyan line) is significantly different from the original simulation with the entire ejecta profile (black, solid line). As the optical depth varies with the frequency of the photon packet, a depth into the outflow profile up to a Rosseland mean optical depth of \(\tau=1\) is not sufficient for correctly predicting all of the transitions. We recommend including the outflow profile up to a Rosseland mean optical depth of \(\tau\approx 30\)-50 for a better prediction of the synthetic spectra.
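A minimal sketch of this truncation step (Python; assumes radii sorted outward, with illustrative array names):

```python
import numpy as np

def truncate_profile(r, rho, kappa_R, tau_max=50.0):
    """Drop cells deeper than a Rosseland-mean optical depth tau_max.

    tau is accumulated from the outer edge (observer side, tau = 0) inward;
    r : cell radii sorted outward [cm]; rho : densities [g/cm^3];
    kappa_R : Rosseland mean opacities [cm^2/g]. Returns the index of the
    innermost kept cell and the tau profile.
    """
    dr = np.diff(r, prepend=r[0])              # simple cell widths (illustrative)
    dtau = rho * kappa_R * dr
    tau = np.cumsum(dtau[::-1])[::-1]          # integrate from surface inward
    i_min = int(np.argmax(tau <= tau_max))     # first cell with tau <= tau_max
    return i_min, tau
```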
## 5 Applications to Supernova Spectral Modeling
Here, we present the synthetic spectra produced with SuperLite post-processing simulations for various types of SNe including both the homologous and non-homologous expansion regimes, and compare them with observations or results from the CMFGEN code.
### Type IIP SN spectra
Figure 1: The results of the test for line transport in a homologously expanding ejecta are shown in the left panel, while those for the non-homologous case are shown in the right and bottom panels. Continuum photons are injected into a uniform sphere containing a pure-scattering medium with Sobolev line optical depth of \(\tau_{S}=1\). The line in the non-homologous region is scattering (right) or absorbing (bottom). Results from the MC transport are compared to the semi-analytic solution as discussed in section 4.1. Within the uncertainty of MC noise, the modeled spectra compare well with the analytic solution.
In this section, we present the results of SuperLite post-processing for two typical SN IIP models, comparing the results with a time-evolution simulation performed with SuperNu in one case and with the observations of a typical SN IIP in the other case.
#### 5.1.1 Model s18.0
For the first case, we use model s18.0 from the suite of SN IIP model simulations presented by Curtis et al. (2021). The model consists of a star with ZAMS mass equal to 18 M\({}_{\odot}\) and solar metallicity, and a pre-SN mass of 14.5 M\({}_{\odot}\) at the time of explosion, with a 9.25 M\({}_{\odot}\) hydrogen envelope. The pre-SN radius of the progenitor star is 1010 R\({}_{\odot}\). An explosion energy of 1.45\(\times\)10\({}^{51}\) erg is assumed. The SuperNu comparison model corresponds to 20 days from the time of explosion. The velocity profile is shown in Figure 12 of Curtis et al. (2021). We use the SuperNu input files kindly provided by Dr. Sanjana
Figure 3: In the left panel, the light curve predicted by SuperNu for the W7 SN Ia model is shown. The black dot marks the luminosity at 10 days since the time of explosion, where the profile was extracted from the SuperNu simulation for post-processing with SuperLite. In the right panel, a comparison of the spectra at 10 days since the time of explosion produced by SuperNu and the LTE version of SuperLite is shown.
Figure 2: The velocity, temperature, electron number density, and density profile plots at 10 days since the time of explosion for the W7 SN Ia model.
Curtis (private communication) to re-run the SuperNu simulation with the same setup as in their paper. The SuperNu model was run with \(2^{18}\) initial MC packets, and an additional \(2^{18}\) source packets were added every timestep, with a maximum limit of \(2^{28}\) MC packets. The model contains 689 spatial cells, and 1000 wavelength groups with the wavelength range of 1000-32,000 A. Similar to the w7 model discussed earlier, we have used a modified version of the SuperNu code that saves SN ejecta profiles at checkpoints every few days. The profiles from this SuperNu simulation are used for post-processing with SuperLite.
For a comparison of the SuperNu output spectra to those produced with SuperLite in 1D, we choose a phase corresponding to 20 days after the SN explosion. The hydrogen recombination front begins to recede into the H-rich SN ejecta at around 20 days, marking the beginning of the eponymous plateau phase that lasts for about 104 days (see figure 13 of Curtis et al. 2021). For the 1-D simulation calculated with SuperLite, we generate the radial coordinates under the assumption of homologous expansion, as in the previous case, because the SuperNu model simulation is run on a homologous velocity grid. The ejecta was truncated at the Rosseland mean optical depth of 100. The model contains 172 spatial cells, and 6000 wavelength groups with the wavelength range of 1000-30,000 A. This simulation was run with about a million (\(2^{20}\)) MC particles, the same number as used in the w7 model. The spectra generated with SuperNu and SuperLite are shown in Figure 5. The spectrum generated with SuperNu from figure 5 of Curtis et al. (2021) is superimposed on the SuperLite spectrum. The SuperNu spectrum is smoothed, while the SuperLite spectrum is not. The two spectra are in good agreement with each other.
Figure 4: The effects of truncation depth of the ejecta on the predicted spectra are shown. In the left panel, the optical depth (\(\tau\)) calculated from the Rosseland mean opacity is shown for the ejecta profile of the W7 SN Ia model. The optical depth is set to 0 at the location of the observer at infinity, so the optical depth increases into the ejecta. The vertical lines in the left panel mark the location of \(\tau\) = 100, 50, 10, and 1, respectively, from left to right. In the right panel, a comparison of spectra produced by SuperLite by truncating the ejecta at different optical depths from the surface inward is shown. The black, solid line shows the spectrum for the whole ejecta profile. The spectrum remains mostly unaffected up to \(\tau\) =50 (red, dashed line). It changes slightly at longer wavelengths for \(\tau\) =10 (green, dashed-dotted line). However, it changes significantly when the ejecta profile is truncated at \(\tau\) =1 (cyan, thin solid line).
Figure 5: A comparison of spectra produced with the SuperNu and the LTE version of the SuperLite codes for a Type IIP model at 20 days since the time of explosion is shown. The SuperNu spectrum is taken from figure 5 of Curtis et al. (2021). The two spectra are in good agreement with each other.
#### 5.1.2 A classical SN IIP: SN 1999em
Here, we present a comparison between the synthetic spectra predicted by SuperLite and the observed spectra for a typical SN IIP, SN 1999em, which exploded in the host galaxy NGC 1637 (Lick Observatory Supernova Search; Li, 1999). Based on the pre-explosion observations of the progenitor star, Smartt et al. (2002) derived an upper limit of 12\(\pm\)1 M\({}_{\odot}\) on the ZAMS mass of the progenitor star of SN 1999em, assuming a distance of 7.5\(\pm\)0.5 Mpc determined by the expanding photosphere method (Hamuy et al., 2001). Smartt et al. (2009) derived an upper limit of 15 M\({}_{\odot}\) for the same, assuming a distance of 11.7\(\pm\)1.0 Mpc determined using the Cepheid variables in the host galaxy (Leonard et al., 2003). Utrobin (2007) performed optical hydrodynamic model fitting to the bolometric light curve and the spectroscopic evolution of the H\(\alpha\) line for SN 1999em to determine the progenitor properties. With an adopted distance of 11.7 Mpc, their model with solar metallicity, an ejected envelope mass of 19\(\pm\)1.2 M\({}_{\odot}\), and a pre-SN radius of 500\(\pm\)200 R\({}_{\odot}\), exploded with 1.3\(\pm\)0.1 Bethe of energy and 0.036\(\pm\)0.009 M\({}_{\odot}\) of nickel mass, fitted the observations well. For the shorter distance of 7.5 Mpc, their model with 16 M\({}_{\odot}\) of envelope mass and a pre-SN radius of 375 R\({}_{\odot}\), exploded with 0.686 Bethe of energy and 0.0162 M\({}_{\odot}\) of nickel mass, was the best-fit model. We do not intend to fit the observed SN 1999em spectra, but we use the example of SN 1999em to illustrate the capacity of our code to model a typical SN IIP spectrum. To this effect, we use one of the two models of a SN 1999em-like progenitor star explored by Paxton et al. (2018), which we describe below. The properties of the model are listed in their table 3.
We evolve a progenitor star with solar metallicity and a ZAMS mass of 19 M\({}_{\odot}\) with the stellar evolution code MESA (Modules for Experiments in Stellar Astrophysics; Paxton et al., 2011, 2013, 2015, 2018, 2019). This progenitor star has a pre-SN mass of 17.8 M\({}_{\odot}\), with a massive H-rich envelope and a pre-SN radius of 603 R\({}_{\odot}\). The model star is exploded with STELLA (a version of the code provided with the MESA distribution), by injecting an energy of 1 Bethe at the base of the envelope and assuming a nickel mass of 0.042 M\({}_{\odot}\). The profiles are generated at several checkpoints by STELLA. These profiles include the electron and radiation temperatures, as well as the total bolometric luminosity estimated by STELLA. These profiles are used for post-processing with SuperLite to generate synthetic spectra. For the SuperLite simulation, we choose a phase corresponding to 40 days since the maximum bolometric luminosity of the SN. This phase is within the plateau of the SN light curve. We truncate the ejecta profile at the STELLA-predicted optical depth of 100 for the SuperLite simulation. The input structure contains 154 spatial cells. We choose 6000 wavelength groups with a range of 1 to 30,000 A. About a million (\(2^{20}\)) MC particles are instantiated for the simulation. Figure 6 shows the spectra predicted by the LTE and NLTE versions of the SuperLite code. The observed spectra taken at 38 and 42 days since the peak in the light curve of the SN are plotted in the figure for comparison with the SuperLite predictions. The observed spectroscopic data were obtained using the 1 m Nickel reflector at Lick Observatory (Faran et al., 2014; Leonard et al., 2002). The SuperLite simulation qualitatively predicts the observed line features with P-Cygni profiles, with blue-shifted emission components, and the continuum in the spectrum taken at day 38. However, this observed spectrum extends only to a wavelength of 6800 A. The Ca II feature around 8500 A in the spectrum observed at day 42 is also well predicted by the SuperLite code. The observed spectrum at day 42 is not corrected for the effects of dust absorption, while the model spectra do not take these effects into account. This explains the differences in the continuum between the two spectra at the shorter wavelengths.
### Type IIn SN spectra
One of the main motivations behind the development of SuperLite was to model the spectra of astrophysical transients that are affected by strong interaction with a
Figure 6: A comparison of spectra produced with the LTE and the NLTE versions of the SuperLite code for a SN IIP model for SN 1999em at 40 days since explosion. The NLTE implementation is still being tested. The observed spectra from Faran et al. (2014); Leonard et al. (2002) for SN 1999em are shown in the background for comparison. The lightgrey curve shows the observed spectrum obtained at 38d and the grey curve shows the observed spectrum obtained at 42d since the peak luminosity.
CSM. Toward that end, in this section we present the first interacting SN spectra simulated with our code.
#### 5.2.1 A classical SN IIn
We construct a model progenitor star for a typical SN IIn with a ZAMS mass of 19 M\({}_{\odot}\), with the same specifications as described in section 5.1.2 for SN 1999em. In addition, we append a dense H-rich CSM of 0.2 M\({}_{\odot}\) around the progenitor star, which corresponds to a mass-loss rate of 0.025 M\({}_{\odot}\)/yr for 8 years before the core-collapse stage, with the velocity of the stellar wind set to 200 km s\({}^{-1}\). A power law index of 2 is used to construct the CSM density profile around the progenitor star. The model progenitor star is then exploded into the surrounding CSM using STELLA. An explosion energy of 0.78 Bethe is injected at the base of the envelope. A total nickel mass of 0.042 M\({}_{\odot}\) is used for this simulation. The non-homologous expansion of the SN ejecta is tracked by the time-dependent RHD evolution. As earlier, the profiles generated at several checkpoints by STELLA, which include the electron and radiation temperatures as well as the total bolometric luminosity, are used for the SuperLite simulation. We refer to this model as A4; it belongs to a suite of models that we have explored with STELLA for interacting SNe. The complete suite of models, along with the post-processed SuperLite simulations, will be presented in a future publication. The profiles generated by STELLA at 5, 10, 20, and 30 days since the maximum in the bolometric luminosity are used to run the post-processing model simulations with the SuperLite code to generate spectra for the A4 model. In the top 4 panels of Figure 7, the velocity, temperature, density, and velocity-gradient profiles that we use as input for our post-processing simulations with SuperLite are shown for each of these phases. The vertical lines show the location of the Rosseland mean optical depth of 1. The optical depth is set to 0 at the location of the observer at infinity, so the optical depth increases into the profile. As time progresses, the photosphere recedes deeper, traversing the shocked region around 30 days. The bottom 4 panels in Figure 7 show the spectral evolution of the SN ejecta for the A4 model at these selected phases. The strong and narrow emission lines, especially the Balmer series lines H\(\alpha\) and H\(\beta\), can be seen in the spectra. As the ejecta expands and cools down, the continuum shifts toward longer wavelengths. The spectra resulting from our experimental NLTE implementation are also shown in this figure. The LTE and NLTE spectra look very similar until the 30-day phase, when the photosphere traverses the shocked region and the LTE assumption breaks down.
#### 5.2.2 A Luminous SN IIn - SN 2017hcc
We simulated another example of a SN IIn that undergoes stronger CSI. This model is similar to the progenitor of SN 2017hcc, which exploded in a spiral dwarf galaxy of near-solar metallicity at a distance of 73 Mpc (Tonry, 2011). It was classified as a SLSN IIn based on its peak absolute magnitude of -20.7 mag (Prieto et al., 2017). A RHD simulation of the evolution of the SN ejecta was performed with HERACLES using a progenitor star similar to that of SN 2017hcc. The profiles at several different checkpoints during the RHD evolution of the SN ejecta were kindly provided by Luc Dessart, along with the synthetic spectra predicted by CMFGEN (private communication). It is an engineered model with 10 M\({}_{\odot}\) ejecta mass and 5.7 M\({}_{\odot}\) CSM mass, with a mass-loss rate of 0.2 M\({}_{\odot}\)/yr. The radius at the interface between the ejecta and the CSM at the start of the RHD simulation is \(10^{15}\) cm. The maximum velocity of the ejecta is 16,600 km/s and the velocity of the CSM is 100 km/s. The RHD calculations with HERACLES do not include the radioactive decay of \({}^{56}\)Ni. As the power from interaction dominates over that from radioactive decay, this does not affect the results. The profiles are truncated at the HERACLES-predicted electron scattering optical depth of 30. Here, we present the synthetic spectrum produced with SuperLite by post-processing the checkpoint profile at 36 days from the time of explosion for this model and compare it with that predicted by CMFGEN. The profile consists of 80 spatial cells. For the SuperLite simulation, 6000 wavelength groups are used in the range of 1 to 30,000 A. The profiles from HERACLES did not include the radiation temperature, so it was set equal to the electron temperature in our simulations.
The electron density calculated with SuperLite under the LTE assumption qualitatively matches that predicted by CMFGEN, as shown in the left panel of Figure 8. In the right panel of Figure 8, the spectrum predicted by SuperLite under the LTE assumption at 36 days since explosion is compared with that predicted by CMFGEN. The line ratios in this plot show that the H\(\alpha\) line is comparable in strength to the H\(\beta\) line in the spectrum predicted by SuperLite, while the H\(\alpha\) line is stronger in the CMFGEN spectrum. The differences in the line ratios and continuum emission can be attributed to the lack of a radiation temperature structure in the input profile, as well as to the LTE assumption. We are actively working on resolving these discrepancies. We would like to note, however, that the spectra generated with CMFGEN and SuperLite are otherwise similar.
In this paper, we have presented SuperLite, the first publicly available Monte Carlo radiation transport code that is optimized to compute synthetic spectra for astrophysical transients strongly affected by circumstellar interaction (CSI). SuperLite enables users to post-process the hydrodynamic profiles of interacting
Figure 7: In panel (a), the velocity, electron temperature, electron density, and density profiles at 5, 10, 20, and 30 days since the maximum in the bolometric luminosity for the SN IIn model A4 are shown. The vertical dashed lines in the velocity plot mark the radial coordinate of the cell where the Rosseland mean optical depth is 1. As the ejecta expands and cools down, the photosphere recedes into the shocked region. In panel (b), the spectra produced with the LTE and NLTE versions of SuperLite corresponding to the same phases are shown. The spectra are normalized to the H\(\alpha\) line strength for ease of comparison. The NLTE version of the code is still being tested.
transient events with non-homologously expanding outflows. SuperLite is published2 under the GNU3 public license and is free to use and develop. The SuperLite code was developed by significantly modifying the existing SuperNu code (Wollaeger et al., 2013; Wollaeger & van Rossum, 2014). SuperLite uses the multigroup IMC-DDMC approach with accelerated diffusion theory and relativistic corrections of the first order. It has Message-Passing Interface (MPI) and OpenMP capabilities for parallelization of large grids, which allows faster computations. The code uses standard LTE assumptions to calculate the opacities in each domain cell, which include the bound-bound opacities for several thousand lines up to \(Z=30\) and analytic fits for bound-free and free-free opacities. The standard elastic Thomson scattering opacity is also included in the calculations. It has the following enhancements that distinguish the code from its parent code and some other publicly-available SN spectral synthesis codes:
Footnote 2: [https://github.com/gururajw/superlite](https://github.com/gururajw/superlite)
* The commonly used assumption that the SN ejecta is homologously expanding (i.e. \(v\propto r\) at any given time \(t\) since explosion) is not made in the SuperLite code. The equations, such as the transport equation to advance the MC particles and the Doppler shift corrections, are solved in their original form without assuming homology. This is important for outflows affected by CSI, where the assumption of homology breaks down.
* We have also implemented and are currently testing the non-LTE treatment for calculating the b-b opacities for several hundred hydrogen lines using rate coefficients that include radiative recombination and photoionization rates, and electron-impact excitation and de-excitation rates, for excited states of hydrogen up to the principal quantum number of \(n=10\).
We have tested SuperLite by performing some standard MCRT tests, as well as by successfully simulating spectra for typical SNe of different types, as demonstrated in sections 4 & 5. We have compared the spectra simulated by our code to those simulated by its parent code SuperNu for a SN Ia and a SN IIP with homologous outflows. We have also compared the spectra simulated by our code for a SN IIP to the observations of a similar SN, and for a SN IIn to the calculations performed by the CMFGEN code. Our code compares well with the other codes and observations for these cases.
With SuperLite, we can model a multitude of transients powered by strong CSI, such as SLSNe, PPISN shell collisions, and collisions with material expelled by a past merger, and we can model luminous, rare, and uncharacteristic transient phenomena. This will help with the interpretation of transient events discovered by the current and next generation high-cadence, wide-field transient surveys conducted by facilities such as the Vera Rubin
Figure 8: Panel (a) shows a comparison of the electron number density calculated with the SuperLite code under the LTE assumption with that from the CMFGEN code for a SN IIn model provided by Luc Dessart (private communication); the two show good agreement. Panel (b) shows a comparison of the spectrum produced with SuperLite under LTE conditions (blue) to that produced by CMFGEN (grey) for the same model. The spectra are normalized to the H\(\alpha\) line strength for comparison. The two codes show good agreement.
Telescope, ZTF, and others, by enabling direct comparisons to observed time-series of transient spectra.
SuperLite, being a Monte Carlo code, is easily extendable to work with 2-D and 3-D geometries. We plan to adapt SuperLite to multi-D in the near future, which will enable it to model spectra for transient events interacting with non spherically-symmetric circumstellar environments and help us explore viewing angle effects. Furthermore, a new implementation that implicitly couples a hydrodynamic solver with MCRT will be introduced to properly model the feedback of radiation into the expanding material, similar to that presented by Noebauer and Sim (2019); Roth and Kasen (2015). This will further enhance the capability of SuperLite as an open-source radiation transport tool to accelerate the interpretation of new intriguing transient astrophysical phenomena that challenge our understanding of extreme cosmic catastrophes.
We thank Luc Dessart and Sanjana Curtis for kindly providing us their model inputs to test the SuperLite code. We would like to thank J. Craig Wheeler, Chris Fryer, Nathan Smith, and David Branch for useful discussions. GW and EC would like to thank the National Science Foundation (NSF) for their support made possible by the NSF grant AST-1907617. SuperNu (Wollaeger et al., 2013; Wollaeger and van Rossum, 2014), STELLA (Blinnikov and Bartunov, 1993; Blinnikov et al., 1998; Blinnikov and Sorokina, 2004; Blinnikov et al., 2006), MESA (Paxton et al., 2011, 2013, 2015, 2018, 2019),
|
2304.11804 | A note on contact manifolds with infinite fillings | We use spinal open books to construct contact manifolds with infinitely many
different Weinstein fillings in any odd dimension $> 1$, which were previously
unknown for dimensions equal to $4n+1$. The argument does not involve
understanding factorizations in the symplectic mapping class group. | Zhengyi Zhou | 2023-04-24T03:27:59Z | http://arxiv.org/abs/2304.11804v1 | # A note on contact manifolds with infinite fillings
###### Abstract.
We use spinal open books to construct contact manifolds with infinitely many different Weinstein fillings in any odd dimension \(>1\), which were previously unknown for dimensions equal to \(4n+1\). The argument does not involve understanding factorizations in the symplectic mapping class group.
## 1. Introduction
Contact manifolds arise naturally as convex boundaries of symplectic manifolds; it was known by Gromov [14] and Eliashberg [9] in the late 1980s that not all contact manifolds can be realized in such a way. Therefore understanding symplectic fillings of contact manifolds is a fundamental question in contact topology. Such questions were extensively studied by many researchers, starting from the case of no fillings [6, 7, 8, 10, 11, 13, 22, 29], the case of unique fillings (often unique only up to topological type) [4, 8, 12, 23, 28, 30], and, at the other end of this spectrum, the case of infinitely many fillings [25, 26]. As one can always blow up a symplectic filling to change the topology, we need to restrict to Liouville or Weinstein fillings for the question of infinite fillings. Unlike the no-filling and unique-filling situations, which typically depend on rigidity arguments using pseudo-holomorphic curves, the construction of contact manifolds with infinitely many fillings usually uses the topological or flexible side of symplectic topology. The first contact manifold (beyond the trivial case of \(S^{1}\)) with infinitely many different Weinstein fillings was constructed by Ozbagci and Stipsicz [26] in dimension \(3\). Nowadays, there are many constructions with various constraints on the topology of fillings in dimension \(3\), see [1, 2, 3, 5]. Oba [25] generalized Ozbagci and Stipsicz's result to dimension \(4n-1\). Their constructions were based on the open book construction of contact manifolds and on finding infinitely many different factorizations of the monodromy into positive Dehn-Seidel twists in the symplectic mapping class group. Such an approach is most efficient in dimension \(3\), as the symplectic mapping class group agrees with the classical mapping class group in the case of surfaces. In higher dimensions, the symplectic mapping class group is different from the classical mapping class group in general, and much less is known. It is worth pointing out that Lazarev [17] constructed, from a surgical perspective, contact manifolds with many different Weinstein fillings in dimension \(\geq 5\), where the number of fillings can be arbitrarily large.
In this note, we give a new construction of contact manifolds with infinitely many different Weinstein fillings in any dimension. The construction is based on spinal open books, a generalization of contact open books introduced by Lisi, Van Horn-Morris and Wendl [18]. The spinal open book was used by Baykur and Van Horn-Morris [5] to construct contact \(3\)-manifolds which admit infinitely many Weinstein fillings with arbitrarily big Euler characteristics and arbitrarily small signatures. Heuristically speaking, spinal open books arise as the contact boundary of a Lefschetz fibration over a general surface with boundary. Such contact manifolds, especially in dimension \(4\), were studied systematically in [18, 19, 21]. Moreover, there are notions of spinal open books which fiber over Liouville domains of dimension higher than \(2\), see e.g. [20, 24]. In this note, we restrict to the case of the surface base.
**Theorem 1.1**.: _Let \(V\) be the plumbing of two \(T^{*}S^{n}\) along three points. Then the contact boundary \(\partial(\Sigma_{1,1}\times V)\) has infinitely many different Weinstein fillings, where \(\Sigma_{1,1}\) is a genus one Riemann surface with one boundary component, viewed as a Weinstein filling of \(S^{1}\)._
We point out that our construction is local in nature, i.e. if \(V\) contains the domain in Theorem 1.1 as a symplectic subdomain and the homology computation in the proof works, then the conclusion can be drawn for \(\partial(\Sigma_{1,1}\times V)\). For example, it holds for \(V\) in Theorem 1.1 taking boundary connected sum with any Weinstein domain. Moreover, similar phenomena hold for more general plumbings of spheres. Furthermore, when \(n\) is odd, i.e. when the contact manifold is of dimension \(4k-1\), we can take \(V\) to be two \(T^{*}S^{n}\) plumbed at one point.
Our strategy is similar to [25, 26]: the contact boundary \(\partial(\Sigma_{1,1}\times V)\) is the trivial spinal open book over \(\Sigma_{1,1}\), and any representation \(\rho:\pi_{1}(\Sigma_{1,1})\to Symp_{c}(V)\) such that \(\rho(\partial\Sigma_{1,1})=\mathrm{id}\) gives rise to a Weinstein filling by a \(V\)-fiber bundle over \(\Sigma_{1,1}\). Sending a generator of \(\pi_{1}(\Sigma_{1,1})\) to the identity always yields one such representation, hence every element \(\phi\) of \(Symp_{c}(V)\), i.e. the image of the other generator under \(\rho\), induces a filling. Then by understanding the effect of \(\phi\) on the homology of \(V\), we can get infinitely many fillings. In particular, we do not need to consider factorizations in the symplectic mapping class group.
From Theorem 1.1, the next natural question is to strip the question of its classical topological aspect, namely:
**Question 1.2**.: _Are there contact manifolds with infinitely many different Weinstein/Liouville fillings with the same formal data, i.e. as the same differential/almost complex/almost Weinstein manifold (relative to the boundary)?_
We expect the answer to the question to be yes, at least for dimensions high enough. However, to the best of the author's knowledge, we do not even know examples of contact manifolds with two fillings that are smoothly the same but symplectically different. Unlike Theorem 1.1, rigidity techniques, e.g. holomorphic curves or sheaves, must enter the picture to solve the above question. Using spinal open books, we have candidates for Question 1.2, at least in dimension \(4n+1\).
**Question 1.3**.: _Let \(\phi\in Symp_{c}(V)\) be generated by eighth powers of the Dehn-Seidel twist along Lagrangian spheres in \(V\), where \(\dim V=4n\). Is the symplectic fiber bundle induced from \(\pi_{1}(\Sigma_{1,1})\to Symp_{c}(V)\) by sending one generator to \(\phi\) and the other to \(\mathrm{id}\) symplectomorphic to \(\Sigma_{1,1}\times V\)?_
Clearly, the motivation behind such a question is the fact that eighth powers of the Dehn-Seidel twist are smoothly isotopic to identity in dimension \(4n\)[15], yet not symplectically isotopic to identity [4, 27]. In dimensions \(4\) and \(12\), one can replace the eighth power with a square.
### Acknowledgments
The author is grateful to Fabio Gironella for productive discussions and interest in the project, and to Samuel Lisi, Takahiro Oba, Chris Wendl for helpful comments. The author is supported by National Natural Science Foundation of China under Grant No. 12288201 and 12231010.
## 2. Proof
Let \(V\) be a Liouville domain and \(\phi\in\pi_{0}(Symp_{c}(V))\). We can endow
\[\Sigma_{g,1}\times\partial V\cup_{S^{1}\times\partial V}V_{\phi}\]
with a contact structure by the Thurston-Winkelnkemper construction, see [18, §2.1], where \(\Sigma_{g,1}\) is a genus \(g\) surface with one boundary component and \(V_{\phi}\) is the mapping torus \(V\times[0,1]/(x,0)\sim(\phi(x),1)\). This is a very special case of the spinal open book considered in [18], where the vertebrae (\(\Sigma_{g,1}\) here) can have more
boundary components and be disconnected. In this paper, we only consider the case of \(g=1\) and \(\phi=\mathrm{id}\). Then the contact manifold is the contact boundary \(\partial(\Sigma_{1,1}\times V)\).
**Lemma 2.1** ([5, 18]).: _Let \(\Sigma\) be a connected Riemann surface with boundary and \(V\) be a Weinstein domain. Then any representation \(\pi_{1}(\Sigma)\to\pi_{0}(Symp_{c}(V))\) mapping the boundary to \(\mathrm{id}\) gives rise to a Weinstein filling of \(\partial(\Sigma\times V)\), which is diffeomorphic to the \(V\)-bundle over \(\Sigma\) determined by \(\pi_{1}(\Sigma)\to\pi_{0}(Symp_{c}(V))\)._
More generally, if the monodromy of the spinal open book is \(\phi\) and there exist \(\phi_{1},\psi_{1},\dots,\phi_{g},\psi_{g}\in Symp_{c}(V)\) and Dehn-Seidel twists \(\tau_{1},\dots,\tau_{k}\) along some exact Lagrangian spheres in \(V\) such that
\[\phi=\prod\tau_{i}\prod[\phi_{i},\psi_{i}]\]
Then the spinal open book given by \((\Sigma_{g,1},V,\phi)\) is the contact boundary of a symplectic Lefschetz fibration over \(\Sigma_{g,1}\) with \(k\) singular fibers. When \(V\) is Weinstein, the total space of the Lefschetz fibration is a Weinstein filling of the spinal open book.
**Lemma 2.2**.: _Let \(V_{\phi}\) be the mapping torus, then we have short exact sequences_
\[0\to\ker(\phi_{*}-\mathrm{id})\to H_{*}(V_{\phi})\to\mathrm{coker}(\phi_{*}- \mathrm{id})[-1]\to 0\]
Proof.: The homology of \(V_{\phi}\) can be computed from the cone of \(C_{*}(V)\stackrel{{\phi_{*}-\mathrm{id}}}{{\longrightarrow}}C_{*} (V)\). The induced long exact sequence implies the short exact sequences above.
More generally, let \(V_{\phi\vee\mathrm{id}}\) be the \(V\)-fiber bundle over \(S^{1}\lor S^{1}\) (or homotopically equivalently over \(\Sigma_{1,1}\)), such that the monodromy over one \(S^{1}\) is \(\phi\) and is \(\mathrm{id}\) over the other \(S^{1}\). Then we have a short exact sequence
\[0\to\ker(\phi_{*}-\mathrm{id})|_{H_{k}(V;\mathbb{Z})}\to H_{k}(V_{\phi\vee \mathrm{id}};\mathbb{Z})\to H_{k-1}(V;\mathbb{Z})\oplus\mathrm{coker}(\phi_{* }-\mathrm{id})|_{H_{k-1}(V;\mathbb{Z})}\to 0\]
for \(k\geq 1\). In particular, when \(V\) is a Weinstein domain of dimension \(2n\), the cardinality of the torsion of \(H_{n+1}(V_{\phi\vee\mathrm{id}})\) will be at least that of the torsion of \(\mathrm{coker}(\phi_{*}-\mathrm{id})|_{H_{n}(V;\mathbb{Z})}\).
**Lemma 2.3** (Picard-Lefschetz formula, [16, (6.3.3)]).: _Let \(L\) be a Lagrangian \(n\)-sphere in an exact domain \(W\) and \(\tau_{L}\) the Dehn-Seidel twist along \(L\), then \((\tau_{L})_{*}:H_{*}(W;\mathbb{Z})\to H_{*}(W;\mathbb{Z})\) is given by_
\[(\tau_{L})_{*}(c)=\left\{\begin{array}{ll}c+(-1)^{\frac{(n+1)(n+2)}{2}} \langle\,c,[L]\,\rangle[L],&c\in H_{n}(W;\mathbb{Z});\\ c,&c\in H_{j}(W;\mathbb{Z}),j\neq n.\end{array}\right.\]
_where \(\langle\,\cdot,\cdot\,\rangle:H_{n}(W;\mathbb{Z})\otimes H_{n}(W;\mathbb{Z})\to\mathbb{Z}\) is the intersection product._
Let \(V^{2n}\) be the plumbing of two \(T^{*}S^{n}\) along three points. We use \(L_{1},L_{2}\) to denote the two Lagrangian spheres, oriented such that \(\langle\,[L_{1}],[L_{2}]\,\rangle=(-1)^{\frac{n(n+1)}{2}}3\) (see Footnote 1). When \(n>1\), under the free basis \([L_{1}],[L_{2}]\) of \(H_{n}(V^{2n};\mathbb{Z})\), by the Picard-Lefschetz formula, the effect of the Dehn-Seidel twists \(\tau_{L_{1}},\tau_{L_{2}}\) on \(H_{n}(V^{2n};\mathbb{Z})\) is given by
Footnote 1: That is, we orient \(L_{2}\) by the induced orientation of \(T^{*}_{q}L_{1}\), where \(q\) is an intersection point. Then the intersection number using the orientation on \(T^{*}L_{1}\) induced from the orientation of \(L_{1}\) is \(3\). The extra sign comes from the fact that the symplectic orientation (using \(-\mathrm{d}\sum p_{i}\mathrm{d}q_{i}\), i.e. the standard symplectic orientation on \(\mathbb{R}^{2n}=T^{*}\mathbb{R}^{n}\)) differs from the orientation on \(T^{*}L_{1}\) induced from that of \(L_{1}\) by \((-1)^{\frac{n(n+1)}{2}}\).
\[\left[\begin{array}{cc}1&-3\\ 0&1\end{array}\right],\left[\begin{array}{cc}1&0\\ 3&1\end{array}\right]\]
for \(n\) odd respectively, and
\[\left[\begin{array}{cc}-1&-3\\ 0&1\end{array}\right],\left[\begin{array}{cc}1&0\\ -3&-1\end{array}\right]\]
for \(n\) even, respectively. When \(n=1\), \(H_{1}(V;\mathbb{Z})=\mathbb{Z}^{4}\), and in the basis consisting of \([L_{1}],[L_{2}]\) and two other cycles (with suitable orientations) glued from arcs of \(L_{1}\) and \(L_{2}\), the map \((\tau_{L_{1}})_{*}\) is given by
\[\left[\begin{array}{cccc}1&-3&-1&-1\\ 0&1&0&0\\ 0&0&1&0\\ 0&0&0&1\end{array}\right]\]
Proof of Theorem 1.1.: Let \(\gamma_{1},\gamma_{2}\) be two loops in \(\Sigma_{1,1}\), representing the generators of the fundamental group. We consider the representation \(\rho:\pi_{1}(\Sigma_{1,1})\to Symp_{c}(V),\gamma_{1}\mapsto\phi,\gamma_{2}\mapsto \operatorname{id}\). By Lemma 2.1, it gives rise to a filling of \(\partial(\Sigma_{1,1}\times V)\), which is homotopy equivalent to \(V_{\phi\vee\operatorname{id}}\).
When \(n>1\) is odd, we take \(\phi\) to be \(\tau_{L_{1}}\). Since \(\phi_{*}^{k}\) on \(H_{n}(V;\mathbb{Z})\) is given by
\[\left[\begin{array}{cc}1&-3k\\ 0&1\end{array}\right]\]
Then by the discussion after Lemma 2.2, we know that \(H_{n+1}(V_{\phi^{k}\vee\operatorname{id}};\mathbb{Z})\) has a torsion of \(\mathbb{Z}/3k\). As a consequence, each \(k\) yields a different Weinstein filling.
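For readers who want to see the torsion bookkeeping explicitly, the following short computation (our illustrative sketch, not part of the original argument; it assumes a SymPy version exposing `smith_normal_form`) recovers the \(\mathbb{Z}/3k\) torsion from the Smith normal form of \(\phi_{*}^{k}-\mathrm{id}\).

```python
# Sketch: coker(phi_*^k - id) on H_n(V; Z) = Z^2 for phi = tau_{L_1}, n > 1 odd.
# The Smith normal form diag(3k, 0) exhibits coker = Z/3k (+) Z, whose torsion
# Z/3k is the torsion of H_{n+1}(V_{phi^k v id}; Z) by the exact sequence above.
from sympy import Matrix, ZZ, eye
from sympy.matrices.normalforms import smith_normal_form

for k in range(1, 5):
    phi_k = Matrix([[1, -3 * k], [0, 1]])                   # matrix of (tau_{L_1}^k)_*
    print(k, smith_normal_form(phi_k - eye(2), domain=ZZ))  # diag(3k, 0)
```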
When \(n=1\), we take \(\phi\) to be \(\tau_{L_{1}}\). Then \(\phi_{*}^{k}\) on \(H_{n}(V;\mathbb{Z})\) is given by
\[\left[\begin{array}{cccc}1&-3k&-k&-k\\ 0&1&0&0\\ 0&0&1&0\\ 0&0&0&1\end{array}\right]\]
We know that \(H_{2}(V_{\phi^{k}\vee\operatorname{id}};\mathbb{Z})\) has a torsion of \(\mathbb{Z}/k\). As a consequence, each \(k\) yields a different Weinstein filling.
When \(n\) is even, we take \(\phi\) to be \(\tau_{L_{1}}\circ\tau_{L_{2}}\). Then \(\phi_{*}\) on \(H_{n}(V;\mathbb{Z})\) is given by
\[\left[\begin{array}{cc}8&3\\ -3&-1\end{array}\right]\]
This matrix has positive eigenvalues \(\lambda_{1}=\frac{7+3\sqrt{5}}{2}>1,\lambda_{2}=\frac{7-3\sqrt{5}}{2}<1\). As a consequence, we have
\[|\det((\phi_{*})^{k}-\operatorname{id})|=|2-\lambda_{1}^{k}-\lambda_{2}^{k}|,\]
which grows exponentially. The torsion of \(H_{n+1}(V_{\phi^{k}\vee\operatorname{id}})\) is of size \(|2-\lambda_{1}^{k}-\lambda_{2}^{k}|\), which yields infinitely many different fillings as before.
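As a sanity check on this growth, here is a minimal integer-arithmetic sketch (ours, for illustration only) tabulating \(|\det(\phi_{*}^{k}-\mathrm{id})|\).

```python
# Sketch: |det(phi_*^k - id)| = |2 - lambda_1^k - lambda_2^k| in the even case.
def matmul2(X, Y):
    return [[sum(X[i][r] * Y[r][j] for r in range(2)) for j in range(2)]
            for i in range(2)]

phi = [[8, 3], [-3, -1]]              # (tau_{L_1} o tau_{L_2})_* on H_n(V; Z)
P = [[1, 0], [0, 1]]
for k in range(1, 6):
    P = matmul2(P, phi)               # P = phi_*^k
    d = (P[0][0] - 1) * (P[1][1] - 1) - P[0][1] * P[1][0]
    print(k, abs(d))                  # 5, 45, 320, 2205, 15125, ...
```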
When \(n\) is odd, we can simply take \(V\) to be the plumbing of two copies of \(T^{*}S^{n}\) at one point. Then \(\tau_{L_{1}}^{k}\) acts on \(H_{n}(V;\mathbb{Z})\) by
\[\left[\begin{array}{cc}1&-k\\ 0&1\end{array}\right]\]
which yields infinitely many fillings.
|
2307.08805 | Decoding chirality in circuit topology of a self entangled chain through
braiding | Circuit topology employs fundamental units of entanglement, known as soft
contacts, for constructing knots from the bottom up, utilising circuit topology
relations, namely parallel, series, cross, and concerted relations. In this
article, we further develop this approach to facilitate the analysis of
chirality, which is a significant quantity in polymer chemistry. To achieve
this, we translate the circuit topology approach to knot engineering into a
braid-theoretic framework. This enables us to calculate the Jones polynomial
for all possible binary combinations of contacts in cross or concerted
relations and to show that, for series and parallel relations, the polynomial
factorises. Our results demonstrate that the Jones polynomial provides a
powerful tool for analysing the chirality of molecular knots constructed using
circuit topology. The framework presented here can be used to design and
engineer a wide range of entangled chains with desired chiral properties, with
potential applications in fields such as materials science and nanotechnology. | Jonas Berx, Alireza Mashaghi | 2023-07-17T19:40:57Z | http://arxiv.org/abs/2307.08805v1 | # Decoding chirality in circuit topology of a self entangled chain through braiding
###### Abstract
Circuit topology employs fundamental units of entanglement, known as soft contacts, for constructing knots from the bottom up, utilising circuit topology relations, namely parallel, series, cross, and concerted relations. In this article, we further develop this approach to facilitate the analysis of chirality, which is a significant quantity in polymer chemistry. To achieve this, we translate the circuit topology approach to knot engineering into a braid-theoretic framework. This enables us to calculate the Jones polynomial for all possible binary combinations of contacts in cross or concerted relations and to show that, for series and parallel relations, the polynomial factorises. Our results demonstrate that the Jones polynomial provides a powerful tool for analysing the chirality of molecular knots constructed using circuit topology. The framework presented here can be used to design and engineer a wide range of entangled chains with desired chiral properties, with potential applications in fields such as materials science and nanotechnology.
Circuit topology, molecular engineering, chirality, knot theory
## I Introduction
Molecular chirality plays a crucial role in biology and soft matter and can generally be classified into chemical, geometrical, and topological chirality [1]. It follows that there are likely couplings between these different types of chirality. Indeed, previous research has shown that chemical chirality or helicity, which refers to the inability of a molecule to switch between enantiomeric configurations by means of intramolecular operations, influences the topological chirality of macromolecular structures, such as polymers. Furthermore, it was recently shown that continuously variable chiral geometries that consist of chemical building blocks with discrete binary chirality can emerge for nano-structured microparticles with bowtie shapes [2]. In generic worm-like chain (WLC) models, it was demonstrated that chiral coupling between segments breaks topological mirror symmetry, such that molecular knots formed by closing open-ended chains with a given helicity prefer one macroscopic handedness over the other [3]. The ability to tune the chirality in complex molecular configurations is expected to lead to novel innovations in, for example, the design and synthesis of molecular machines [4; 5; 6] or the development of chiral photonics [7].
Complex knotted chains can be constructed using multiple entangled fundamental building blocks, which we refer to as "soft contacts". These chain configurations are created from the bottom up using an approach called circuit topology. By combining different soft contacts in specific ways, it is possible to engineer knots with tailored properties for specific applications. The ability to design and control the chirality of these structures is particularly important, as it can influence their stability, thermodynamics, and response to external stimuli. In these composite systems, there exists an entropic attraction between entangled structures on the same chain [8; 9], which can even pass through one-another [10]. Experiments with the DNA of a T4 bacteriophage that was stretched in a microfluidic channel using a planar elongational electric field confirmed the theoretical and numerical results related to this entropic attraction [11]. Importantly, the chirality of these structures influences the mutual attraction and the thermodynamics of the macromolecular chain. In particular, a stretched polymer containing two soft contacts possesses a free-energy minimum when both structures are intertwined, and the depth of the minimum is determined by the relative chirality [12]. The relative chirality, in turn, determines the stability of the knotted chain configuration.
Circuit topology [13] works in 3D and acknowledges the different chiralities that can be present in molecular knots, and encodes chirality in its string notation. To better understand the circuit topology approach, complementary approaches from knot theory can be employed. For example, an Alexander polynomial approach was recently utilised to find parallels between circuit topology operations and knot theory [14]. However, the Alexander polynomial is incapable of distinguishing between enantiomorphic configurations of the same entangled structure. In order to be able to use circuit topology as an effective tool for the characterisation of molecular chains, we need to facilitate the analysis of chirality. In this work, we will do this by calculating the Jones polynomial for binary combinations of soft contacts in all possible circuit topological arrangements. To do so, we will take the approach originally introduced by Jones, i.e., by means of a braid representation into the Temperley-Lieb algebra [15].
## II Soft-contacts: Building Blocks of Molecular Knots
Circuit topology, a theory originally developed to investigate the arrangement of contacts in a folded linear chain [16], has recently been generalised to encode chain entanglement [17; 14]. The smallest structural unit of entanglement in generalised circuit topology (gCT) is the soft contact, or s-contact. This unit is the simplest stable entangled structure that does not change when the chain is deformed. We consider a linear chain that is folded once, and where the end is passed through the resulting loop, creating another loop. There are only four non-trivial configurations that cannot be untied by pulling the ends. One can identify two crossings that form the loops and define a chirality according to the right-hand rule. If both loops have different chirality, the structure will disentangle, and the resulting configuration is not an s-contact. The two remaining crossings define how both loops are locked together; if the chain end is passed through the loops in the same direction as the chirality, we call the resulting s-contact "even". When it is passed through in the opposite direction we call the contact "odd". We will denote the resulting contacts by a string with superscripts \(+e,\,-e,\,+o,\,-o\), depending on the chirality \(\{+,-\}\) of the loop and the manner in which the chain end passes through the loop \(\{e,o\}\). Only four s-contacts are necessary to construct the complete gCT framework. To make the connection with classical circuit topology (i.e., with hard contacts) more apparent, we will also introduce contact sites, i.e., regions of the chain where the chain gets trapped. Since the soft contacts can be deformed without altering the global structure, the location of these sites is not exact, allowing for a more flexible use of the term. We denote these sites by a red dot in Fig. 1, where we list all possible s-contacts.
The central question is now: how can s-contacts be arranged to construct the topology of a linear chain? Before addressing this question, we first discuss the necessary mathematical tools we will use in this work by means of the \(A^{+e}A\) and \(A^{-e}A\) s-contacts.
## III Braid Closures and Invariant Polynomials
By joining the ends of the \(A^{+e}A\) and \(A^{-e}A\) contacts without introducing additional crossings, we create the right-handed and left-handed trefoil knots, \(K_{\pm}=\overline{A^{\pm e}A}\), respectively. In Alexander-Briggs notation, these read as \(3_{1}^{\pm}\), where the superscript denotes the chirality. To proceed, we perform the Yamada-Vogel algorithm [18; 19] to turn the knot into a closed braid representation, see Fig. 2.
Since the Yamada-Vogel algorithm is quite involved and depends on the specific sign convention one adheres to, we refer the reader to Ref. [19] for a detailed discussion and more examples. Subsequently "cutting" the closed braid yields a family of algebraic braids that are related to one another by Markov moves, i.e., conjugation and (de-)stabilisation, that can be described by means of a braid word \(\beta_{K}\). The braid word is a string of operators \(\sigma_{i}^{\pm 1}\) that describe whether the \(i\)-th strand crosses over (positive exponent \(+1\)) or under (negative exponent \(-1\)) the \((i+1)\)-th strand. For the trefoil knots, the braid word is simply \(\beta_{K_{\pm}}=\sigma_{1}^{\pm 3}\), where the positive index corresponds to the right-handed trefoil knot \(K_{+}\) and the negative index to the left-handed trefoil knot \(K_{-}\). The writhe \(w\) of the braid is then easily found by taking the sum of the exponents in the braid word, i.e., \(w_{\pm}=\pm 3\).
To find polynomials that describe the knot resulting from the braid closure, we will look at two techniques (although others exist that may be simpler): the reduced Burau representation to find the Alexander polynomial \(\Delta_{K}(t)\), and the Kauffman bracket approach to find the Jones polynomial \(J_{K}(t)\).
Figure 1: The four fundamental soft contacts needed in the circuit topology framework (top), together with a possible braid representation (bottom). The braid is read from left to right and from bottom to top.
Figure 3: The fundamental braid operators \(\sigma_{i}^{\pm 1}\) indicating over and undercrossings for the \(i\)-th strand.
Figure 2: A simple illustration of the Yamada-Vogel algorithm to convert the right-handed trefoil \(3_{1}^{+}\), corresponding to the closure (denoted by the dashed line) of \(A^{+e}A\), into the braid \(\beta=\sigma_{1}^{3}\).
The Burau representation \(B\) of a braid with index \(n\) is a matrix representation of the operators constituting the braid word. The standard generators \(\sigma_{i}\) of the braid group \(B_{n}\) can explicitly be described in this representation by the matrices
\[\sigma_{i}\to\begin{pmatrix}I_{i-1}&0&0&0\\ 0&1-t&t&0\\ 0&1&0&0\\ 0&0&0&I_{n-i-1}\end{pmatrix}\,,\qquad i=1,\ldots,n-1 \tag{1}\]
where \(0<t\leq 1\) and \(I_{n}\) is the \(n\times n\) identity matrix. Using the block structure, one easily checks that each \(\sigma_{i}\) is invertible, which yields \(\sigma_{i}^{-1}\). The Burau matrix \(B\) for the braid can then be found by matrix multiplication of the generators. Note that since the Burau matrices are row-stochastic, the representation is not irreducible. The generators \(\tilde{\sigma}_{i}\) in the _reduced_ Burau representation are given by
\[\tilde{\sigma}_{1}\to\begin{pmatrix}-t&1&0\\ 0&1&0\\ 0&0&I_{n-3}\end{pmatrix}\,,\qquad\tilde{\sigma}_{n-1}\to\begin{pmatrix}I_{n-3}&0&0\\ 0&1&0\\ 0&t&-t\end{pmatrix}\,, \tag{2}\] \[\tilde{\sigma}_{i}\to\begin{pmatrix}I_{i-2}&0&0&0&0\\ 0&1&0&0&0\\ 0&t&-t&1&0\\ 0&0&0&1&0\\ 0&0&0&0&I_{n-i-2}\end{pmatrix}\,,\qquad i=2,\ldots,n-2\]
and for \(n=2\), \(\tilde{\sigma}_{1}=-t\). The relation between the reduced Burau matrix \(\tilde{B}\) and the Alexander polynomial \(\Delta_{K}(t)\) for the braid closure \(K\) is given by
\[\Delta_{K}(t)=\frac{1-t}{1-t^{n}}\det(I_{n-1}-\widetilde{B})\,. \tag{3}\]
Let us now explicitly perform the calculation for the \(A^{+e}A\) and \(A^{-e}A\) contacts. Multiplying the matrices corresponding to the reduced operators for \(\beta_{\pm}\), we get the matrix \(\tilde{B}_{\pm}=-t^{\pm 3}\). The Alexander polynomials are then determined by inserting \(\tilde{B}_{\pm}\) into equation (3), i.e.,
\[\Delta_{K_{\pm}}(t)=\begin{cases}1-t+t^{2}&\text{for }K_{+}\\ t^{-3}-t^{-2}+t^{-1}&\text{for }K_{-}\,.\end{cases} \tag{4}\]
Although both Laurent polynomials may seem to be different, the Alexander polynomials are only unique up to multiplication by the Laurent monomial \(\pm t^{n}\). If we make the choice to normalise \(\Delta_{K_{\pm}}(t)\) in such a manner that the constant term is positive, we can easily see that both Laurent polynomials are, in fact, equal. The consequence of this result is that the Alexander polynomial is insufficient to distinguish between the different chiralities for soft contacts.
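The pipeline from Eqs. (2)-(3) is easy to mechanise. The SymPy sketch below is our illustration (not code from the paper); it assumes the reduced Burau convention of Eq. (2), and its outputs agree with the stated polynomials up to the unit \(\pm t^{m}\) ambiguity just discussed.

```python
# Sketch: Alexander polynomials of braid closures via the reduced Burau
# representation, Eqs. (2)-(3).
from sympy import Matrix, eye, symbols, cancel

t = symbols('t')

def sigma_reduced(i, n):
    """(n-1)x(n-1) reduced Burau matrix of sigma_i in B_n, as in Eq. (2)."""
    if n == 2:
        return Matrix([[-t]])
    M = eye(n - 1)
    if i == 1:
        M[0, 0], M[0, 1] = -t, 1
    elif i == n - 1:
        M[n - 2, n - 3], M[n - 2, n - 2] = t, -t
    else:                                   # 2 <= i <= n-2
        M[i - 1, i - 2], M[i - 1, i - 1], M[i - 1, i] = t, -t, 1
    return M

def alexander(word, n):
    """word lists nonzero ints: +i for sigma_i, -i for sigma_i^{-1}."""
    B = eye(n - 1)
    for g in word:
        s = sigma_reduced(abs(g), n)
        B = B * (s if g > 0 else s.inv())
    return cancel((1 - t) / (1 - t**n) * (eye(n - 1) - B).det())

print(alexander([1, 1, 1], 2))        # t**2 - t + 1: Eq. (4), K_+
print(alexander([-1, -1, -1], 2))     # (t**2 - t + 1)/t**3: Eq. (4), K_-
print(alexander([1, -2, 1, -2], 3))   # figure-eight braid: Eq. (9) up to a unit
```

The last line uses the standard figure-eight braid \((\sigma_{1}\sigma_{2}^{-1})^{2}\) rather than a braid read off from Fig. 1.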
We now shift our attention to the Jones polynomial. Although this polynomial can be calculated easily by means of the Kauffman bracket algorithm applied directly to the planar projection of the trefoil knots, we want to keep the braid representation as our starting point. To proceed, we define the homomorphism on \(n\) strands \(\rho_{n}:B_{n}\to\mathrm{TL}_{n}\) between the braid group \(B_{n}\) and the Temperley-Lieb algebra \(\mathrm{TL}_{n}\) over the ring \(\mathbb{Z}[A,A^{-1}]\) as follows:
\[\rho_{n}(\sigma_{i}) =A\mathbb{1}_{n}+A^{-1}U_{i}\,, \tag{5}\] \[\rho_{n}(\sigma_{i}^{-1}) =A^{-1}\mathbb{1}_{n}+AU_{i}\,,\]
where the \(U_{i}\) are the generators of the TL algebra. By mapping the braid word \(\beta_{K}\) to a multiplication of factors from (5) and using the Jones relations for multiplications of \(U_{i}\), we find a polynomial in \(U_{t}\), where \(t\) indexes all fundamental generators in the Temperley-Lieb algebra. The number of such generators for a braid of \(n\) strands is equal to the Catalan number \(C_{n}=\binom{2n}{n}/(n+1)\). The coefficient of \(U_{t}\) is defined as \(\langle\beta_{K}|t\rangle\), such that the expression for the braid becomes
\[\rho_{n}(\beta_{K})=\sum_{t}\langle\beta_{K}|t\rangle U_{t}\,. \tag{6}\]
We define the bracket for an operator \(U_{t}\) as \(\delta^{||U||}\), where \(\delta=-A^{2}-A^{-2}\) and \(||U||\) is the number of components in the unlink minus one, obtained by taking the closure of the Temperley-Lieb diagrams corresponding to \(U_{t}\). A detailed discussion on the algorithm can be found in [20].
The normalised bracket polynomial, denoted by \(\langle\cdot\rangle\), for the braid representations \(\beta_{\pm}\) of the even \(A^{\pm e}A\) contacts is then
\[\langle\beta_{\pm}\rangle=A^{\mp 4}+A^{\mp 12}-A^{\mp 16}\,. \tag{7}\]
By inserting \(A=t^{-1/4}\), we finally get the Jones polynomials,
\[J_{K_{\pm}}(t)=\begin{cases}t+t^{3}-t^{4}&\text{for }K_{+}\\ t^{-1}+t^{-3}-t^{-4}&\text{for }K_{-}\,,\end{cases} \tag{8}\]
where it can now easily be seen that \(J_{K_{+}}(t)=J_{K_{-}}(1/t)\). Since the Jones polynomial is already normalised, there is no freedom in choosing a monomial prefactor, making the polynomials unique. Hence, the Jones polynomial can differentiate between different chiralities. Note that the Jones polynomial can only be used as proof of the chirality of a knot, not as proof of amphichirality.
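To make the passage from Eqs. (5)-(6) to Eqs. (7)-(8) concrete, the following sketch (ours) stores an element \(a\mathbb{1}+bU_{1}\) of \(\mathrm{TL}_{2}\) as a pair, multiplies out \(\rho_{2}(\sigma_{1}^{\pm 3})\), closes up using \(\langle\mathbb{1}\rangle=\delta\) and \(\langle U_{1}\rangle=1\), and applies the standard writhe normalisation \((-A^{3})^{-w}\) before substituting \(A=t^{-1/4}\).

```python
# Sketch: Jones polynomials of the trefoils via TL_2, using U_1^2 = delta*U_1.
from sympy import symbols, Rational, expand, simplify

A, t = symbols('A t', positive=True)
delta = -A**2 - A**(-2)

def tl2_mul(x, y):
    # (a1 + b1*U)(a2 + b2*U) = a1*a2 + (a1*b2 + b1*a2 + b1*b2*delta)*U
    a1, b1 = x
    a2, b2 = y
    return (a1 * a2, a1 * b2 + b1 * a2 + b1 * b2 * delta)

def jones_trefoil(w):                          # braid word sigma_1^w, w = +-3
    gen = (A, 1 / A) if w > 0 else (1 / A, A)  # Eq. (5)
    x = (1, 0)
    for _ in range(abs(w)):
        x = tl2_mul(x, gen)
    a, b = x
    bracket = a * delta + b                    # closures: <1> = delta, <U_1> = 1
    f = (-A**3) ** (-w) * bracket              # writhe normalisation
    return expand(simplify(f.subs(A, t ** Rational(-1, 4))))

print(jones_trefoil(3))    # t + t**3 - t**4:          J_{K_+} of Eq. (8)
print(jones_trefoil(-3))   # 1/t + t**(-3) - t**(-4):  J_{K_-} of Eq. (8)
```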
\begin{table}
\begin{tabular}{c c c} \hline \(A^{+e}A\) & \(A^{-e}A\) & \(A^{\pm o}A\) \\ \hline \(3_{1}^{+}\) & \(3_{1}^{-}\) & \(4_{1}\) \\ \hline \end{tabular}
\end{table}
Table 1: The knots resulting from the closure of the fundamental s-contacts. Green and red colours indicate that knots have opposite chirality.
What about the odd s-contacts? Repeating the previous calculations for the knots \(K_{\pm}=\overline{A^{\pm o}A}\) resulting from the closures of the corresponding s-contacts yields the Alexander polynomial
\[\Delta_{K_{\pm}}(t)=1-3t+t^{2}\,, \tag{9}\]
for both s-contact chiralities. This was the expected result, as positive and negative chiralities are simply reflections of the contact in the plane. The Jones polynomials are
\[J_{K_{\pm}}(t)=t^{2}-t+1-t^{-1}+t^{-2}\,. \tag{10}\]
They are also the same. This of course makes sense, since the closure of the \(A^{\pm o}A\) contacts is the figure-eight knot (\(4_{1}\)), which is known to be amphichiral, i.e., capable of being continuously deformed into its own reflection in the plane.
We list the knots obtained by closing the different fundamental s-contacts in Table 1. The chirality associated with the closure of a specific s-contact is given by its colour: green indicates that the exponent of largest absolute value in the corresponding Jones polynomial is _positive_, while red indicates that it is _negative_. Blue indicates that the resulting closure is _amphichiral_. Red and green colours hence pertain to similar knots with opposite chiralities.
## IV Generalised circuit topology
We now return to circuit topology's central question: how can s-contacts be combined to build all known molecular knots? The CT framework focuses on pairwise relations between contacts \(A\) and \(B\). Only three relations can be defined for soft contacts: the series (\(S\)), parallel (\(P\)) and cross (\(X\)) configurations, with corresponding string notation \(AABB\), \(ABBA\) and \(ABAB\), respectively. A fourth category is possible if one allows for one shared contact site, the so-called concerted \(C\) contacts, with string notation \((AB)AB\). These relations are visualised in Fig. 4.
Let us start by considering the \(S\) and \(P\) configurations. Suppose we concatenate two \(A^{+e}A\) contacts (the series \(S\) configuration in the language of CT), resulting in \(A^{+e}AB^{+e}B\) and subsequently close the ends. The resulting knot \(K^{\prime}_{+}=\overline{A^{+e}AB^{+e}B}\) is in fact the connected sum of the two individual knots, i.e., \(K^{\prime}_{+}=K_{+}\,\#\,K_{+}\), where \(K_{+}=\overline{A^{+e}A}\), and hence the Alexander polynomial \(\Delta_{K^{\prime}_{+}}(t)\) is the product of the polynomials of the constituent knots, i.e.,
\[\Delta_{K^{\prime}_{+}}(t)=\Delta_{K_{+}}^{2}(t)=1-2t+3t^{2}-2t^{3}+t^{4}\,, \tag{11}\]
the closure \(K^{\prime}_{+}\) being the so-called granny knot. If, however, one performs a similar calculation for a series combination of two \(A^{-e}A\) contacts, i.e., for \(A^{-e}AB^{-e}B\), or a series combination of \(A^{+e}A\) and \(A^{-e}A\), i.e., for \(A^{+e}AB^{-e}B\), the same Alexander polynomial as in equation (11) arises. For the \(A^{-e}AB^{-e}B\) arrangement this does not present a problem, since it is again a reflection of the \(A^{+e}AB^{+e}B\) series contacts in the plane. The series combination \(A^{+e}AB^{-e}B\) (i.e., the square knot upon closure), however, cannot be related to the other configurations in any manner, and so the Alexander polynomial is insufficient to resolve this degeneracy.
Conversely, since the Jones polynomial can also be factored for connected sums, for the knot \(K^{\prime}_{+}=\overline{A^{+e}AB^{+e}B}\) it becomes
\[J_{K^{\prime}_{+}}(t)=J_{K_{+}}^{2}(t)=t^{8}-2t^{7}+t^{6}-2t^{5}+2t^{4}+t^{2}\,, \tag{12}\]
while for the knot \(K^{\prime}_{-}=\overline{A^{-e}AB^{-e}B}\) it becomes
\[J_{K^{\prime}_{-}}(t)=J_{K_{-}}^{2}(t)=t^{-8}-2t^{-7}+t^{-6}-2t^{-5}+2t^{-4}+t ^{-2}\,. \tag{13}\]
The mixed series combination \(K^{\prime}_{0}=\overline{A^{+e}AB^{-e}B}\) then results in a Jones polynomial
\[J_{K^{\prime}_{0}}(t)=-t^{3}+t^{2}-t+3-t^{-1}+t^{-2}-t^{-3}\,. \tag{14}\]
We immediately see from equations (12) - (14) that the Jones polynomials are all different, and that \(J_{K^{\prime}_{+}}(t)=J_{K^{\prime}_{-}}(t^{-1})\).
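These identities can be double-checked by expanding the products directly (our sketch):

```python
# Sketch: Eqs. (12)-(14) as products of the trefoil polynomials of Eq. (8).
from sympy import symbols, expand

t = symbols('t')
Jp = t + t**3 - t**4                 # J_{K_+}
Jm = t**(-1) + t**(-3) - t**(-4)     # J_{K_-}

print(expand(Jp * Jp))   # granny knot, Eq. (12)
print(expand(Jm * Jm))   # its mirror, Eq. (13)
print(expand(Jp * Jm))   # square knot, Eq. (14); invariant under t <-> 1/t
```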
Note that we did not consider the \(P\) configuration of contacts. This is because the \(S\) and \(P\) configurations are identical. One can "shrink" one of the two soft contacts in a series arrangement and slide it through the other contact, until it is located in the loop. Expanding the contact again results in an overall parallel arrangement of the circuit [11]. Hence, both configurations are described by a connected sum between closures of the constituent s-contacts. A good example of this equivalence is given in [13]. In fact, it can be shown that the chain entropy is maximal when the two soft contacts are intertwined, i.e., in the \(P\) configuration, where the distance between the centres of mass is minimal.
Figure 4: Series (\(S\)), parallel (\(P\)), cross (\(X\)) and concerted (\(C\)) configurations of two \(A^{+e}A\) soft contacts, together with the corresponding string notation.
With the polynomial invariants from the closures of the even and odd s-contacts, one can compute the invariant of any arrangement of series or parallel configurations. The polynomials for the cross and concerted configurations, however, are not trivially found by taking connected sums. In the next section, we will devote our attention to these topological arrangements.
## V The cross and concerted configurations
We now consider the nontrivial combinations of s-contacts, i.e., the cross (\(X\)) and concerted (\(C\)) configurations. Although they may seem similar, there are crucial differences between them; \(C\) configurations are only nontrivial for some combinations of s-contacts while nontrivial \(X\) configurations are always possible to construct.
### Cross contacts
For the cross configuration, we will consider all 16 combinations of \(A^{\pm e/o}B^{\pm e/o}AB\), which are not necessarily distinct, see Table 2.
Let us consider a simple example for illustration purposes: the \(A^{+e}B^{+o}AB\) cross configuration. By closing this configuration and performing, e.g., Vogel's algorithm, we get the knot \(K\) with possible braid representation \(\beta_{K}=\sigma_{3}\sigma_{2}^{4}\sigma_{1}\sigma_{2}^{-1}\sigma_{3}^{-1} \sigma_{2}^{-1}\sigma_{1}^{-1}\sigma_{2}^{-1}\) on four strands. The resulting Alexander and Jones polynomials are
\[\begin{split}\Delta_{K}(t)&=1-3t+3t^{2}-3t^{3}+t^ {4}\,,\\ J_{K}(t)&=t^{-1}-1+2t-2t^{2}+2t^{3}-2t^{4}+t^{5}\,, \end{split} \tag{15}\]
which leads to the conclusion that the closure of \(A^{+e}B^{+o}AB\) results in the \(6_{2}^{+}\) prime knot (or Miller Institute knot). Its chiral opposite \(6_{2}^{-}\) can be found by either flipping the chirality of the entire contact, i.e., \(A^{-e}B^{-o}AB\), or by switching the order, i.e., \(A^{+o}B^{+e}AB\). Applying both operations on the contact yields \(A^{-o}B^{-e}AB\) and results in the exact same prime knot \(6_{2}^{+}\).
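If the `alexander` helper sketched in Section III is available, the stated braid word can be fed to it directly; this is our illustration, and the output should reproduce the Alexander polynomial of Eq. (15) up to a unit \(\pm t^{m}\).

```python
# Sketch: the A^{+e}B^{+o}AB cross contact from its four-strand braid word,
# beta_K = s3 s2^4 s1 s2^{-1} s3^{-1} s2^{-1} s1^{-1} s2^{-1}.
word = [3, 2, 2, 2, 2, 1, -2, -3, -2, -1, -2]
print(alexander(word, 4))            # Eq. (15): 1 - 3t + 3t^2 - 3t^3 + t^4
```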
A similar line of reasoning can be applied to every configuration; when the order of the two s-contacts is swapped, the chirality of the cross contact changes; this amounts to reversing the orientation in which we read the contact. When the chiralities of both components are flipped simultaneously, the total chirality of the cross contact also flips. Therefore, we can construct Table 2 by only considering a small subset of contacts and derive the others by symmetry arguments. The table is symmetric up to a change of the total chirality, so this leaves 10 configurations to consider. However, since flipping the chirality of both contacts flips the total chirality, the contacts \(A^{+e}B^{+e}AB\) and \(A^{-e}B^{-e}AB\) also yield the same knot type, but with opposite chirality. Similarly for \(A^{+o}B^{+o}AB\), \(A^{+e}B^{+o}AB\), and \(A^{+e}B^{-o}AB\). This leaves us with only six independent configurations that need to be checked, reducing the number of computations required.
### Concerted contacts
In analogy with circuit topology for hard contacts, we define a concerted contact to be the contact formed by assuming that a contact site is shared, i.e., the two s-contacts share a single loop. We denote a concerted contact in a similar manner as a cross contact, but with brackets around the shared contact pair, e.g., \((A^{+e}B^{+e})AB\), which is different from the cross contact \(A^{+e}B^{+e}AB\). Since both contacts share a single loop, their chirality must be identical, otherwise the structure would disentangle. If one were to try to create such a knot, it would lead to so-called _slip-knots_, which are isotopic to the unknot. This then leaves us with eight nontrivial options. Moreover, it is also easily seen that concerted contacts consisting of both an even and an odd contact result in the unknot. The only remaining possibilities are then the contacts \((A^{+e}B^{+e})AB\), \((A^{+o}B^{+o})AB\) and their chiral opposites, which we list in Table 3. To simplify notation, we introduce the abbreviation \((A^{+e}B^{+e})AB\to A^{+2e}A\), where the exponent indicates the number of concerted contacts. The closure \(K\) of the \(A^{+2e}A\) concerted
\begin{table}
\begin{tabular}{c c c c c} \hline \hline & \(B^{+e}\) & \(B^{-e}\) & \(B^{+o}\) & \(B^{-o}\) \\ \hline \(A^{+e}\) & \(5_{1}^{+}\) & \(6_{3}\) & \(6_{2}^{+}\) & \(7_{6}^{+}\) \\ \(A^{-e}\) & \(6_{3}\) & \(5_{1}^{-}\) & \(7_{6}^{-}\) & \(6_{2}^{-}\) \\ \(A^{+o}\) & \(6_{2}^{-}\) & \(7_{6}^{+}\) & \(7_{7}^{+}\) & \(8_{12}\) \\ \(A^{-o}\) & \(7_{6}^{-}\) & \(6_{2}^{+}\) & \(8_{12}\) & \(7_{7}^{-}\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: The knots resulting from the closure of cross-contacts with different chiralities. Green and red colours indicate that knots have opposite chirality, while blue knots are amphichiral.
contact can be reshaped into a braid with word \(\beta=\sigma_{2}^{2}\sigma_{3}\sigma_{2}\sigma_{1}\sigma_{2}\sigma_{3}^{-1}\sigma_{2}\sigma_{1}^{-1}\). The associated Alexander and Jones polynomials are
\[\Delta_{K}(t) =2-3t+2t^{2} \tag{16}\] \[J_{K}(t) =t-t^{2}+2t^{3}-t^{4}+t^{5}-t^{6}\,, \tag{17}\]
while for the chiral opposite \(K^{\prime}=\overline{A^{-2e}A}\), the Jones polynomial is \(J_{K^{\prime}}(t)=J_{K}(1/t)\).
## VI Building circuits
Of course, in more complex molecules, more than two s-contacts need to be considered. The modular arrangement of all of these s-contacts constitutes a _circuit_, which can be described by judiciously combining them in any of the aforementioned configurations (\(S,\,P,\,X,\,C\)).
Again, for the \(S\) and \(P\) configurations, this does not present a problem, since the Jones polynomial factorises into simpler polynomials, but for consecutive \(X\) or \(C\) configurations we require a general rule. Let us first consider concerted topologies, since they are somewhat simpler. The closure of the concerted s-contacts (\(A^{+e}B^{+e}C^{+e}\))\(ABC\), or \(A^{+3e}A\) for simplicity, yields a configuration where one end of the chain is looped three times around the large loop. This results in a twist knot with five half-twists and positive chirality. We quickly notice a pattern here; if the concerted contact consists of \(k\in\mathbb{N}\) positive even s-contacts, i.e., \(A^{+ke}A\), the resulting closure will be a twist knot with \(n=2k-1\) half-twists, and the Jones polynomial will consequently be
\[J_{A^{+ke}A}(t)=\frac{t+t^{3}+t^{2k}-t^{3+2k}}{t+1}\,. \tag{18}\]
Flipping the chirality of all the s-contacts then gives us that \(J_{A^{-ke}A}(t)=J_{A^{+ke}A}(t^{-1})\). The odd contacts \(A^{+ko}A\) give a twist knot with an even number of half-twists \(n=2k\), resulting in the Jones polynomial
\[J_{A^{+ko}A}(t)=\frac{t+t^{3}+t^{-2k}-t^{3-2k}}{t+1}\,. \tag{19}\]
What about more exotic combinations of \(C\) contacts? It can easily be shown that in a string of \(C\) contacts, one can look at binary combinations of pairs of contacts. Since pairs that alternate between even and odd always untie, a string of concerted contacts can be reduced to a single twist knot with Jones polynomial given by equation (18) or (19). For example, the configuration \((A^{+e}B^{+3o}C^{+e})ABC\) reduces to the single s-contact \(B^{+o}B\) by first eliminating the \((A^{+e}B^{+o})AB\) contact, and then by eliminating \((B^{+o}C^{+e})BC\).
For cross contacts, the computations are significantly more difficult. We can recognise that combining three identical s-contacts in an \(X\) configuration, e.g., \(A^{+e}B^{+e}C^{+e}ABC\) yields a torus knot \(T_{p,q}\), where \(p\) and \(q\) are coprime integers, with an odd number of crossings. The Jones polynomial of a torus knot is
\[J(t)=t^{(p-1)(q-1)/2}\left(\frac{1-t^{p+1}-t^{q+1}+t^{p+q}}{1-t^{2}}\right)\,. \tag{20}\]
The \(p\) and \(q\) indices indicate the number of crossings and the number of strands in the closed braid representation, respectively. It can be shown graphically that \(q=2\) for all torus knots obtained in the manner described above. We will henceforth also denote the \(k\)-fold cross contact as \(T_{\pm(2k+1),2}\), where the sign corresponds to the global chirality of the contact. As a check, we can confirm that the \(2\)-fold \(A^{+e}B^{+e}AB\) contact can be written as \(T_{5,2}\), which is the \(5_{1}^{+}\) knot, and that, e.g., \(A^{-e}B^{-e}C^{-e}ABC\to T_{-7,2}\), which is the \(7_{1}^{-}\) knot.
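These closed forms are easy to spot-check; the sketch below (ours) confirms that Eq. (18) at \(k=1\) returns the trefoil polynomial of Eq. (8), Eq. (19) at \(k=1\) the figure-eight polynomial of Eq. (10), and Eq. (20) at \((p,q)=(5,2)\) the \(5_{1}^{+}\) entry of Table 2.

```python
# Sketch: spot checks of the twist-knot and torus-knot formulas (18)-(20).
from sympy import symbols, cancel

t = symbols('t')

def J_even(k):      # Eq. (18): closure of A^{+ke}A, 2k-1 half-twists
    return cancel((t + t**3 + t**(2 * k) - t**(3 + 2 * k)) / (t + 1))

def J_odd(k):       # Eq. (19): closure of A^{+ko}A, 2k half-twists
    return cancel((t + t**3 + t**(-2 * k) - t**(3 - 2 * k)) / (t + 1))

def J_torus(p, q):  # Eq. (20); (p-1)(q-1) is even, so // 2 is exact
    num = 1 - t**(p + 1) - t**(q + 1) + t**(p + q)
    return cancel(t**(((p - 1) * (q - 1)) // 2) * num / (1 - t**2))

print(J_even(1))       # t + t**3 - t**4: the right trefoil, Eq. (8)
print(J_odd(1))        # the figure-eight polynomial of Eq. (10)
print(J_torus(5, 2))   # t**2 + t**4 - t**5 + t**6 - t**7: the 5_1^+ knot
```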
Twist and torus knots are of the utmost importance in the study of DNA knots. Of these two knot families, twist knots are more commonly found in nature since their unknotting number \(u\) is always equal to one, while torus knots are characteristically over-represented in experiments with viral DNA due to specific properties of DNA packing [21; 22]. The unknotting number for the \(k\)-fold cross contact equals \(u=(p-1)(q-1)/2=k\), i.e., the minimum number of times the strand must be passed through itself to untie it exactly equals the number of identical s-contacts that constitute the circuit. Hence, we can expect concerted contacts to be ubiquitous in applications involving more complicated circuits.
Since there exists only one torus knot for each crossing number, we can immediately deduce that knots formed from other s-contact combinations must be described by other rules. To the best of our knowledge, it is not known whether such a framework exists, and we hence leave it for future research.
For circuits involving more complex combinations of s-contacts, the modularity of those contacts can be used to derive Jones polynomials. An example is given in Fig. 5, where the circuit \(A^{+e}(B^{+e}C^{+e})BCD^{+e}AD\) is shown; for simplicity we have chosen only the \(+e\) chirality for all contacts. It can be easily seen from the string notation that contacts \(B\) and \(C\) are in a concerted relation and can hence be simplified to \((B^{+e}C^{+e})BC=B^{+2e}\), where the equality sign denotes ambient isotopic equivalence of the corresponding knot closure. The concerted contact itself is now in a parallel relation with the \(A^{+e}A\) contact, which is in turn in a cross relation with the \(D^{+e}D\) contact.
\begin{table}
\begin{tabular}{c c c c} \hline \hline \(A^{+2e}A\) & \(A^{-2e}A\) & \(A^{+2o}A\) & \(A^{-2o}A\) \\ \hline \(5_{2}^{+}\) & \(5_{2}^{-}\) & \(6_{1}^{-}\) & \(6_{1}^{+}\) \\ \hline \end{tabular}
\end{table}
Table 3: The knots resulting from the closure of nontrivial concerted contacts with different chiralities. Green and red colours indicate that knots have opposite chirality.
Since the \(B^{+2e}\) configuration is in parallel with the rest of the circuit, it can be moved out of the string notation, i.e., \(A^{+e}(B^{+e}C^{+e})BCD^{+e}AD=A^{+e}D^{+e}ADB^{+2e}\). The resulting Jones polynomial of the entire circuit is then the product of the Jones polynomials of the cross contact \(A^{+e}D^{+e}AD\) and the concerted contact \(B^{+2e}\), i.e., from Table 4 in the appendix A,
\[J(t) =J_{A^{+e}D^{+e}AD}(t)\times J_{B^{+2e}}(t) \tag{21}\] \[=\left(-t^{7}+t^{6}-t^{5}+t^{4}+t^{2}\right)\] \[\times\left(-t^{6}+t^{5}-t^{4}+2t^{3}-t^{2}+t\right)\,.\]
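For completeness, expanding the product in Eq. (21) is a one-liner (our sketch; the first factor is the \(5_{1}^{+}\) polynomial and the second is Eq. (17)):

```python
# Sketch: Jones polynomial of the circuit A^{+e}(B^{+e}C^{+e})BCD^{+e}AD, Eq. (21).
from sympy import symbols, expand

t = symbols('t')
J_AD = -t**7 + t**6 - t**5 + t**4 + t**2              # cross contact A^{+e}D^{+e}AD
J_B2e = -t**6 + t**5 - t**4 + 2 * t**3 - t**2 + t     # concerted contact, Eq. (17)
print(expand(J_AD * J_B2e))
```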
This approach is quite general; for circuits that involve _distinct_ \(S\), \(P\) configurations, among others, one can separate the different subcircuits. The calculation of the Jones polynomial hence reduces to the individual computation of the subcircuits' polynomials. However, when the \(S\) or \(P\) contacts are in some sense "blocked" by others, e.g., in the circuit \(ACABCB\), they cannot be separated. Contacts \(A\) and \(B\) are connected in series, but they cannot be connected in parallel due to their inability to be pulled along the string. The dragging motion is hindered by contact \(C\), which intersects with contacts \(A\) and \(B\). A circuit can then be defined as a distinct section of a string that consists exclusively of pairs of letters [14]. In essence, a circuit can be separated (put in series) or distinguished from other contacts and circuits. No general theory to compute the Jones polynomial of such inseparable circuits has been found (and is unlikely to exist); we hence leave this for future research.
## VII Conclusions
Circuit topology, a theory that describes intra-chain contacts in folded open chains, was recently generalised to account for chain entanglement [14]. This unique generalisation, specific to circuit topology and not derived from other theories, allows for the construction of various types of knots, including those commonly found in biomolecules [4, 13, 23, 24, 25]. The approach reveals patterns not apparent from traditional knot theoretic approaches [13]. For example, protein knots form a well-defined, distinct group, which naturally appears if expressed in terms of circuit topology units and operations. In this paper, we have studied the advantages of using a circuit topology framework for the engineering of molecular knots, and paid special attention to the chirality of the structures by means of a braid-theoretic framework. Our approach demonstrates the importance of considering chirality in the construction of topological circuits, and shows that the Jones polynomial is an effective invariant for characterising their properties.
The presented approach allows us to set the stage for future studies on (braided) multichain systems that can exhibit more complex intra- and interchain entanglement, and hence can contain links, necessitating the use of invariant polynomials beyond the Alexander polynomial. By expanding the scope of our framework, we can gain deeper insights into the structure and function of biomolecules and other complex materials, paving the way for the development of new technologies and applications in fields such as biophysics, materials science, and nanotechnology.
## Appendix A
We present a list of the Jones polynomials for all s-contacts, as well as their combinations using cross and concerted relations, together with the knot and associated unknotting number they represent when closed.
|
2307.06828 | Non-looseness of boundaries of Legendrian ribbons | Every null-homologous link in an oriented 3-manifold is isotopic to the
boundary of a ribbon of a Legendrian graph for any overtwisted contact
structure. However this is not the case if the boundary is required to be
non-loose. Here, we define the 'Tight Reattachment Property' for a Legendrian
graph and show that it implies the boundary of its ribbon is non-loose. We also
discuss the applicability of this property and examine examples and
constructions of Legendrian graphs with this property. | Kenneth L. Baker, Sinem Onaran | 2023-07-13T15:44:17Z | http://arxiv.org/abs/2307.06828v1 | # Non-looseness of boundaries of Legendrian ribbons
###### Abstract.
Every null-homologous link in an oriented 3-manifold is isotopic to the boundary of a ribbon of a Legendrian graph for any overtwisted contact structure. However this is not the case if the boundary is required to be non-loose. Here, we define the 'Tight Reattachment Property' for a Legendrian graph and show that it implies the boundary of its ribbon is non-loose. We also discuss the applicability of this property and examine examples and constructions of Legendrian graphs with this property.
## 1. Introduction
The notion that a link in \(S^{3}\) is strongly quasipositive, SQP, has been extended to other 3-manifolds via open books by several sets of authors in essentially the same way [1, 1, 19, 21]. In turn, such links correspond to boundaries of ribbons of Legendrian graphs in the contact structure supported by the open book.
In a contact 3-manifold \((M,\xi)\), a graph \(\Lambda\) is _Legendrian_ if it is everywhere tangent to \(\xi\). Its _(Legendrian) ribbon_\(R(\Lambda)\) is an embedded compact surface that is tangent to \(\xi\) along \(\Lambda\), is otherwise transverse to \(\xi\), and retracts to \(\Lambda\) under a flow tangent to \(\xi|_{R(\Lambda)}\). As such \(\partial R(\Lambda)\), the boundary of the ribbon, is a transverse link.
Baader-Ishikawa show that quasipositive surfaces in \(S^{3}\) correspond to ribbons of connected Legendrian graphs in the standard tight contact structure \((S^{3},\xi_{std})\)[1]. Consequently, strongly quasipositive links in \(S^{3}\) are exactly those links that arise as the boundary of the ribbon of a connected Legendrian graph in \((S^{3},\xi_{std})\). (See [14] for an overview of quasipositive surfaces and strongly quasipositive links.)
Generalizing this to other closed manifolds, Hayden showed that a link is SQP with respect to an open book if and only if it is the boundary of a ribbon of a Legendrian graph in the contact structure supported by the open book [19, Theorem 1]. However, while the property of being SQP places significant restrictions on links in tight contact structures, every null-homologous link is the boundary of a Legendrian ribbon in an overtwisted contact structure on a closed 3-manifold [1]. Thus this notion of SQP for overtwisted contact structures is no longer discriminating the way it is for tight contact structures.
What enables this failure of discrimination is that the complement of the Legendrian graph is permitted to be overtwisted. To regain some of the strength of the theory of strongly quasipositive links in tight manifolds, we shift focus to Legendrian graphs and their ribbons with tight complement.
The term 'non-loose' is typically reserved for subsets of overtwisted contact manifolds with tight complement, but we find it convenient to relax this constraint. Throughout this article, let us say a subset of a contact 3-manifold is _non-loose_ if its complement is tight, even if the ambient contact structure is not overtwisted. Also, following [20], we say a subset is _strongly non-loose_ if its complement is tight and has no Giroux torsion. With this terminology, subsets of tight manifolds are necessarily strongly non-loose.
As a starting point, one readily observes that a Legendrian graph \(\Lambda\) and its ribbon \(R(\Lambda)\) have regular neighborhoods with contactomorphic exteriors. Consequently, \(\Lambda\) is loose if and only if \(R(\Lambda)\) is too, and either further implies that \(\partial R(\Lambda)\) is loose as well. Therefore we have:
**Lemma 1.1**.: _For a Legendrian graph \(\Lambda\),_
\[\partial R(\Lambda)\text{ is non-loose }\Longrightarrow\text{ $\Lambda$ is non-loose }\Longleftrightarrow\text{ $R(\Lambda)$ is non-loose}.\qed\]
However, the first implication doesn't immediately go the other way. To what extent do we have the missing implication?
**Question 1.2**.: _If \(\Lambda\) is a non-loose Legendrian graph, is the transverse link \(\partial R(\Lambda)\) also non-loose?_
As evidence for the possibility that there may exist an example of a non-loose Legendrian graph \(\Lambda\) with \(\partial R(\Lambda)\) loose, we observe that there are non-loose Legendrian _knots_\(\Lambda\) with positive transverse push-offs \(T_{+}(\Lambda)\) that are loose. However in this situation \(\partial R(\Lambda)\) is the double transverse push-off \(T_{+}(\Lambda)\cup T_{-}(\Lambda)\) which is non-loose in all the cases we know. Basic examples of this behavior come from the non-loose Legendrian unknots in an overtwisted \(S^{3}\), see [1] (also presented in Theorem 3.2) and the discussion in [10]. Similar examples can be found for other connected Legendrian graphs \(\Lambda\) in which \(\partial R(\Lambda)\) is a transverse link of more than one component and some proper sublink is loose.
If the Legendrian graph \(\Lambda\) is more than just non-loose but actually has universally tight exterior, then there is an affirmative answer to Question 1.2. It follows from Convex Decomposition Theory and Theorem 2.7 of [13] that the Legendrian approximations of \(\partial R(\Lambda)\) all have universally tight exterior. Hence \(\partial R(\Lambda)\) is non-loose.
Hedden and Tovstopyat-Nelip give another sort of answer, showing that every null-homologous link in a closed oriented \(3\)-manifold is the non-loose boundary of a ribbon for some contact structure.
**Theorem 1.3** ([13]).: _For any null-homologous link \(L\) in a closed oriented \(3\)-manifold \(M\), there is some contact structure \(\xi\) on \(M\) and Legendrian graph \(\Lambda\) so that \(\partial R(\Lambda)\) is non-loose and isotopic to \(L\)._
Sketch of proof.: Given a minimal genus Seifert surface \(F\) for \(L\), the complementary sutured manifold \((M_{F},\gamma_{F})\) is taut and therefore admits a sutured manifold hierarchy [12]. This hierarchy can be turned into a convex decomposition hierarchy [13] in which the convex decomposing surfaces all have non-nested boundary parallel dividing curves. Then tightness of the resulting product pieces can be pulled back to a tight contact structure on \(M_{F}\) with convex boundary corresponding to the sutured structure. Finally the positive and negative regions of the boundary, \(R_{+}(\gamma_{F})\) and \(R_{-}(\gamma_{F})\), may be viewed as ribbons of Legendrian graphs \(\Lambda_{+}\) and \(\Lambda_{-}\) that can be reglued to a Legendrian graph \(\Lambda\) whose ribbon \(R(\Lambda)\) is isotopic to \(F\). The tightness also pulls back through this gluing giving a tight contact structure in the complement of \(\partial R(\Lambda)\). As \(F\) is isotopic to \(R(\Lambda)\), \(L\) is isotopic to \(\partial R(\Lambda)\).
There are potentially many choices in the above sketch, leading to the link \(L\) being isotopic to a non-loose link \(\partial R(\Lambda)\) for some Legendrian graph \(\Lambda\) in many different contact structures \(\xi\). However, not every Legendrian ribbon with tight complement arises from this construction. Hedden and Tovstopyat-Nelip further show that the transverse link \(\partial R(\Lambda)\) constructed in their proof always has non-zero \(\hat{t}\) invariant [13], extending a previous result for the transverse bindings of open books [14]. See [11] for the definition of the transverse invariant \(\hat{t}\).
**Remark 1.4**.: Non-loose transverse links \(\partial R(\Lambda)\) with trivial \(\hat{t}\) invariant do exist. Examples may be constructed by taking connected sums with tight contact manifolds that have trivial contact invariant.
More specifically, let \(\Lambda_{1}\) be a Legendrian graph in \((M_{1},\xi_{1})\) with \(\partial R(\Lambda_{1})\) non-loose. Let \((M_{0},\xi_{0})\) be a tight contact manifold with trivial contact invariant. Then, letting \(\Lambda\) be the image of \(\Lambda_{1}\) in the connected sum \((M_{1}\#M_{0},\xi_{1}\#\xi_{0})\), the transverse link \(\partial R(\Lambda)\) is non-loose since its exterior is the connected sum of tight manifolds (e.g. Lemma 2.1). However \(\hat{t}(\partial R(\Lambda))=0\) due to the behavior of these contact invariants under connected sum.
### The Tight Reattachment Property
Our main theorem gives an affirmative answer to Question 1.2 in a different setting, when the Legendrian graph satisfies a condition that we call "the Tight Reattachment Property". To state it, we need a few definitions.
In a contact \(3\)-manifold \((M,\xi)\), let \(\Lambda\) be a Legendrian graph with regular closed neighborhood \(N(\Lambda)\). That is, \(N(\Lambda)\) is a thickening of a ribbon \(R(\Lambda)\) to a handlebody with convex boundary whose dividing curves are \(\partial R(\Lambda)\). Then, letting \(\xi_{\Lambda}\) be the restriction of \(\xi\) to \(M_{\Lambda}=M\setminus\operatorname{int}N(\Lambda)\), the contact manifold \((M_{\Lambda},\xi_{\Lambda})\) is the _exterior_ of \(\Lambda\) and has convex boundary \(\partial N(\Lambda)\).
We may reglue \(\partial_{-}N(\Lambda)\) to \(\partial_{+}N(\Lambda)\) by any orientation preserving diffeomorphism \(\psi\) that is the identity in a collar of the dividing curves \(\partial R(\Lambda)\) to make a new contact \(3\)-manifold \((M_{\Lambda,\psi},\xi_{\Lambda,\psi})\). Say \((M_{\Lambda,\psi},\xi_{\Lambda,\psi})\) is obtained from \((M,\xi)\) by a _reattachment along \(\Lambda\)_. If some reattachment along \(\Lambda\) is tight, then we say \(\Lambda\) has the _Tight Reattachment Property_, TRP for short. Observe that a Legendrian graph with the TRP necessarily has tight complement as its exterior embeds in a tight manifold.
As these reattachments are really about the ribbon, we may also speak of a ribbon having the TRP. Indeed, the ribbon \(R(\Lambda)\) has the same exterior as the Legendrian graph \(\Lambda\). To that point, Lemma 2.3 shows that any spine of a ribbon can be Legendrian realized to have an isotopic ribbon. Furthermore, one may view the exterior \((M_{\Lambda},\xi_{\Lambda})\) of \(\Lambda\) as being supported by a partial open book [10]. In the language of [11], the TRP asks that such a partial open book extends to an open book supporting a tight contact structure.
With the definition of TRP, we may now state our main theorem.
**Theorem 2.6**.: _Let \(\Lambda\) be a connected Legendrian graph in a closed contact \(3\)-manifold. If \(\Lambda\) has the Tight Reattachment Property, then the transverse link \(\partial R(\Lambda)\) is non-loose._
This theorem developed from and generalizes an initial approach for determining non-looseness of boundaries of Legendrian ribbons using Heegaard Floer contact invariants, which we present in Section 2.5.
To show this property applies somewhat broadly, we give two constructions of Legendrian graphs with the TRP. Observe that one may express Murasugi sums of ribbons as ribbons of 'Legendrian fusions' of Legendrian graphs, see Section 2.3.
**Theorem 2.12**.: _Suppose \(\Lambda\) is the Legendrian fusion \(\Lambda_{+}*\Lambda_{-}\) of two Legendrian graphs. If each of \(\Lambda_{+}\) and \(\Lambda_{-}\) has the Tight Reattachment Property, then \(\Lambda\) has the Tight Reattachment Property._
Restated for ribbons, together these give the following:
**Corollary 1.5**.: _Suppose \(R_{i}\) is a connected Legendrian ribbon with the Tight Reattachment Property in \((M_{i},\xi_{i})\) for each \(i=1,2\). Then any Murasugi sum \(R_{1}*R_{2}\) may be regarded as a connected Legendrian ribbon \(R\) with the Tight Reattachment Property so that the transverse link \(\partial R\) is non-loose. _
We further observe that the TRP is inherited from subgraphs.
**Theorem 2.13**.: _Let \(\Lambda^{\prime}\) be a connected subgraph of a connected Legendrian graph \(\Lambda\) in a closed contact \(3\)-manifold. If \(\Lambda^{\prime}\) has the TRP, then \(\Lambda\) does too._
In general there is no reason that the non-looseness of \(\partial R(\Lambda^{\prime})\) should confer the non-looseness of \(\partial R(\Lambda)\). However, with Theorem 2.6, Theorem 2.13 shows the non-looseness of \(\partial R(\Lambda)\) follows from this stronger sense of non-looseness of \(\partial R(\Lambda^{\prime})\).
If a Legendrian graph \(\Lambda\) is actually a Legendrian knot, then \(\Lambda\) has the TRP if and only if some sequence of Legendrian surgeries (either positive or negative) on \(\Lambda\) yields a tight manifold. Thus we immediately have the following corollary.
**Corollary 1.6**.: _Let \(L\) be a Legendrian knot in a closed contact \(3\)-manifold. If some sequence of \((\pm 1)\) Legendrian surgeries on \(L\) yields a tight contact \(3\)-manifold, then the transverse link \(\partial R(L)\) is non-loose. _
**Remark 1.7**.: Not every non-loose Legendrian knot has the TRP. Since any contact surgery on a non-loose Legendrian knot with boundary parallel full Giroux torsion will be overtwisted, such knots cannot have the TRP. Furthermore there are even examples of _strongly_ non-loose Legendrian knots in closed contact manifolds for which any sequence of \((\pm 1)\) Legendrian surgeries is overtwisted; therefore such knots do not have the TRP.
One may observe that the Legendrian knots of [14, Theorem 1] actually are such examples of strongly non-loose Legendrian knots without the TRP. Moreover, for one orientation, the transverse pushoff is non-loose (but with boundary parallel full Giroux torsion). Consequently, the boundary of a ribbon of such Legendrian knots is non-loose, but that fact doesn't follow from our Theorem 2.6.
More generally, one should be able to create such examples by, say, starting with a transverse knot in a tight contact structure, inserting full Giroux torsion along it, and then taking a parallel Legendrian curve on the Giroux torsion torus where the contact planes have rotated by \(\pi/2\). Such a Legendrian knot cannot have the TRP and we expect it to also be strongly non-loose.
**Question 1.8**.: _Let \(\Lambda\) be a non-loose connected Legendrian graph. If \(\partial R(\Lambda)\) is non-loose but \(\Lambda\) does not have the TRP, then is \(\Lambda\) a knot?_
**Question 1.9**.: _A Legendrian graph with the TRP is necessarily non-loose. Since some sequence of stabilizations will loosen a Legendrian graph in an overtwisted contact structure (cf. [1]), the TRP cannot be preserved by stabilization. However, is the TRP preserved by Legendrian destabilization?_
### The Bennequin bound
The Bennequin bound was originally stated by Bennequin for transverse knots in \((S^{3},\xi_{std})\) [1] and extended by Eliashberg to null-homologous transverse links in any tight \(3\)-manifold [10]. Its extension to non-loose null-homologous transverse links in any contact \(3\)-manifold is attributed to Świątkowski; see [1, Proposition 1.3] and [13, Theorem 1.5].
**Proposition 1.10** (The Bennequin bound).: _Let \(T\) be a non-loose, null-homologous transverse link. Then for any Seifert surface \(\Sigma\) we have \(sl(T,[\Sigma])\leq-\chi(\Sigma)\). _
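As a first illustration (a classical case), consider a transverse unknot \(T\) in \((S^{3},\xi_{std})\) with Seifert disk \(D\). Since \(\chi(D)=1\), the bound reads
\[sl(T,[D])\leq-\chi(D)=-1,\]
with equality realized by the standard transverse unknot of self-linking number \(-1\).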
As observed in [1, Lemma 2.2], one readily sees that for a Legendrian graph \(\Lambda\) we have
\[sl(\partial R(\Lambda),[R(\Lambda)])=-\chi(R(\Lambda)).\]
As such, for null-homologous transverse links in tight manifolds, it has been conjectured by many that the Bennequin bound is always realized by such a ribbon, see for example [11, Conjecture 4.1] and the conjectures in [13].
**Conjecture 1.11**.: _Suppose a transverse null-homologous link \(T\) in a tight \(3\)-manifold \((M,\xi)\) realizes the Bennequin bound. Then there is a Legendrian graph \(\Lambda\) so that \(T=\partial R(\Lambda)\)._
In general, the presence of Giroux torsion in a link exterior can prevent this from holding true for non-loose links. However, perhaps the conjecture continues to hold for strongly non-loose links.
**Conjecture 1.12**.: _Suppose a transverse null-homologous link \(T\) in a contact \(3\)-manifold \((M,\xi)\) is strongly non-loose and realizes the Bennequin bound. Then there is a Legendrian graph \(\Lambda\) so that \(T=\partial R(\Lambda)\)._
**Lemma 1.13**.: _If the transverse link \(\partial R(\Lambda)\) is non-loose then \(R(\Lambda)\) minimizes genus among Seifert surfaces for \(\partial R(\Lambda)\) in the same relative homology class._
Proof.: Since \(sl(\partial R(\Lambda),[R(\Lambda)])=-\chi(R(\Lambda))\) as observed above, any other Seifert surface \(\Sigma\) for \(\partial R(\Lambda)\) that is homologous to \(R(\Lambda)\) would have to satisfy \(sl(\partial R(\Lambda),[R(\Lambda)])=sl(\partial R(\Lambda),[\Sigma])\leq-\chi(\Sigma)\) by the Bennequin bound of Proposition 1.10. Hence \(-\chi(R(\Lambda))\leq-\chi(\Sigma)\), so \(R(\Lambda)\) minimizes genus.
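In terms of genus: writing \(\chi=2-2g-b\) for a Seifert surface with \(b\) boundary components, and noting that \(\Sigma\) and \(R(\Lambda)\) share the same boundary (hence the same \(b\)), the inequality \(\chi(\Sigma)\leq\chi(R(\Lambda))\) unwinds as
\[2-2g(\Sigma)-b\leq 2-2g(R(\Lambda))-b\quad\Longrightarrow\quad g(\Sigma)\geq g(R(\Lambda)).\]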
**Question 1.14**.: _If a Legendrian graph \(\Lambda\) is non-loose but \(\partial R(\Lambda)\) is loose, must \(R(\Lambda)\) still minimize genus in its homology class?_
**Remark 1.15**.: Note that if \(R(\Lambda)\) were compressible, then one may find an overtwisted disk in a convex product neighborhood over \(R(\Lambda)\). Furthermore, if \(\partial R(\Lambda)\) is loose while \(R(\Lambda)\) is not minimal genus, then the proof of Proposition 1.10 (see e.g. the proof of [1, Theorem 4.15]) adapts to show that any minimal genus Seifert surface for \(\partial R(\Lambda)\) is loose.
## 2. Legendrian graphs and non-looseness
For the fundamentals of contact topology including the theory of Legendrian and transverse links as well as convex surface theory, we refer the reader to [1]. For the basics of the theory of non-loose knots in overtwisted contact manifolds, we refer the reader to [1].
Let us also recall the following elementary lemma.
**Lemma 2.1**.: _Let \((M_{1},\xi_{1})\) and \((M_{2},\xi_{2})\) be contact \(3\)-manifolds. If the contact connected sum \((M_{1}\#M_{2},\xi_{1}\#\xi_{2})\) is tight, then both \((M_{1},\xi_{1})\) and \((M_{2},\xi_{2})\) are tight._
Proof.: Say \((M_{1},\xi_{1})\) is not tight. Then it contains an overtwisted disk. Moreover, it contains an overtwisted disk in the complement of any Darboux ball. Thus the overtwisted disk persists in the connected sum with \((M_{2},\xi_{2})\) making \((M_{1}\#M_{2},\xi_{1}\#\xi_{2})\) overtwisted.
### Surfaces, Ribbons, and Legendrian Realization
Here we present versions of Legendrian Realization for graphs, cf. [1, Theorem 2.1]. For this, we say a graph \(G\) embedded in a connected surface \(S\) with boundary is _non-isolating_ if every component of \(S-G\) contains a component of \(\partial S\). Observe that this implies that \(G\) is a subgraph of a _spine_ \(\bar{G}\) of \(S\), a graph onto which \(S\) deformation retracts. When [1] defines 'non-isolating' for graphs in convex surfaces, they require that the graph transversally intersect the dividing curves and that any univalent vertices (leaves) lie on the dividing curves. However, this constraint on univalent leaves is unnecessary.
Recall that an oriented _convex_ surface \(\Sigma\) in a contact \(3\)-manifold \((M,\xi)\) has a dividing set \(\Gamma\), an (isotopy class of) embedded multicurve that chops \(\Sigma\) into surfaces \(\Sigma_{+}\) and \(\Sigma_{-}\) so that all tangencies with \(\xi\) of sign \(\pm\) are in \(\Sigma_{\pm}\). Furthermore there is a vector field \(v\) transverse to \(\Sigma\) whose flow preserves \(\xi\).
Any singular foliation \(\mathcal{F}\) on \(\Sigma\) is also said to be _divided by_ the multicurve \(\Gamma\) if there is an \(I\)-invariant contact structure \(\xi^{\prime}\) on \(\Sigma\times I\) for which \(\mathcal{F}\) is the characteristic foliation \(\xi^{\prime}|_{\Sigma\times\{0\}}\). By Giroux Flexibility [11] (see also [1, Theorem 4.8.11] or [1, Theorem 3.4]), if \(\mathcal{F}\) is divided by \(\Gamma\), then there is an isotopy \(\phi_{t}\) for \(t\in[0,1]\) of \(\Sigma\) in a neighborhood of \(\Sigma\) that fixes \(\Gamma\) so that \(\phi_{0}=\mathrm{id}\), \(\xi|_{\phi_{1}(\Sigma)}=\mathcal{F}\), and \(\phi_{t}(\Sigma)\) is transverse to \(v\) for all \(t\).
**Lemma 2.2**.: _Let \(\Sigma\) be a closed convex surface with dividing set \(\Gamma\). Then for any choice of spines \(G\) of the components of \(\Sigma\backslash\Gamma\), \(\Sigma\) is isotopic rel-\(\Gamma\) through convex surfaces to a surface \(\Sigma^{\prime}\) so that the union of the spines \(G\) is now a Legendrian graph \(\Lambda^{\prime}\) and each component \(\Sigma^{\prime}_{0}\) of \(\Sigma^{\prime}\backslash\Gamma\) is a ribbon of the Legendrian graph \(\Lambda^{\prime}_{0}=\Lambda^{\prime}\cap\Sigma^{\prime}_{0}\)._
Proof.: This is basically just an application to the spines of \(\Sigma\backslash\Gamma\) of the Legendrian Realization for graphs [1, Theorem 2.1] (a straightforward extension of Legendrian realization for curves [1, Theorem 3.7] which itself is a consequence of Giroux Flexibility). The only thing to observe is that a foliation indeed exists on \(\Sigma\) realizable as a characteristic foliation \(\mathcal{F}\) that has singular set equal to \(G\), where the flow lines are the union of spanning arcs of the annuli of \(\Sigma\backslash G\) flowing transverse to \(\Gamma\) from \(G\cap\Sigma_{+}\) to \(G\cap\Sigma_{-}\). Having isotoped \(\Sigma\) rel-\(\Gamma\) to \(\Sigma^{\prime}\) so that \(\xi|_{\Sigma^{\prime}}=\mathcal{F}\), \(G\) is realized as the Legendrian graph \(\Lambda^{\prime}\) and each component of \(\Sigma^{\prime}\backslash\Gamma\) is a ribbon of its component of \(\Lambda^{\prime}\).
**Lemma 2.3**.: _Let \(R\) be the ribbon of a Legendrian graph. Then any spine of \(R\) may be realized as a Legendrian graph \(\Lambda\) so that \(R(\Lambda)\) is isotopic to \(R\) rel-\(\partial\)._
Proof.: Suppose \(R\) is the ribbon \(R(\Lambda_{0})\). Then \(S=\partial N(\Lambda_{0})\) is a convex surface divided by \(\partial R\) and \(R\) is isotopic to \(S_{+}\) rel-\(\partial R\). Given a spine of \(R\), let \(G\) be its image in \(S_{+}\) under the isotopy. The isotopy of Lemma 2.2 takes \(S\) rel-\(\partial R\) to a convex surface \(S^{\prime}\) in which \(G\) is now a Legendrian graph \(\Lambda\) so that \(S^{\prime}_{+}\) is the ribbon \(R(\Lambda)\).
**Lemma 2.4**.: _Given an open book supporting the contact \(3\)-manifold \((M,\xi)\) and containing a non-isolating connected graph \(G\) in the interior of a page, the open book may be isotoped so that it supports \((M,\xi)\) with the graph \(G\) now Legendrian in its page._
_Conversely, for any connected Legendrian graph \(\Lambda\) in a closed contact \(3\)-manifold \((M,\xi)\), there is a supporting open book that contains \(\Lambda\) as a non-isolating graph in a page._
Etnyre also discusses the Legendrian realization of a spine of a page of an open book in [1, Remark 30] and thereabouts.
Proof.: By [1, Lemma 2.5], using Legendrian Realization for graphs [1, Theorem 2.1], if \(G\) is a non-isolating graph in a page \(S\) of an open book supporting a contact structure \((M,\xi)\), then \(S\) may be isotoped to make \(G\) Legendrian. In particular \(G\) extends to a spine \(\bar{G}\) of \(S\), and \(S\) may be isotoped to make the spine Legendrian so that \(S\) is a ribbon, cf. Lemma 2.3. Indeed, as the above isotopy is induced from the Legendrian realization of this spine as a graph in the closed convex surface made from two pages (where the dividing set corresponds to the binding \(\partial S\)), the entire open book may be isotoped so that it supports \((M,\xi)\) while the spine \(\bar{G}\) is Legendrian and the page \(S\) is a ribbon. Since \(G\) is now a Legendrian subgraph of the spine \(\bar{G}\), its Legendrian framing is the framing by \(S\). Moreover there is a regular neighborhood of \(G\) in the interior of \(S\) that is nearly a ribbon of \(G\), only needing slight modification where it encounters \(\bar{G}-G\). (A slight isotopy of the interior of \(S\) would allow for the ribbon of \(G\) to be contained in \(S\), though at the expense of \(S\) remaining a ribbon of \(\bar{G}\).)
On the other hand, given a connected Legendrian graph \(\Lambda\) in a closed contact \(3\)-manifold \((M,\xi)\), its exterior \((M_{\Lambda},\xi_{\Lambda})\) is a contact \(3\)-manifold with convex boundary. As such, there is a partial open book supporting \((M_{\Lambda},\xi_{\Lambda})\). Yet since the exterior \((M_{\Lambda},\xi_{\Lambda})\) may also be viewed as the exterior of a ribbon \(R(\Lambda)\), the partial open book extends to an open book supporting \((M,\xi)\) that contains \(\Lambda\) and \(R(\Lambda)\) in a page. Furthermore, by construction of the partial open books, \(\Lambda\) is necessarily non-isolating in its page.
**Remark 2.5**.: By [1, Theorem 1.8], for any Seifert surface \(S\) of a link in a \(3\)-manifold \(Y\), there is a contact structure on \(Y\) so that \(S\) is the ribbon of some Legendrian graph. However the Legendrian graph may have overtwisted complement.
### Proof of main theorem
**Theorem 2.6**.: _Let \(\Lambda\) be a connected Legendrian graph in a closed contact \(3\)-manifold. If \(\Lambda\) has the Tight Reattachment Property, then the transverse link \(\partial R(\Lambda)\) is non-loose._
Proof.: Say \(\Lambda\) is a connected Legendrian graph in \((M,\xi)\). By Lemma 2.4, there is an open book \((S,\phi)\) supporting \((M,\xi)\) that contains \(\Lambda\) as a non-isolating Legendrian graph in a page. In particular, the exterior \((M_{\Lambda},\Gamma_{\Lambda},\xi_{\Lambda})\) of \(\Lambda\) is supported by a partial open book \((S,P,\phi_{P})\) that extends to an open book \((S,\phi)\) for \((M,\xi)\). Here, \(P\) is a subsurface of \(S\) and \(\phi_{P}\colon P\to S\) is a homeomorphism to its image that is the identity on \(\partial P\cap\partial S\) and extends across \(S\backslash P\) to the homeomorphism \(\phi\colon S\to S\).
By the Tight Reattachment Property there is a map \(\psi\colon S\to S\) with support in \(S\backslash P\) so that the open book \((S,\phi\psi)\) supports a tight contact structure. Setting \(h=\phi\psi\) and \(g=\psi^{-1}\) so that \(hg=\phi\), an application of Theorem 2.7 below then further shows that the transverse link \(\partial R(\Lambda)\) is also non-loose.
**Theorem 2.7**.: _Let \(\Lambda\) be a connected Legendrian graph in \((M_{S,hg},\xi_{S,hg})\) that is non-isolating in a page of the supporting open book \((S,hg)\). Suppose \((M_{S,h},\xi_{S,h})\) is tight and the support of \(g\) is contained in a regular neighborhood of \(\Lambda\) in \(S\). Then the transverse link \(\partial R(\Lambda)\) has tight complement._
Proof.: First let \(F\) be a regular neighborhood of \(\Lambda\) in \(S\) that contains the support of \(g\). Since \(\Lambda\) is connected, \(F\) is connected. Then observe that the binding of the open book \((F,g|_{F})\) supporting \((M_{F,g|_{F}},\xi_{F,g|_{F}})\) is a transverse link with tight complement (see proof of [1, Lemma 3.1]). Furthermore, the supported contact structure \(\xi_{F,g|_{F}}\) may be taken so that any chosen spine of any page is a Legendrian graph for which the page is a ribbon. As such, we may regard the page \(F\times\{1/2\}\) in \((M_{F,g|_{F}},\xi_{F,g|_{F}})\) as the ribbon of a Legendrian graph \(\Lambda_{F}\). Letting \(R(\Lambda_{F})\) be a thinner ribbon contained in the interior of the page \(F\times\{1/2\}\), the transverse link \(\partial R(\Lambda_{F})\) is transversally isotopic to the binding of \((F,g|_{F})\). Thus \(\partial R(\Lambda_{F})\) is a transverse link with tight complement in the interior of the page \(F\times\{1/2\}\).
Since \(g|_{S\backslash F}\) is trivial and \(F\) is non-isolating, the extension of \((F,g|_{F})\) to \((S,g)\) may be viewed as a Murasugi sum of an open book \((S^{\prime},id)\) with trivial monodromy upon \((F,g|_{F})\). Moreover this Murasugi sum may be done within a neighborhood of the page \(F\times\{0\}\) so that it is disjoint from \(\Lambda_{F}\) and the copy \(R(\Lambda_{F})\) of \(F\) that it bounds in \(F\times\{1/2\}\). So after this Murasugi sum, we have a natural inclusion \(F\times\{1/2\}\hookrightarrow S\times\{1/2\}\) giving an inclusion of the Legendrian graph \(\Lambda_{F}\), its ribbon \(R(\Lambda_{F})\), and transverse boundary \(\partial R(\Lambda_{F})\) into the page \(S\times\{1/2\}\) of the supported contact manifold \((M_{S,g},\xi_{S,g})\). We denote their images as \(\Lambda_{g}\), \(R(\Lambda_{g})\), and \(\partial R(\Lambda_{g})\). The complement of \(\partial R(\Lambda_{g})\) in \((M_{S,g},\xi_{S,g})\) is then the connected sum of \((M_{S^{\prime},id},\xi_{S^{\prime},id})\) and the tight complement of the transverse link \(\partial R(\Lambda_{F})\) in \((M_{F,g|_{F}},\xi_{F,g|_{F}})\). Since \((M_{S^{\prime},id},\xi_{S^{\prime},id})\) is a connected sum of several copies of \((S^{1}\times S^{2},\xi_{std})\), it is tight. Since the two summands are tight, the complement of \(\partial R(\Lambda_{g})\) is tight too by Lemma 2.1.
Now consider the contact connected sum of \((M_{S,h},\xi_{S,h})\) with \((M_{S,g},\xi_{S,g})\) along a neighborhood of a point in the interior of the page \(S\times\{0\}\) of each open book. Since \(\Lambda_{g}\) and \(R(\Lambda_{g})\) are contained in the page \(S\times\{1/2\}\) of its open book, they are disjoint from the summing sphere. So let \(\Lambda_{h\#g}\) be the Legendrian graph that is the image of \(\Lambda_{g}\) in this sum and let \(R(\Lambda_{h\#g})\) be the corresponding ribbon. Since \((M_{S,h},\xi_{S,h})\) is tight by hypothesis and the complement of \(\partial R(\Lambda_{g})\) is tight, we now have that the transverse link \(\partial R(\Lambda_{h\#g})\) has tight complement by Lemma 2.1 as well.
Finally, Baldwin [12] exhibited a Legendrian link \(\mathbb{L}\) in \((M_{S,hg},\xi_{S,hg})\) upon which \((+1)\) contact surgery yields the connected sum \((M_{S,h},\xi_{S,h})\#(M_{S,g},\xi_{S,g})\). In fact, the surgery dual link \(\mathbb{L}^{*}\) may be regarded as a Legendrian link in (a neighborhood of) the connected sum of the two pages \(S\times\{0\}\) from each summand so that \((-1)\) contact surgery on \(\mathbb{L}^{*}\) yields \((M_{S,hg},\xi_{S,hg})\). In particular, for \(t\in(\epsilon,1-\epsilon)\) with \(0<\epsilon<1/2\), the pages \(S\times\{t\}\) of \((M_{S,h},\xi_{S,h})\) are identified with the pages \(S\times\{t/2\}\) of \((M_{S,hg},\xi_{S,hg})\), and the pages \(S\times\{t\}\) of \((M_{S,g},\xi_{S,g})\) are identified with the pages \(S\times\{t/2+1/2\}\) of \((M_{S,hg},\xi_{S,hg})\). Thus after the \((-1)\) contact surgery on \(\mathbb{L}^{*}\), the Legendrian graph \(\Lambda_{h\#g}\) and ribbon \(R(\Lambda_{h\#g})\) become the Legendrian graph \(\Lambda_{hg}\) and ribbon \(R(\Lambda_{hg})\) in the page \(S\times\{3/4\}\). As the complement of \(\partial R(\Lambda_{hg})\) is obtained by \((-1)\) contact surgery on the link \(\mathbb{L}^{*}\) in the tight complement of \(\partial R(\Lambda_{h\#g})\), we would like to say that this implies it is tight as well.
While Wand shows that \((-1)\) contact surgery on a closed manifold preserves tightness [15], the complement of \(\partial R(\Lambda_{h\#g})\) is not closed. Nevertheless, we may adapt the above construction so that the complement of \(\partial R(\Lambda_{h\#g})\) is embedded in a closed tight manifold. Then by [15], the \((-1)\) contact
surgery on \(\mathbb{L}^{*}\) now in this closed manifold will yield a closed tight manifold in which the complement of \(\partial R(\Lambda_{hg})\) is embedded.
To do this, we first observe that, from the argument in the 2nd paragraph of the proof of [1, Theorem 1.6], the contact complement of our initial transverse link as the binding \(B\) of an open book \((F,g|_{F})\) embeds in a universally tight contact manifold as a component of the complement of pre-Lagrangian tori \(T\). Let \(N^{\prime}\) be the union of the complement of \(B\) and these tori \(T\), the 'closed complement' of \(B\). Hence \(N^{\prime}\) is universally tight and contains the complement of \(B\) as its interior.
Let \(N^{\prime\prime}\) be a second copy of \(N^{\prime}\) (or other similarly constructed universally tight manifold with incompressible boundary) and glue it to the original \(N^{\prime}\) so that the contact structures match up along collars of \(\partial N^{\prime}\) and \(\partial N^{\prime\prime}\). Colin shows that this resulting manifold is also universally tight [15]. For short, we may regard this as replacing a neighborhood of \(B\) by \(N^{\prime\prime}\). Carry this replacement through the above construction to where we eventually have a neighborhood of \(\partial R(\Lambda_{h\#g})\) replaced by \(N^{\prime\prime}\). The construction now gives the exterior of \(\partial R(\Lambda_{h\#g})\) embedded in a tight manifold as desired. Thus we may apply [14] to obtain the exterior of \(\partial R(\Lambda_{hg})\) embedded in a tight manifold. Hence the transverse link \(\partial R(\Lambda_{hg})\) is non-loose.
**Remark 2.8**.: An alternative approach would be to use the (Stein) cobordism from \((M_{S,h},\xi_{S,h})\sqcup(M_{S,g},\xi_{S,g})\) to \((M_{S,hg},\xi_{S,hg})\) of [1, Section 8.2] instead of [1]. However, the exposition would require a bit more clarity on the Hopf deplumbings involved.
### Fusions of Legendrian graphs and Murasugi sums
Given the standard contact structure \(\xi=\ker(dz-y\,dx)\) on \(\mathbb{R}^{3}\), let \(\xi^{n}\) be the contact structure on \(\mathbb{R}^{3}\) induced from the \(n\)-fold cyclic branched cover along the \(z\)-axis for each positive integer \(n\). Observe that \((\mathbb{R}^{3},\xi^{n})\) is in fact contactomorphic to \((\mathbb{R}^{3},\xi)\). Let \(\lambda\) be the non-negative \(x\)-axis and let \(\lambda^{n}\) be its lift to \((\mathbb{R}^{3},\xi^{n})\). Any valence \(n\) vertex of a Legendrian graph has a neighborhood contactomorphic to \((\mathbb{R}^{3},\xi^{n},\lambda^{n})\).
Next, let \(\lambda_{+}\) be a Legendrian arc starting at the origin whose front projection continues along the positive \(x\)-axis to \(x=1\) and thereafter has positive slope. Similarly, let \(\lambda_{-}\) be a Legendrian arc starting at the origin whose front projection continues along the negative \(x\)-axis to \(x=-1\) and thereafter has negative slope. Note that \(\lambda_{+}\) is Legendrian isotopic to \(\lambda\) rel-\(\partial\lambda\) and we may choose \(\lambda_{-}\) to be a reflection of \(\lambda_{+}\) through the origin. Define \(\lambda_{+}^{n}\) and \(\lambda_{-}^{n}\) similarly to \(\lambda^{n}\) via cyclic branched covers along the \(z\)-axis. Hence any valence \(n\) vertex of a Legendrian graph has a neighborhood contactomorphic to each \((\mathbb{R}^{3},\xi^{n},\lambda_{+}^{n})\) and \((\mathbb{R}^{3},\xi^{n},\lambda_{-}^{n})\).
**Definition 2.9** (Fusions of Legendrian graphs).: Suppose for each \(i=+,-\) that \(v_{i}\) is a valence \(n\) vertex of a Legendrian graph \(\Lambda_{i}\) in \((M_{i},\xi_{i})\). Choose a contactomorphism \(\phi_{i}\) of a neighborhood \(B_{i}\) of \(\Lambda_{i}\) at \(v_{i}\) with \((\mathbb{R}^{3},\xi^{n},\lambda_{i}^{n})\). The _Legendrian fusion (of order \(n\))_ \(\Lambda_{+}*\Lambda_{-}\) of \(\Lambda_{+}\) and \(\Lambda_{-}\) along vertices \(v_{+}\) and \(v_{-}\) with respect to these chosen contactomorphisms \(\phi_{+}\) and \(\phi_{-}\) is defined as follows. Form the connected sum \((M_{+}\#M_{-},\xi_{+}\#\xi_{-})\) by identifying the complements of \(\phi_{+}^{-1}(\mathbb{R}^{3}|_{z<0},\xi^{n},\lambda_{+}^{n})\) and \(\phi_{-}^{-1}(\mathbb{R}^{3}|_{z>0},\xi^{n},\lambda_{-}^{n})\) along their boundaries so that \(\phi_{+}^{-1}(\mathbb{R}^{3}|_{z\geq 0},\xi^{n},\lambda_{+}^{n})\cup\phi_{-}^{-1}(\mathbb{R}^{3}|_{z<0},\xi^{n},\lambda_{-}^{n})\) is contactomorphic to \((\mathbb{R}^{3},\xi^{n},\lambda_{+}^{n}\cup\lambda_{-}^{n})\). From the inclusions of \((M_{+},\xi_{+})-\phi_{+}^{-1}(\mathbb{R}^{3}|_{z<0},\xi^{n},\lambda_{+}^{n})\) and \((M_{-},\xi_{-})-\phi_{-}^{-1}(\mathbb{R}^{3}|_{z>0},\xi^{n},\lambda_{-}^{n})\) into \((M_{+}\#M_{-},\xi_{+}\#\xi_{-})\), we may regard \(\Lambda_{+}\) and \(\Lambda_{-}\) as Legendrian graphs in \((M_{+}\#M_{-},\xi_{+}\#\xi_{-})\) whose intersection is a single point \(v\), the image of the points \(v_{+}\) and \(v_{-}\). Then the fusion \(\Lambda_{+}*\Lambda_{-}\) is the union of \(\Lambda_{+}\) with \(\Lambda_{-}\) in \((M_{+}\#M_{-},\xi_{+}\#\xi_{-})\). Figure 1 illustrates this Legendrian fusion operation.
**Definition 2.10** (Murasugi sums of ribbons of Legendrian graphs).: At the point of fusion, \(\Lambda_{+}*\Lambda_{-}\) is locally contactomorphic to \(\lambda_{+}^{n}*\lambda_{-}^{n}\) which is the \(n\)-fold branched cover along the \(z\)-axis of \(\lambda_{+}*\lambda_{-}=\lambda_{+}\cup\lambda_{-}\) in \((\mathbb{R}^{3},\xi)\). We may choose ribbons \(R(\lambda_{+})\), \(R(\lambda_{-})\), and \(R(\lambda_{+}*\lambda_{-})\) so that for some ball neighborhood \(B\) of the origin, \(R(\lambda_{+})-B\) and \(R(\lambda_{-})-B\) are disjoint with union equalling \(R(\lambda_{+}*\lambda_{-})-B\). Observe that within \(B\), the surfaces \(R(\lambda_{+})\) and \(R(\lambda_{-})\) form a clasp intersection and project to subsurfaces of \(R(\lambda_{+}*\lambda_{-})\cap B\). Nevertheless, we may regard \(R(\lambda_{+}*\lambda_{-})\) as obtained from trimming \(R(\lambda_{+})\) and \(R(\lambda_{-})\) near their vertices and then joining to form \(R(\lambda_{+}*\lambda_{-})\). Topologically, we obtain \(R(\lambda_{+}*\lambda_{-})\) as a 2-Murasugi sum of \(R(\lambda_{+})\) and \(R(\lambda_{-})\), and we define this to be what's meant by the 2-Murasugi sum of the ribbons \(R(\lambda_{+})\) and \(R(\lambda_{-})\). Figure 2(Top) shows this operation. As \(R(\lambda_{+}^{n}*\lambda_{-}^{n})\) is the \(n\)-fold branched cover of \(R(\lambda_{+}*\lambda_{-})\), we define \(R(\lambda_{+}^{n}*\lambda_{-}^{n})\) to be the \(2n\)_-Murasugi sum_ of the ribbons \(R(\lambda_{+}^{n})\) and \(R(\lambda_{-}^{n})\), shown in Figure 2(Bottom) for \(n=3\). For the topological definition of a Murasugi sum of Seifert surfaces, see [1] for example. Thus we immediately have the following.
**Lemma 2.11**.: _Let \(\Lambda=\Lambda_{+}\ast\Lambda_{-}\) be a Legendrian fusion of \(\Lambda_{+}\) and \(\Lambda_{-}\). Then \(R(\Lambda)\) is a Murasugi sum of \(R(\Lambda_{+})\) and \(R(\Lambda_{-})\). \(\square\)_
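As a Murasugi sum glues two surfaces along a single disk (a \(2n\)-gon), Euler characteristics combine as
\[\chi\big(R(\Lambda_{+})*R(\Lambda_{-})\big)=\chi\big(R(\Lambda_{+})\big)+\chi\big(R(\Lambda_{-})\big)-1.\]
For example, plumbing two annular ribbons (each with \(\chi=0\)) yields \(\chi=-1\), consistent with the once-punctured torus \(H_{m,n}\) appearing later in Section 3.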
Note that we cannot have either \(R(\lambda_{+})\) or \(R(\lambda_{-})\) as a subsurface of \(R(\lambda_{+}\ast\lambda_{-})\). Were, say \(R(\lambda_{+})\) contained in \(R(\lambda_{+}\ast\lambda_{-})\), then generically the portion of \(R(\lambda_{+})\) that flows to its vertex (at the origin) would flow to other points of \(\lambda_{-}\) in \(R(\lambda_{+}\ast\lambda_{-})\). However, this would contradict the non-integrability of the contact structure.
**Theorem 2.12**.: _Suppose \(\Lambda\) is the Legendrian fusion \(\Lambda_{+}\ast\Lambda_{-}\) of two Legendrian graphs. If each of \(\Lambda_{+}\) and \(\Lambda_{-}\) has the Tight Reattachment Property, then \(\Lambda\) has the Tight Reattachment Property._
Proof.: By Lemma 2.11, \(R(\Lambda)\) is a Murasugi sum of \(R(\Lambda_{+})\) and \(R(\Lambda_{-})\). Indeed, the natural inclusions of \(\Lambda_{+}\) and \(\Lambda_{-}\) into \(M_{+}\#M_{-}\) extend to inclusions of their ribbons \(R(\Lambda_{+})\) and \(R(\Lambda_{-})\) as _nearly_ subsurfaces of the ribbon \(R(\Lambda)=R(\Lambda_{+}\ast\Lambda_{-})\).
Let \(N(\Lambda_{i})\) be a thickening of \(R(\Lambda_{i})\) so that \(\partial N(\Lambda_{i})\) is convex with dividing curves \(\partial R(\Lambda_{i})\). Again, the natural inclusions allow us to regard \(N(\Lambda_{i})\) as also a thickening of \(\Lambda_{i}\) in \(M_{+}\#M_{-}\). These can further be arranged so that \(N(\Lambda_{+})\) and \(N(\Lambda_{-})\) intersect within a ball neighborhood of the fusion vertex of \(\Lambda=\Lambda_{+}\cap\Lambda_{-}\) so that the components \(\partial_{\pm}N(\Lambda_{+})\) and the components \(\partial_{\pm}N(\Lambda_{-})\) are naturally identified with subsurfaces of \(\partial_{\pm}N(\Lambda)\).
Suppose for each \(i=+,-\) that \(\Lambda_{i}\) has the TRP. Then choose reattaching maps \(\psi_{i}\colon\partial_{-}N(\Lambda_{i})\to\partial_{+}N(\Lambda_{i})\) so that \((M_{\Lambda_{i},\psi_{i}},\xi_{\Lambda_{i},\psi_{i}})\), the resulting reattachment along \(\Lambda_{i}\), is a tight manifold. With the natural inclusions of \(N(\Lambda_{i})\) into \(M_{+}\#M_{-}\), we may regard \(\psi_{i}\) as a map from \(\partial_{-}N(\Lambda_{i})\) to \(\partial_{+}N(\Lambda_{i})\) there. Then, after the identifications of \(\partial_{\pm}N(\Lambda_{i})\) with subsurfaces of \(\partial_{\pm}N(\Lambda)\), let \(\hat{\psi}_{i}\colon\partial_{-}N(\Lambda)\to\partial_{+}N(\Lambda)\) be the extension of \(\psi_{i}\) by the identity. Now define \(\psi\colon\partial_{-}N(\Lambda)\to\partial_{+}N(\Lambda)\) as \(\hat{\psi}_{+}\circ\hat{\psi}_{-}\). We then observe that the reattachment along \(\Lambda\) by \(\psi\) is the connected sum of the reattachments along \(\Lambda_{i}\) by \(\psi_{i}\) for \(i=+,-\). Since the connected sum of tight manifolds is tight, this shows that \(\Lambda\) has the TRP.
### Subgraphs and the TRP
In general, non-looseness of a subset is conferred to any other subset that contains it. If \(T\) is a transverse link containing a non-loose sub-link \(T^{\prime}\), then \(T\) is non-loose as well. Similarly, if \(\Lambda\) is a Legendrian graph containing a non-loose subgraph \(\Lambda^{\prime}\), then \(\Lambda\) is non-loose. Here we observe that the TRP is also passed upwards from \(\Lambda^{\prime}\) to \(\Lambda\). With Theorem 2.6, this shows how this stronger sense of non-looseness of \(T^{\prime}=\partial R(\Lambda^{\prime})\) is passed to the non-looseness of \(T=\partial R(\Lambda)\) even though \(T^{\prime}\) is not contained in \(T\).
**Theorem 2.13**.: _Let \(\Lambda^{\prime}\) be a connected subgraph of a connected Legendrian graph \(\Lambda\) in a closed contact \(3\)-manifold. If \(\Lambda^{\prime}\) has the TRP, then \(\Lambda\) does too._
Proof.: Suppose \(\Lambda\) is a connected Legendrian graph in \((M,\xi)\). Since \(\Lambda^{\prime}\) is a subgraph of \(\Lambda\), the components \(\partial_{\pm}N(\Lambda^{\prime})\) are naturally identified with subsurfaces of \(\partial_{\pm}N(\Lambda)\). Since \(\Lambda^{\prime}\) has the TRP, choose a reattaching map \(\psi^{\prime}\colon\partial_{-}N(\Lambda^{\prime})\to\partial_{+}N(\Lambda^{\prime})\) so that the resulting reattachment is a tight manifold \((M_{\Lambda^{\prime},\psi^{\prime}},\xi_{\Lambda^{\prime},\psi^{\prime}})\). With the natural identifications, \(\psi^{\prime}\) extends by the identity to a reattaching map \(\psi\colon\partial_{-}N(\Lambda)\to\partial_{+}N(\Lambda)\). Since \(\psi\) is an extension of \(\psi^{\prime}\) by the identity, the reattached manifold \((M_{\Lambda,\psi},\xi_{\Lambda,\psi})\) is just the tight manifold \((M_{\Lambda^{\prime},\psi^{\prime}},\xi_{\Lambda^{\prime},\psi^{\prime}})\).
### Comultiplication and the LOSS invariant
Theorem 2.7 developed from an adaptation of Baldwin's comultiplication of the Heegaard Floer contact invariant [1] to a sort of comultiplication for the LOSS invariant [1] given in Theorem 2.14 below. One may view Proposition 2.15 as an algebraic version of Theorem 2.6 where a transverse knot has non-trivial LOSS invariant if a reattachment along a Seifert surface yields a manifold with non-trivial contact invariant. Since this may be of independent interest, we keep it here.
Let \((Y_{S,h},\xi_{S,h})\) be the contact \(3\)-manifold determined by the open book \((S,h)\). A choice of basis of arcs for \(S\) determines a chain complex \(\widehat{CF}(-Y_{S,h})\) and a special element \(\mathbf{x}_{S,h}\in\widehat{CF}(-Y_{S,h})\). The homology class of this element is the Ozsváth-Szabó contact invariant \(c(S,h)\in\widehat{HF}(-Y_{S,h})\) of this contact manifold [1]. Notably, if \((Y_{S,h},\xi_{S,h})\) contains an overtwisted disk, then \(c(S,h)=0\). Hence \(c(S,h)\neq 0\) implies \((Y_{S,h},\xi_{S,h})\) is tight.
A non-separating oriented simple closed curve \(K\) in the page \(S\) determines an oriented Legendrian knot \(K_{h}\) in \((Y_{S,h},\xi_{S,h})\). If the basis of arcs for \(S\) is chosen to intersect \(K\) exactly once, the chain complex \(\widehat{CF}(-Y_{S,h})\) is adapted to \(K_{h}\). When \(K_{h}\) is null-homologous in \(Y_{S,h}\), it provides a filtration of \(\widehat{CF}(-Y_{S,h})\) yielding a filtered complex \(\widehat{CFK}(-Y_{S,h},K_{h})\) and associated homology \(\widehat{HFK}(-Y_{S,h},K_{h})\). The LOSS invariant of \(K_{h}\) may be regarded as the homology class \(\widehat{\mathcal{L}}(S,h,K)\in\widehat{HFK}(-Y_{S,h},K_{h})\) of the special element \(\mathbf{x}_{S,h}\)[1]. Notably, if the complement of \(K_{h}\) contains an overtwisted disk, then \(\widehat{\mathcal{L}}(S,h,K)=0\). So if \(\widehat{\mathcal{L}}(S,h,K)\neq 0\), then the complement of \(K_{h}\) is tight. Accordingly if \(c(S,h)\neq 0\), then it follows that \(\widehat{\mathcal{L}}(S,h,K)\neq 0\).
(More accurately, the LOSS invariant is an equivalence class \(\widehat{\mathcal{L}}(K_{h})\) of pairs \((\widehat{HFK}(-Y_{S,h},K_{h}),[\mathbf{x}_{S,h}])\) up to isomorphism. However, as our interest is in determining tightness, here we only concern ourselves with whether or not the invariant is trivial.)
Given an open book \((S,hg)\) whose monodromy is expressed as a composition, Baldwin [1] defines a comultiplication map on the contact invariant
\[\widetilde{\mu}\colon\widehat{HF}(-Y_{S,hg})\to\widehat{HF}(-Y_{S,h})\otimes_{ \mathbb{Z}_{2}}\widehat{HF}(-Y_{S,g})\]
that sends \(c(S,hg)\) to \(c(S,h)\otimes_{\mathbb{Z}_{2}}c(S,g)\). So if \(c(S,h)\neq 0\) and \(c(S,g)\neq 0\) then \(c(S,hg)\neq 0\).
Here we extend this to a kind of comultiplication for the LOSS invariant.
**Theorem 2.14**.: _Suppose \(K\) is a non-separating oriented simple closed curve in the surface \(S\). If the knots \(K_{hg}\) and \(K_{g}\) are null-homologous in their manifolds \(Y_{S,hg}\) and \(Y_{S,g}\), then there is a comultiplication map_
\[\widetilde{\mu}\colon\widehat{HFK}(-Y_{S,hg},K_{hg})\to\widehat{HF}(-Y_{S,h}) \otimes_{\mathbb{Z}_{2}}\widehat{HFK}(-Y_{S,g},K_{g})\]
_that sends \(\widehat{\mathcal{L}}(S,hg,K)\) to \(c(S,h)\otimes_{\mathbb{Z}_{2}}\widehat{\mathcal{L}}(S,g,K)\)._
_In particular, if \(c(S,h)\neq 0\) and \(\widehat{\mathcal{L}}(S,g,K)\neq 0\) then \(\widehat{\mathcal{L}}(S,hg,K)\neq 0\)._
Proof.: Baldwin shows that his map \(\widetilde{\mu}\) is induced from a map
\[\mu\colon\widehat{CF}(-Y_{S,hg})\to\widehat{CF}(-Y_{S,h})\otimes_{\mathbb{Z} _{2}}\widehat{CF}(-Y_{S,g})\]
in which
\[\mu(\mathbf{x}_{S,hg})=\mathbf{x}_{S,h}\otimes_{\mathbb{Z}_{2}}\mathbf{x}_{S,g}.\]
Along with the basepoint \(z\) used to define these chain complexes and map, our inclusion of the oriented curve \(K\) in \(S\) and the corresponding knots adds another basepoint \(w\) which induces a filtration.
From a basis of arcs \(\mathbf{a}=\{a_{1},\dots,a_{n}\}\) in \(S\), a perturbation produces a basis of arcs \(\mathbf{b}=\{b_{1},\dots,b_{n}\}\) in which each endpoint of \(b_{i}\) is moved slightly in the direction of \(\partial S\) so that \(b_{i}\) intersects \(a_{i}\) exactly once. Similarly a basis of arcs \(\mathbf{c}=\{c_{1},\dots,c_{n}\}\) is obtained by perturbing \(\mathbf{b}\). Together \(a_{i},b_{i},c_{i}\) appear as in Figure 3 in a strip neighborhood of \(a_{i}\). The basepoint \(z\) is placed outside these neighborhoods in \(S\) as also shown.
For the LOSS invariant, the arcs \(\mathbf{a}\) are also chosen so that \(K\) is disjoint from all \(a_{i}\) except \(a_{1}\) which it transversally intersects exactly once. Then, along with \(z\) as above, the basepoint \(w\) is chosen disjoint from \(\mathbf{a}\cup\mathbf{b}\cup\mathbf{c}\) in the strip neighborhood of \(a_{1}\) so that it is
* between \(a_{1}\) and \(b_{1}\) to define \(K_{g}\subset Y_{S,g}\) in the page of \((S,g)\) with respect to \(\mathbf{a}\) and \(\mathbf{b}\), and also
* between \(a_{1}\) and \(c_{1}\) to define \(K_{hg}\subset Y_{S,hg}\) in the page of \((S,hg)\) with respect to \(\mathbf{a}\) and \(\mathbf{c}\).
This, however, forces \(w\) to be in the same region as \(z\), making \(K\) a trivial curve in the page of \((S,h)\) with respect to \(\mathbf{b}\) and \(\mathbf{c}\), so that the induced knot \(K_{h}\subset Y_{S,h}\) is a local, trivial unknot.
For each \(f\in\{h,g,hg\}\), these basepoints filter the complex \(\widehat{CF}(-Y_{S,f})\) to produce the filtered complex \(\widehat{CFK}(-Y_{S,f},K_{f})\) whereupon the homology class of \(\mathbf{x}_{f}\) in \(\widehat{HFK}(-Y_{S,f},K_{f})\) becomes the LOSS invariant of \(K_{f}\). Of course, in the case of \(K_{h}\), the basepoint \(w\) provides no extra filtering, so we have that \(\widehat{CFK}(-Y_{S,h},K_{h})\cong\widehat{CF}(-Y_{S,h})\) and the LOSS invariant of \(K_{h}\) is equivalent to \(c(S,h)\).
The arguments of Baldwin extend to incorporate the filtration induced by the extra basepoint, allowing \(\mu\) to become a map
\[\mu\colon\widehat{CFK}(-Y_{S,hg},K_{hg})\to\widehat{CF}(-Y_{S,h})\otimes_{\mathbb{Z}_{2}}\widehat{CFK}(-Y_{S,g},K_{g})\]
that still takes \(\mathbf{x}_{S,hg}\) to \(\mathbf{x}_{S,h}\otimes_{\mathbb{Z}_{2}}\mathbf{x}_{S,g}\). Hence \(\widetilde{\mu}\) becomes a map
\[\widetilde{\mu}\colon\widehat{HFK}(-Y_{S,hg},K_{hg})\to\widehat{HF}(-Y_{S,h}) \otimes_{\mathbb{Z}_{2}}\widehat{HFK}(-Y_{S,g},K_{g})\]
that takes \(\widehat{\mathcal{L}}(S,hg,K)\) to \(c(S,h)\otimes_{\mathbb{Z}_{2}}\widehat{\mathcal{L}}(S,g,K)\).
Figure 3. The placement of the triple of arcs \(a_{i},b_{i},c_{i}\) and the basepoints \(z\) and \(w\).
**Proposition 2.15**.: _Let \(K\) be an oriented simple closed curve in the surface \(S\) that bounds a subsurface \(S_{K}\) with positive genus. Suppose that \(g\) and \(h\) are diffeomorphisms of \(S\) which restrict to the identity on \(\partial S\), that \(c(S,h)\neq 0\), and that \(g\) is supported in \(S_{K}\)._
_Let \((S^{\prime},hg\eta)\) be the boundary connected sum of \((S,hg)\) with a positive Hopf band \((H,\eta)\). Let \(K^{\prime}\) be the result of a single handle-slide of \(K\) over the core of the Hopf band so that \(K^{\prime}\) is non-separating in \(S^{\prime}\). Then \(\widehat{\mathcal{L}}(S^{\prime},hg\eta,K^{\prime})\neq 0\)._
_Furthermore, we may view \(S_{K}\) as the ribbon \(R(\Lambda)\) of a Legendrian graph \(\Lambda\) so that \(K=\partial R(\Lambda)\) is a transverse knot. Consequently, the complement of \(K\) is tight._
Proof.: By Theorem 2.14, \(\widehat{\mathcal{L}}(S^{\prime},hg\eta,K^{\prime})\neq 0\) will follow if \(\widehat{\mathcal{L}}(S^{\prime},g\eta,K^{\prime})\neq 0\) since \(c(S,h)\neq 0\) by assumption.
The curve \(K^{\prime}\) in \(S^{\prime}\) bounds a subsurface \(P\cup S_{K}\) where \(P\) is a pair of pants whose boundary is the two curves \(K\) and \(K^{\prime}\) and a component of \(\partial S^{\prime}\) so that \(P\) contains the support of \(\eta\). Recall that \(a_{1}\) is a properly embedded arc in \(S^{\prime}\) intersecting \(K^{\prime}\) exactly once. Let \(N\) be a closed small regular neighborhood of \(K^{\prime}\cup a_{1}\) (that also contains \(b_{1}\) and \(c_{1}\)). Then in \(\partial N\) there is a properly embedded separating arc \(\delta\) of \(S^{\prime}\) that is disjoint from \(P\cup S_{K}\). Cutting \(S^{\prime}\) along \(\delta\) splits it into a surface \(S_{0}\) and the surface \(S_{NPK}=N\cup P\cup S_{K}\) (where \(N\backslash(P\cup S_{K})\) is just an annulus). Conversely, \(S^{\prime}\) is a boundary connected sum of \(S_{0}\) and \(S_{NPK}\). Hence, as the support of \(g\eta\) is contained in \(P\cup S_{K}\), we may view the open book \((S^{\prime},g\eta)\) containing the curves \(K\) and \(K^{\prime}\) as the boundary connected sum of the open book \((S_{0},id)\) and the open book \((S_{NPK},g\eta)\) containing the curves \(K\) and \(K^{\prime}\). In particular, the contact connected sum induces an isomorphism
\[\widehat{CF}(-Y_{S^{\prime},g\eta})\cong\widehat{CF}(-Y_{S_{0},id})\otimes_ {\mathbb{Z}_{2}}\widehat{CF}(-Y_{S_{NPK},g\eta})\]
in which \(\mathbf{x}_{S^{\prime},g\eta}\) is identified with \(\mathbf{x}_{S_{0},id}\otimes_{\mathbb{Z}_{2}}\mathbf{x}_{S_{NPK},g\eta}\). Thus this passes to a map involving the filtered complexes
\[\widehat{CFK}(-Y_{S^{\prime},g\eta},K^{\prime}_{g\eta})\to\widehat{CF}(-Y_{S_ {0},id})\otimes_{\mathbb{Z}_{2}}\widehat{CFK}(-Y_{S_{NPK},g\eta},K^{\prime}_{ g\eta})\]
so that, in homology, \(\widehat{\mathcal{L}}(S^{\prime},g\eta,K^{\prime})\) is sent to \(c(S_{0},id)\otimes_{\mathbb{Z}_{2}}\widehat{\mathcal{L}}(S_{NPK},g\eta,K^{ \prime})\).
Since \(c(S_{0},id)\neq 0\), we have that \(\widehat{\mathcal{L}}(S^{\prime},g\eta,K^{\prime})\neq 0\) if \(\widehat{\mathcal{L}}(S_{NPK},g\eta,K^{\prime})\neq 0\). To show this, observe that \(K\) is the connected binding of the open book \((S_{K},g)\), and the open book \((S_{NPK},g\eta)\) contains \(K^{\prime}\) as a Legendrian approximation of this transverse knot \(K\). As the binding of \((S_{K},g)\), \(K\) has non-zero transverse LOSS invariant. Hence the LOSS invariant of its Legendrian approximation \(K^{\prime}\) is also non-zero. That is, \(\widehat{\mathcal{L}}(S_{NPK},g\eta,K^{\prime})\neq 0\) as desired.
Now having that \(\widehat{\mathcal{L}}(S^{\prime},hg\eta,K^{\prime})\neq 0\), \(K^{\prime}_{hg\eta}\) is a Legendrian knot in \(Y_{S^{\prime},hg\eta}\cong Y_{S,hg}\) with tight complement. By construction, it is a Legendrian approximation of the transverse knot \(K=\partial R(\Lambda)\). Thus \(K\) has tight complement as well.
## 3. Examples
Here we demonstrate basic applications of Theorem 2.6 and Theorem 2.12. For this, let us establish notation for the contact structures on \(S^{3}\). Of course we have the unique tight contact structure which we denote \(\xi_{std}\). Up to isotopy, there is an integer family of overtwisted contact structures \(\xi_{d}\) on \(S^{3}\), which are distinguished by their \(d_{3}\)-invariant \(d_{3}(\xi_{d})=d\in\mathbb{Z}+\frac{1}{2}\); see [1] for a general definition. For one's bearings, the open book of the negative Hopf band supports the contact structure \(\xi_{1/2}\) while the open book of the positive Hopf band supports \(\xi_{std}\) which has \(d_{3}(\xi_{std})=-1/2\).
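For the computations below we also recall the standard behavior of the \(d_{3}\)-invariant under contact connected sum (a well-known formula, stated here for convenience):
\[d_{3}(\xi\#\xi^{\prime})=d_{3}(\xi)+d_{3}(\xi^{\prime})+\tfrac{1}{2}.\]
In particular \((S^{3},\xi_{std})\), with \(d_{3}(\xi_{std})=-\tfrac{1}{2}\), is the identity for this operation, and since overtwisted contact structures on \(S^{3}\) are determined up to isotopy by their \(d_{3}\)-invariant, the formula pins down the overtwisted structures arising from the connected sums later in this section.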
### Legendrian unknots with the TRP
**Lemma 3.1**.: _Every non-loose Legendrian unknot has the TRP._
Proof.: If a Legendrian unknot is non-loose, it is an unknot either in a tight contact manifold or in an overtwisted \(S^{3}\). Legendrian unknots in tight manifolds trivially have the TRP. The classification of Legendrian unknots in overtwisted \(S^{3}\)'s given in Theorem 3.2 below leads to Legendrian surgery descriptions of these knots, shown in Figure 4, demonstrating the TRP. In particular, a \((-1)\) Legendrian surgery on the Legendrian unknot will cancel the \((+1)\) surgery on the (purple) parallel push-off. For Figure 4(a), this leaves a single \(tb=-1\) unknot with a \((+1)\) surgery that yields the tight contact structure on \(S^{1}\times S^{2}\). For the Legendrian unknots of Figure 4(b) and (c), this leaves a surgery diagram with no \((+1)\) Legendrian surgeries; hence the resulting manifold is tight.
The classification of Legendrian unknots up to Legendrian isotopy in \((S^{3},\xi_{std})\) is due to Eliashberg and Fraser [1]. Non-loose Legendrian unknots in overtwisted \(S^{3}\)'s were classified up to coarse equivalence by Eliashberg and Fraser [1]; see [1] and [2] for alternative proofs. We say that two Legendrian knots \(L_{1}\) and \(L_{2}\) are _coarsely equivalent_ if there is a contactomorphism of the ambient manifold carrying \(L_{1}\) to \(L_{2}\).
**Theorem 3.2** (Classification of Legendrian unknots, [1]).:
1. _Let_ \(L\) _be an oriented Legendrian unknot in_ \((S^{3},\xi_{std})\)_. Then the classical invariants_ \((tb,rot)\) _determine_ \(L\) _up to Legendrian isotopy. All Legendrian unknots are stabilizations of the unique Legendrian unknot with_ \(\mathtt{tb}=-1\) _and_ \(\mathtt{rot}=0\)_. For each negative integer_ \(n\leq-1\)_, we have_ \(|n|\) _distinct Legendrian unknots with_ \(\mathtt{tb}=n\)_. The stabilization of_ \(L\) _is obtained by replacing the box in Figure_ 4 _with a sequence of_ \(|n|-1\) _stabilizations, each of type_ \(s\) _or_ \(z\) _shown on the bottom of Figure_ 4_. If the clockwise orientation is chosen for_ \(L\)_, the type_ \(s\) _stabilization gives a negative stabilization, and the type_ \(z\) _stabilization gives a positive stabilization of_ \(L\)_. (The type_ \(s\) _and_ \(z\) _stabilizations commute, so the order of the sequence of stabilizations does not matter.)_
2. _Let_ \(L\) _be an oriented non-loose Legendrian unknot in an overtwisted contact structure_ \(\xi\) _on_ \(S^{3}\)_. Then_ \(\xi\) _is the contact structure_ \(\xi_{1/2}\) _and the invariants_ \((\mathtt{tb}(L),\mathtt{rot}(L))\in\{(n,\pm(n-1)):n\in\mathbb{N}\}\) _determine_ \(L\) _up to coarse equivalence. These are illustrated in Figure_ 5_._
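For the count of \(|n|\) unknots with \(\mathtt{tb}=n\) in item (1), note that each stabilization changes \((\mathtt{tb},\mathtt{rot})\) by \((-1,\pm 1)\). Starting from \((\mathtt{tb},\mathtt{rot})=(-1,0)\) and performing \(p\) positive and \(q\) negative stabilizations with \(p+q=|n|-1\) gives
\[(\mathtt{tb},\mathtt{rot})=(n,\,p-q),\qquad p-q\in\{-(|n|-1),\,-(|n|-1)+2,\,\dots,\,|n|-1\},\]
a total of \(|n|\) values. Forgetting orientations identifies \(\mathtt{rot}\) with \(-\mathtt{rot}\), which yields the unoriented counts in Corollary 3.3 below.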
**Corollary 3.3** (Classification of unoriented Legendrian unknots).:
1. _Let_ \(L\) _be an unoriented Legendrian unknot in_ \((S^{3},\xi_{std})\)_. All Legendrian unknots are stabilizations of the unique Legendrian unknot with_ \(\mathtt{tb}=-1\)_. Up to coarse equivalence, for each positive integer_ \(k\geq 1\)_, we have_ \(k\) _distinct Legendrian unknots with even_ \(\mathtt{tb}=-2k\)_, and we have_ \(k\) _distinct Legendrian unknots with odd_ \(\mathtt{tb}=1-2k\)_._
2. _Let_ \(L\) _be an unoriented non-loose Legendrian unknot in_ \((S^{3},\xi_{1/2})\)_. Up to coarse equivalence, for each positive integer_ \(k\geq 1\)_, we have a unique non-loose Legendrian unknot with_ \(\mathtt{tb}=k\)_._
Figure 4. If the clockwise orientation is chosen for \(L\), the type \(s\) stabilization (left) gives a negative stabilization, and the type \(z\) stabilization (right) gives a positive stabilization of \(L\).
Figure 5. Legendrian surgery descriptions of non-loose Legendrian unknots. The left knot \(L\) has \((\mathtt{tb},\mathtt{rot})=(1,0)\). Depending on a choice of orientation of \(L\), the middle knot \(L\) has \((\mathtt{tb},\mathtt{rot})=(2,\pm 1)\) and the right one has \((\mathtt{tb},\mathtt{rot})=(n,\pm(n-1)),n\geq 3\).
### Generalized Hopf Links
For \(n\in\mathbb{Z}\), let \(H_{n}\) be the annulus whose boundary is the anti-parallel \((2,-2n)\)-torus link as shown in Figure 6(Left). So \(H_{1}\) is the positive Hopf band, \(H_{-1}\) is the negative Hopf band, and \(H_{0}\) is the planar annulus. We say \(H_{n}\) is a _generalized Hopf band_ and its boundary \(\partial H_{n}\) is a _generalized Hopf link_.
**Theorem 3.4**.: _For \(n\neq 0\), let \(\Lambda_{n}\) be a non-loose Legendrian unknot with \(tb=-n\) in \((S^{3},\xi)\) where \(\xi=\xi_{std}\) if \(n>0\) and \(\xi=\xi_{1/2}\) if \(n<0\). Then \(R(\Lambda_{n})\) is topologically the generalized Hopf band \(H_{n}\), and the transverse generalized Hopf link \(\partial R(\Lambda_{n})\) is non-loose._
_Conversely, suppose for \(n\neq 0\) that \(\Lambda\) is a Legendrian graph in \((S^{3},\xi)\) such that \(R(\Lambda)\) is topologically the generalized Hopf band \(H_{n}\) and the transverse generalized Hopf link \(\partial R(\Lambda)\) is non-loose. Then \(\Lambda\) deformation retracts to a non-loose Legendrian unknot with \(tb=-n\) and \(\xi=\xi_{std}\) if \(n>0\) and \(\xi=\xi_{1/2}\) if \(n<0\)._
Proof.: Since \(\mathsf{tb}(\Lambda_{n})=-n\), a positive push-off of \(\Lambda_{n}\) has linking number \(-n\) with \(\Lambda_{n}\). Thus, if they were oriented coherently, the components of \(\partial R(\Lambda_{n})\) would be the \((2,-2n)\) torus link. Hence \(R(\Lambda_{n})\) is the generalized Hopf band \(H_{n}\). The tightness of the complement of the transverse link \(\partial R(\Lambda_{n})\) follows from Theorem 2.6 since \(\Lambda_{n}\) has the TRP as discussed in Lemma 3.1.
For the converse, suppose for some Legendrian graph \(\Lambda\) in \((S^{3},\xi)\) that \(\partial R(\Lambda)\) is non-loose and \(R(\Lambda)\) is isotopic to the generalized Hopf band \(H_{n}\). As the deformation retract of \(\Lambda\) is also a spine for \(R(\Lambda)\), Lemma 2.4 shows we may assume \(\Lambda\) has no leaves. Thus, as a spine of \(H_{n}\), we may assume \(\Lambda\) is a Legendrian unknot with \(tb=-n\). Since \(\partial R(\Lambda)\) is non-loose, Lemma 1.1 implies that \(\Lambda\) is non-loose. The result now follows from the classification of non-loose Legendrian unknots, Theorem 3.2.
**Remark 3.5**.: When \(n\geq 3\), we have multiple transverse generalized Hopf links representing the same topological generalized Hopf link \(\partial H_{n}\). These correspond to having multiple Legendrian isotopy classes of unoriented Legendrian unknots, see Corollary 3.3. They can be distinguished by looking at the self-linking of their individual components.
When \(n\leq-1\), there is a unique (up to contactomorphism) unoriented non-loose \(\Lambda_{n}\) and hence unique (up to contactomorphism) non-loose transverse generalized Hopf link representing \(\partial H_{n}\).
### Double twist knots and further Murasugi sums
Define \(H_{m,n}\) to be the once-punctured torus embedded in \(S^{3}\) that is the plumbing of the generalized Hopf bands \(H_{m}\) and \(H_{n}\) as in Figure 6(Right). The knots \(K_{m,n}=\partial H_{m,n}\) are known as _double twist knots_ for \(m,n\neq 0\). If either \(m=0\) or \(n=0\), then \(H_{m,n}\) compresses and \(K_{m,n}\) is the unknot. Otherwise \(K_{m,n}\) is a genus \(1\) knot and \(H_{m,n}\) is an incompressible Seifert surface. This is the unique incompressible Seifert surface if \(m=\pm 1\) or \(n=\pm 1\). When \(|m|>1\) and \(|n|>1\), there are two non-isotopic ones related by changing the plumbing disk. (Since the double twist knots are two-bridge knots, this classification of incompressible Seifert surfaces follows from [11] for example.)
Figure 6. (Left) The generalized Hopf bands \(H_{2}\) and \(H_{-3}\). (Right) The plumbing \(H_{2,-3}\) of \(H_{2}\) with \(H_{-3}\).
For \(n\neq 0\), let \(\Lambda_{n}\) be a non-loose Legendrian unknot with \(tb=-n\) in \((S^{3},\xi)\). Then for \(m\neq 0\) and \(n\neq 0\), we define \(\Lambda_{m,n}\) to be a Legendrian graph \(\Lambda_{m}*\Lambda_{n}\) that is a fusion of \(\Lambda_{m}\) and \(\Lambda_{n}\) of order \(2\). Examples of such fusions are depicted via Legendrian surgery description in Figure 7. Topologically, \(H_{m,n}=R(\Lambda_{m,n})\).
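Since \(H_{m,n}\) is a once-punctured torus, \(\chi(R(\Lambda_{m,n}))=2-2(1)-1=-1\), so the formula \(sl(\partial R(\Lambda),[R(\Lambda)])=-\chi(R(\Lambda))\) recalled in the discussion of the Bennequin bound gives
\[sl(K_{m,n})=-\chi(H_{m,n})=1.\]
In particular, once Theorem 3.6 below establishes non-looseness, \(K_{m,n}\) realizes the Bennequin bound of Proposition 1.10 for its genus-\(1\) Seifert surface \(H_{m,n}\).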
**Theorem 3.6**.: _A Legendrian graph \(\Lambda_{m,n}\) for non-zero integers \(m,n\) is a non-loose Legendrian graph in \((S^{3},\xi)\) for_
* \(\xi=\xi_{std}\) _if_ \(m,n>0\)_,_
* \(\xi=\xi_{1/2}\) _if_ \(m>0>n\) _or_ \(n>0>m\)_, and_
* \(\xi=\xi_{3/2}\) _if_ \(0>m,n\)_._
_Furthermore, the transverse double twist knot \(K_{m,n}=\partial R(\Lambda_{m,n})\) is non-loose._
Proof.: By Theorem 3.2, \(\Lambda_{n}\) is a Legendrian unknot in \((S^{3},\xi)\) where \(\xi=\xi_{std}\) if and only if \(n>0\) while \(\xi=\xi_{1/2}\) if and only if \(n<0\). As \(\Lambda_{m,n}\) is in the connected sum of the contact manifolds containing \(\Lambda_{m}\) and \(\Lambda_{n}\), we obtain the classification of contact manifolds containing \(\Lambda_{m,n}\) as stated.
Since \(\Lambda_{m}\) and \(\Lambda_{n}\) have the TRP by Lemma 3.1, Theorem 2.12 implies that \(\Lambda_{m,n}\) also has the TRP. Hence Theorem 2.6 shows that \(\partial R(\Lambda_{m,n})\) is non-loose.
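As a check on the three cases, the connected sum formula for \(d_{3}\) recalled at the start of this section gives
\[d_{3}(\xi_{std}\#\xi_{1/2})=-\tfrac{1}{2}+\tfrac{1}{2}+\tfrac{1}{2}=\tfrac{1}{2},\qquad d_{3}(\xi_{1/2}\#\xi_{1/2})=\tfrac{1}{2}+\tfrac{1}{2}+\tfrac{1}{2}=\tfrac{3}{2},\]
identifying the overtwisted structures as \(\xi_{1/2}\) and \(\xi_{3/2}\) respectively, while \(\xi_{std}\#\xi_{std}=\xi_{std}\) as the connected sum of tight structures on \(S^{3}\).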
More generally, let \(\mathcal{H}\) be the minimal set of Seifert surfaces in \(S^{3}\) that contains the generalized Hopf bands \(H_{n}\) for \(n\neq 0\) and is closed under Murasugi sum. Thus, any \(H\in\mathcal{H}\) Murasugi de-sums into a collection of generalized Hopf bands. Let \(\nu(H)\) be the number of generalized Hopf bands \(H_{n}\) in this collection with \(n<0\).
**Theorem 3.7**.: _If \(H\in\mathcal{H}\) with \(\nu=\nu(H)\), then there is a Legendrian graph \(\Lambda\) with the TRP in \((S^{3},\xi)\) where \(\xi=\xi_{std}\) if \(\nu=0\) and \(\xi=\xi_{\nu-1/2}\) if \(\nu>0\) such that \(R(\Lambda)\) is topologically isotopic to \(H\) and the transverse link \(\partial R(\Lambda)\) is non-loose._
Figure 7. Legendrian surgery descriptions of examples of Legendrian graphs \(\Lambda_{m,n}\)
Proof.: If \(H\) is just a generalized Hopf band, then this follows from Theorem 3.4. So now by induction, assume that \(H\) is a Murasugi sum \(H_{1}*H_{2}\) of two surfaces \(H_{1}\) and \(H_{2}\) in \(\mathcal{H}\) where \(\nu(H_{i})=\nu_{i}\). Then \(\nu(H)=\nu=\nu_{1}+\nu_{2}\).
Furthermore, by induction, \(H_{i}\) is isotopic to the ribbon \(R_{i}\) of a Legendrian graph with the TRP in \((S^{3},\xi_{i})\) for each \(i=1,2\). Since any spine for \(R_{i}\) may be Legendrian realized by Lemma 2.3, let \(\Lambda_{i}\) be one (for each \(i=1,2\)) so that the Murasugi sum \(H_{1}*H_{2}\) is realized as the ribbon of a Legendrian fusion \(\Lambda_{1}*\Lambda_{2}\). Thus \(H\) is isotopic to the ribbon \(R(\Lambda_{1}*\Lambda_{2})\) in \((S^{3},\xi_{1}\#\xi_{2})\). As \(\Lambda_{i}\) has the same exterior as \(R_{i}\), it has the TRP too. Then by Theorem 2.12, \(\Lambda_{1}*\Lambda_{2}\) has the TRP, and by Theorem 2.6, \(\partial R(\Lambda_{1}*\Lambda_{2})\) is non-loose.
Finally, observe that \(\nu=0\) if and only if \(\nu_{1}=\nu_{2}=0\) so \(\xi_{1}\#\xi_{2}=\xi_{std}\) if \(\nu=0\). Otherwise, since at least one of \(\xi_{i}\) is overtwisted, \(\xi_{1}\#\xi_{2}\) is overtwisted and the calculation \(\xi_{1}\#\xi_{2}=\xi_{\nu_{1}+\nu_{2}-1/2}=\xi_{\nu-1/2}\) follows.
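Explicitly, both the base case and the inductive step satisfy \(d_{3}(\xi_{i})=\nu_{i}-\tfrac{1}{2}\) (this holds for \(\xi_{std}\) when \(\nu_{i}=0\) and for \(\xi_{\nu_{i}-1/2}\) when \(\nu_{i}>0\)), so the connected sum formula for \(d_{3}\) recalled at the start of this section gives
\[d_{3}(\xi_{1}\#\xi_{2})=\left(\nu_{1}-\tfrac{1}{2}\right)+\left(\nu_{2}-\tfrac{1}{2}\right)+\tfrac{1}{2}=\nu-\tfrac{1}{2},\]
and, as the overtwisted contact structures on \(S^{3}\) are determined by \(d_{3}\), this is the claimed \(\xi_{\nu-1/2}\).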
**Remark 3.8**.: Though a ribbon \(R\) of a Legendrian graph \(\Lambda\) may be topologically expressed as a Murasugi sum, it is not clear that some spine of \(R\) may be Legendrian realized as a Legendrian fusion. Furthermore, even if \(\Lambda=\Lambda_{1}*\Lambda_{2}\) is non-loose, it may be that \(\Lambda_{1}\) or \(\Lambda_{2}\) is loose.
## Acknowledgements
We thank Lev Tovstopyat-Nelip for helpful discussions, sharing the work in progress [14], and pointing out Remark 1.4. SO also thanks both the Department of Mathematics and the Institute of the Mathematical Sciences of the Americas (IMSA) at the University of Miami for their hospitality while this work was developed and written.
KLB was partially supported by the Simons Foundation grant #523883 and gift #962034. SO was partially supported by the Turkish Fulbright Commission, IMSA Visitor Program, and TUBITAK 2219.
|
2305.10833 | Deep Learning Methods for Extracting Metaphorical Names of Flowers and
Plants | The domain of Botany is rich with metaphorical terms. Those terms play an
important role in the description and identification of flowers and plants.
However, the identification of such terms in discourse is an arduous task. This
leads in some cases to committing errors during translation processes and
lexicographic tasks. The process is even more challenging when it comes to
machine translation, both in the cases of single-word terms and multi-word
terms. One of the recent concerns of Natural Language Processing (NLP)
applications and Machine Translation (MT) technologies is the automatic
identification of metaphor-based words in discourse through Deep Learning (DL).
In this study, we seek to fill this gap through the use of thirteen popular
transformer based models, as well as ChatGPT, and we show that discriminative
models perform better than GPT-3.5 model with our best performer reporting
92.2349% F1 score in metaphoric flower and plant names identification task. | Amal Haddad Haddad, Damith Premasiri, Tharindu Ranasinghe, Ruslan Mitkov | 2023-05-18T09:22:29Z | http://arxiv.org/abs/2305.10833v3 | # Deep Learning Methods for Extracting Metaphorical Names of Flowers and Plants
_Métodos de aprendizaje profundo para la extracción de nombres metafóricos de flores y plantas_
**Amal Haddad Haddad\({}^{1}\), Damith Premasiri\({}^{2}\), Tharindu Ranasinghe\({}^{3}\), Ruslan Mitkov\({}^{4}\)**
\({}^{1}\)University of Granada, Spain
\({}^{2}\)University of Wolverhampton, UK
\({}^{3}\)Aston University, UK
\({}^{4}\)Lancaster University, UK
[email protected]
**Abstract:** The domain of Botany is rich with metaphorical terms. Those terms play an important role in the description and identification of flowers and plants. However, the identification of such terms in discourse is an arduous task. This leads in some cases to committing errors during translation processes and lexicographic tasks. The process is even more challenging when it comes to machine translation, both in the cases of single-word terms and multi-word terms. One of the recent concerns of Natural Language Processing (NLP) applications and Machine Translation (MT) technologies is the automatic identification of metaphor-based words in discourse through Deep Learning (DL). In this study, we seek to fill this gap through the use of thirteen popular transformer-based models, as well as ChatGPT, and we show that discriminative models perform better than the GPT-3.5 model, with our best performer reporting a 92.2349% F1 score in the metaphoric flower and plant names identification task.
**Keywords:** Deep Learning, Transformers, Automatic Extraction of Metaphor, Metaphor-based Terms
**Resumen:** El dominio de la Botánica es rico en términos metafóricos. Estos términos tienen un papel importante en la descripción e identificación de flores y plantas. Sin embargo, la identificación de este tipo de términos en el discurso es una tarea difícil. Esto puede conducir a errores en los procesos de traducción y otras tareas lexicográficas. Este proceso es aún más difícil cuando se trata de traducción automática, tanto en el caso de las unidades monoléxicas, como en el caso de las unidades multiléxicas. Uno de los desafíos a los que se enfrentan las aplicaciones del Procesamiento del Lenguaje Natural y las tecnologías de Traducción Automática es la identificación de términos basados en metáfora a través de métodos de aprendizaje profundo. En este estudio, tenemos el objetivo de rellenar este vacío a través del uso de trece modelos populares basados en transformadores, además del ChatGPT. Asimismo, demostramos que los modelos discriminativos aportan mejores resultados que los modelos de GPT-3.5. El mejor resultado alcanzó una puntuación de 92,2349% F1 en las tareas de identificación de nombres metafóricos de flores y plantas.
**Palabras clave:** Aprendizaje profundo, Transformadores, Extracción automática de metáfora, Términos basados en metáfora
## 1 Introduction
Metaphor is a pervasive phenomenon in human language Lakoff and Johnson (2008). It is defined as "mapping of conceptual structure from a source to a target domain" Ruiz de Mendoza Ibanez (2017). Depending on the dimension of complexity of metaphor, authors distinguish two types of metaphors: image metaphors and conceptual metaphors Lakoff and Johnson (2008). Image metaphors compare one single image in one domain with another image belonging to another domain, such as the image metaphor "she is as good as gold". Conceptual metaphors are more complex
at the conceptual and cognitive levels, and they refer to the resemblance established between a whole set of experiences, such as the metaphor in "life is a journey", which implies a whole set of elements activated within the metaphoric target domain. Metaphor-based terms, or the so-called terminological metaphors, are common in specialised languages. Their use is abundant, as they help in the conceptualisation of phenomena and their description by establishing a resemblance between images and domains. They also help in understanding abstract phenomena in terms of more concrete notions and in modelling scientific thought (Urena Gomez-Moreno and Faber, 2010).
However, the identification of metaphor-based terms in discourse is an arduous task. This leads in some cases to committing errors during translation processes and lexicographic tasks. The process is even more challenging when it comes to machine translation, both in the cases of single-word terms and multi-word terms, which are represented by Multiword Expressions (MWEs). The most common error in translation is that the metaphorical lexical items forming part of a term are transferred literally into other languages without taking into consideration their metaphoric and cultural dimension, or without taking into account that they form part of an MWE.
Previous studies focused on the extraction of metaphorical terms from discourse, such as Mu, Yannakoudakis, and Shutova (2019) and Razali et al. (2022); however, to the best of our knowledge, there are no programs that could automatically retrieve those terms both as single-word terms and MWEs in specialised languages. This study seeks to fill in this gap and proposes a novel method based on transformer models Premasiri et al. (2022); Premasiri and Ranasinghe (2022); Ranasinghe et al. (2021) for automatic extraction of metaphor-based terms from the specialised domain of Botany and concerning the names of flowers and plants in English and Spanish. The main contributions of this study are:
1. We empirically evaluate thirteen discriminative transformer models and one generative transformer model (ChatGPT) for the task of metaphoric flower and plant name identification on English and Spanish datasets.
2. We show that discriminative models perform better in the metaphoric flower and plant names identification task.
3. We release new annotated datasets for metaphoric names identification in English and Spanish.
4. We make our code freely available for further research1. Footnote 1: [https://bit.ly/3pYAYXK](https://bit.ly/3pYAYXK)
This paper is organised as follows: in Section 2 we present previous related work. In Section 3 we describe the dataset used and its annotation process. In Section 4 we detail the experimental set-up and methodology, while in Section 5 we report our experiment's results and evaluation. Finally, we summarise the main conclusions and propose future work in Section 6.
## 2 Related work
The study of metaphor-based terms in discourse has been a subject of study in the last few decades. One of the main concerns in this field is the detection of metaphor-based words in discourse. With this aim, the Pragglejaz Group suggested a method for the manual identification of metaphor, called the Metaphor Identification Procedure (MIP) (Pragglejaz Group, 2007). This method has been used extensively Nacey et al. (2019). Studies like Turney et al. (2011), Jang et al. (2015) and Coll-Florit and Climent (2019) have a similar approach. Other projects such as the VU Amsterdam Metaphor Corpus Leong et al. (2020) offer a manually annotated corpus for all metaphorical language use. Moreover, studies like Yaneva (2016) show how the use of metaphor and figurative language in discourse is of utmost difficulty for people with Autism Spectrum Disorder (ASD); hence, studies like Yaneva (2016) and Stajner et al. (2017) endeavour to identify and disambiguate complex sentences which contain metaphor and metonymy among other features through the application of Complex Word Identification modules. The above studies were partially inspired by the FIRST Project Orasan, Evans, and Mitkov (2018) and the
development of the Open Book tool which helps people with ASD.
Concurrently, one of the recent concerns of Natural Language Processing (NLP) applications and Machine Translation (MT) technologies is the automatic identification of metaphor-based words in discourse through Deep Learning Methods (DLM). For example, Mu, Yannakoudakis, and Shutova (2019) suggest working with large corpora and "training _simple gradient boosting classifiers_ on representations of an utterance and its surrounding discourse learned with a variety of document embedding methods". Su et al. (2020) focus on the token-level metaphor detection paradigm and propose using an end-to-end deep metaphor detection model. Authors like Razali et al. (2022) use machine learning to automatically detect metaphor instances in short texts by implementing Support Vector Machine algorithms, while other authors like Gutierrez et al. (2016) propose modelling metaphor explicitly within compositional distributional semantic models to improve the resulting vector representations. Those authors classify the already used methods in the following categories: clustering; topic modelling; topical structure and imageability analysis; semantic similarity graphs and feature-based classifiers (Gutierrez et al., 2016). Recent approaches are more centred on using dense embedding methods (Vitez et al., 2022).
On the other hand, the study of metaphor-based terms in specialised discourse has been subject to scientific and cognitive studies. The automatic identification of metaphor-based terms is considered a substantial challenge. Some studies highlight the importance of automatic extraction of terms in specialised discourse (Rodriguez Penagos and others, 2005)
while other studies, such as Urena Gomez-Moreno and Faber (2010),
propose a semi-automatic method for term retrieval in the domain of Marine Biology. However, to the best of our knowledge, there have been no previous studies or methodologies which cover the automatic extraction of those terms from scientific discourse in other domains and no previous studies were carried out in the domain of Botany.
## 3 Data
Specialised discourse is rich in metaphor-based terms; Botany is no exception. The semantic motivations for plant names are usually influenced by the appearance of the plant, the place of its occurrence, the properties of the plant, its usage, as well as other motivations typical of a specific genus of species (Debowiak and Waniakowa, 2019). Many studies have shown that metaphor is one of the most frequent techniques to coin flower and plant names (Rastall, 1996; Nissan, 2014; Debowiak and Waniakowa, 2019). This metaphoric use may give clues to cultural references related to legends and beliefs associated with plants in general, like their healing properties and supposed magical powers (Debowiak and Waniakowa, 2019). At the same time, this shows that this metaphorical use may vary among languages and cultures. From another perspective, studies like Goodman (1963) highlight the importance of flower names based on metaphor for the study of colour and its comparison among languages. For this reason, we consider the study of metaphor-based terms in this domain relevant as a case study.
The dataset we use to extract metaphor-based terms in English is the Encyclopaedia of Flowers and Plants, published by the American Horticultural Society (Brickell, 2012). We selected this edition as it is available in a digitalised format in the online library of the Internet Archive. This Encyclopaedia consists of 522,707 words. It contains a dictionary of names of flowers from around the world, with approximately 8000 terms referring to both scientific and common names and their origins, as well as 4000 images. It is divided into the following sections: firstly it has an introduction about how to use the book, plant names and origins and relevant information on how to create a garden and how to select plants. This introductory part shows that it is aimed at both professionals and laypersons. Secondly, it has a plant catalogue, subdivided into categories such as trees, shrubs, roses, climbers and wall shrubs, perennials, annuals, biennials and bedding, rock plants, bulbs, water and bog plants as well as tender and exotic plants. All those subsections contain rich contexts on each term, concerning the origin, uses, habitat, size, etc. Finally, the
Encyclopaedia offers a dictionary section with an index of common names and glossary of terms. We benefited from this last section to extract and annotate terms. The advantage of using this Encyclopaedia is that it includes a wide range of varieties of flowers and plants from all around the world. For this reason, the obtained results may be applied in different contexts and in multidisciplinary studies.
The data was pre-processed by annotating the proper names and their metaphorical condition. The MIP criteria for metaphor identification [11] were adapted to annotate the terms, considering a term as metaphor-based when one or more of the lexical units forming it, or its etymology, give evidence that they belong to different domains, based on its meaning in the dictionary. The annotated names represent both image metaphors and conceptual metaphors. An example of an image metaphor is the one-word name of the flower _Edelweiss_, which is a combination of the two lexical units _edel_, which means noble, and _weiss_, which means white in German. This name represents an image metaphor where the flower is so called because it symbolises purity. The scientific name of this flower is _Leontopodium Alpinum_, an MWE with Greek origin and etymology. It is also an image metaphor, as the lexical unit _Leontopodium_ means lion's foot [12]; the resemblance is established between the form of the petals of the flowers and the aspect of the foot of a lion. Other examples are the flowers _Sunburst_ and _Moonlight_. The name of the flower _Sunburst_ shows the resemblance between the colours of the flower and the colours of the sun, while the flower called _Moonlight_ alludes to the resemblance between the flower and the light of the moon. Other metaphor-based names represent a conceptual image, such as the MWE flower name _forget-me-not_, which refers to the heart-shaped blue flowers that remind the person of his or her beloved one; or the one-word name of the flower _cascade_, which associates the aspect of the flower with the whole process of water falling in a real cascade.
Apart from the Encyclopaedia of Plants and Flowers, we also compiled a corpus of other resources related to Botany in English. It consists of 437,663 words. Some of the texts are monographs, others are journal articles, and some texts are retrieved from other online resources. The full list of references used to compile the English corpus is listed in Appendix 1. With respect to the Spanish dataset, we have annotated a list of flower and plant names provided in selected monographs and glossaries following the same criteria as in the case of the English terms. Above all, we used books and articles in the domain of Botany and botanical glossaries, such as the glossaries provided in _Los Arboles en Espana_ [11], _Biologia de la Conservacion de Plantas en Sierra Nevada_ [10] and the glossary of scientific names of plants and their vernacular names provided by the Entomological Museum in Leon on the Bio-Nica webpage3. The list obtained from this source consists of more than 5000 scientific and vernacular names of flowers and plants. As for the book _Los Arboles en Espana_, it consists of almost 155,000 words with more than 600 terms in the Glossary section. The book describes the details of each plant, its family names, its vernacular names and synonyms, its origin, etymology, description and cultivation information. It also provides illustrative images of each plant. The book _Biologia de la Conservacion de Plantas en Sierra Nevada_ was also valuable, as some of its chapters contained lists of scientific names of endemic flowers from Sierra Nevada and their common names. In order to enhance the datasets, we also added more specialised, semi-specialised and informative texts in the domain of botany to obtain richer contexts. The resulting Spanish corpus consists of 460,258 words. The full list of the sources used to compile the Spanish corpus is listed in the Appendix.
Footnote 3: [http://www.bio-nica.info/home/index.html](http://www.bio-nica.info/home/index.html)
With this paper, we release datasets of English and Spanish flower and plant names annotated as metaphoric or non-metaphoric. The English dataset consists of 6330 total plant and flower names, a combination of 1869 metaphorical names and 4461 non-metaphorical names. The Spanish dataset consists of 875 metaphoric names and 4988 non-metaphoric names out of 5863 total.
**Data Preparation.** Since we model the metaphoric name identification task as a token level classification task, we used IOB
format tagging for our corpus. IOB format is widely used in token level classification tasks (Tjong Kim Sang and De Meulder, 2003)
where B marks the Beginning, I the Inside, and O the Outside of a metaphoric flower or plant name; Table 1 shows an example IOB annotation. After tagging the sentences from the corpus, we identified that there was a very high number of sentences without a single metaphoric name. In other words, the majority of the sentences only had 'O' as the tag for all their words. Since this has a negative impact on the model training process, we decided to balance the dataset by removing some sentences. Then we shuffled all the sentences and divided them into training and test sets. Finally, we had 2020 total sentences, divided into 1500 and 520 sentences for the English training and test sets, respectively. For Spanish, we used only 250 sentences as the dataset.
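As a minimal illustration of this tagging scheme, the sketch below converts a tokenised sentence and a known metaphoric name span into IOB tags; it is illustrative only, and our actual preprocessing pipeline (available in the project repository) may differ.

```python
# Minimal sketch of IOB tagging for a sentence with a known metaphoric name.
# Illustrative only; the actual preprocessing pipeline may differ.
def to_iob(tokens, name_tokens):
    """Tag the first occurrence of `name_tokens` with B/I; everything else is O."""
    tags = ["O"] * len(tokens)
    n = len(name_tokens)
    for i in range(len(tokens) - n + 1):
        if tokens[i:i + n] == name_tokens:
            tags[i] = "B"
            tags[i + 1:i + n] = ["I"] * (n - 1)
            break
    return tags

tokens = "Calliandra haematocephala ( Red powder puff ) is an evergreen shrub".split()
print(to_iob(tokens, ["Red", "powder", "puff"]))
# ['O', 'O', 'O', 'B', 'I', 'I', 'O', 'O', 'O', 'O', 'O']
```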
Test sets were the same for the discriminative and generative experiments. The only difference is that the generative approach does not use the training set, since we cannot train ChatGPT.
## 4 Methodology
**Discriminative Models.** Transformers Vaswani et al. (2017) have been a major breakthrough in Deep Learning research, since they provide a robust attention-based mechanism that lets neural networks propagate information without recurrence or convolution. This architecture has produced state-of-the-art results in many NLP applications. With the introduction of BERT Devlin et al. (2019), which employs the transformer architecture, pre-trained large language models have played an important role in pushing the boundaries of NLP tasks such as text classification Ranasinghe et al. (2019), Uyangodage et al. (2020), question answering Premasiri et al. (2022), and text similarity Mitkov et al. (2023), achieving new state-of-the-art results. With this motivation, we use transformers as our primary experimental setup and evaluate multiple pre-trained language models. These models follow architectures similar to BERT Devlin et al. (2019) while being pre-trained on different corpora and with different objectives. Figure 1, adapted from Ranasinghe and Zampieri (2021), shows the transformer architecture we used: we input sentences which contain metaphoric flower and plant names, then obtain BIO tags from the output layer by adding a softmax layer on top of the last hidden state of the deep network to classify each token into one of the I, O, B tags. We used several popular transformer-based pre-trained language models.
For the experiments on the English dataset, we used the cased and uncased variants of the BERT base and BERT large versions. In order to establish the capabilities of multilingual models, we experimented with the multilingual-bert Devlin et al. (2019) model in its cased and uncased variants, as well as the xlm-roberta-base Conneau et al. (2020) model and the xlm-roberta-large Conneau et al. (2020) version. We further experimented with the google/electra-base-discriminator Clark et al. (2020) model, which differs from the BERT architecture. Finally, within these discriminative models we evaluated the allenai/scibert_scivocab_cased Beltagy et al. (2019) and allenai/scibert_scivocab_uncased Beltagy et al. (2019) variants, which are specifically pre-trained on scientific corpora. We assume that flower and plant names could appear in those corpora, such that the model can leverage this pre-training to produce better results.
Since Spanish is low in resources for metaphoric flower and plant name corpora, we experimented with zero-shot transfer to Spanish from models fine-tuned on English data. We specifically used the multilingual-bert Devlin et al. (2019) and xlm-roberta Conneau et al. (2020) models for this experimental setting, as they provide multilingual capabilities.
All the models were trained for three epochs with a learning rate of 4e-5 and a training batch size of 32; for hardware, we used a GeForce RTX 3090 GPU.
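A minimal fine-tuning sketch with the Hugging Face transformers library is shown below, using the hyperparameters reported above; the dataset loader is a hypothetical helper, and the full training code is in the project repository.

```python
# Sketch of fine-tuning a BERT-style encoder for B/I/O token classification with
# the reported hyperparameters (3 epochs, learning rate 4e-5, batch size 32).
# `load_iob_dataset` is a hypothetical helper producing tokenized, tag-aligned splits.
from transformers import (AutoTokenizer, AutoModelForTokenClassification,
                          Trainer, TrainingArguments)

labels = ["O", "B", "I"]
model_name = "bert-base-multilingual-cased"  # the best performer in our experiments
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(model_name,
                                                        num_labels=len(labels))

train_ds, test_ds = load_iob_dataset(tokenizer)  # hypothetical data-loading helper

args = TrainingArguments(output_dir="metaphor-tagger", num_train_epochs=3,
                         learning_rate=4e-5, per_device_train_batch_size=32)
Trainer(model=model, args=args, train_dataset=train_ds,
        eval_dataset=test_ds).train()
```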
**Generative Models.** While all the above methods rely on the discriminative approach, which tries to identify boundaries in the data space, generative models attempt to model the placement of the data throughout the space. This approach attracted huge attention in the research community with the release of ChatGPT by OpenAI.
Research on Generative Pre-trained Transformer (GPT) (Radford et al., 2018) models has produced multiple versions, including GPT-3, GPT-3.5 and GPT-4. The free version of ChatGPT only supports GPT-3.5 for the time being, and all our experiments are based on the ChatGPT free version. According to OpenAI, the most capable and cost-effective of their models is gpt-3.5-turbo, which we used for our experiments.
Since ChatGPT is a generalised conversational application, it does not natively provide IOB tags as outputs. After experimenting with different prompts to retrieve IOB tags from ChatGPT, we decided it would be easier to retrieve the metaphoric flower or plant name in the sentence from the API6, and _No_ otherwise. The prompt we used: _Is there a metaphoric flower name or metaphoric plant name included in the following sentence, say yes or no, if yes what is the metaphoric flower or metaphoric plant names in the sentence separately : {sentence goes here}_. The outputs of ChatGPT are not uniform, and we had to post-process them using regular expressions to re-generate the IOB tags for evaluation.
Footnote 6: [https://bit.ly/30LCWFn](https://bit.ly/30LCWFn)
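A sketch of this querying and post-processing step is given below; it uses the openai Python client as it existed at the time of writing, and the parsing heuristics are illustrative rather than the exact regular expressions we used.

```python
# Sketch of querying gpt-3.5-turbo with the prompt above and mapping the free-form
# reply back to IOB tags. The client call follows the openai v0.x API; the parsing
# heuristics below are illustrative, not the exact expressions used.
import re
import openai

PROMPT = ("Is there a metaphoric flower name or metaphoric plant name included in the "
          "following sentence, say yes or no, if yes what is the metaphoric flower or "
          "metaphoric plant names in the sentence separately : {sentence}")

def chatgpt_iob(sentence):
    reply = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": PROMPT.format(sentence=sentence)}],
    )["choices"][0]["message"]["content"]
    tokens = sentence.split()
    tags = ["O"] * len(tokens)
    if reply.strip().lower().startswith("yes"):
        name_words = set(re.findall(r"\w+", reply)[1:])  # crude: words after "yes"
        inside = False
        for i, tok in enumerate(tokens):
            if tok.strip(".,()") in name_words:
                tags[i] = "I" if inside else "B"
                inside = True
            else:
                inside = False
    return tags
```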
Since this is a token classification task, we use macro averaged Precision, Recall and F1 score as our evaluation metrics.
\[Precision=TP/(TP+FP) \tag{1}\]
\[Recall=TP/(TP+FN) \tag{2}\]
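The macro averaged F1 score is the harmonic mean of the two:

\[F1=2\cdot Precision\cdot Recall/(Precision+Recall) \tag{3}\]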
| Calliandra | haematocephala | (Red | powder | puff) | is | an | evergreen, | spreading | shrub |
|---|---|---|---|---|---|---|---|---|---|
| O | O | B | I | I | O | O | O | O | O |

Table 1: BIO annotation example
Figure 1: Transformers architecture for token level classification
## 5 Results and Discussion
### English
The results in Table 2 show the competitive performance of transformer models in the flower and plant name classification task. Despite the fact that most of the transformer models we experimented with are not specifically pre-trained on botanic corpora, almost all discriminative models were able to produce more than a 90% F1 score in the task. Interestingly, the multilingual BERT model surpassed the other models and marked the top result at a 92.2349% F1 score.
Another noteworthy observation in our study was that cased models outperformed their respective uncased models. Even though xlm-roberta-base was the weakest performer among the discriminative models, the performance gap to the best performer is only 2.3789%, which shows the competitiveness of transformers in token level classification tasks.
Even though the scibert models are specifically trained on scientific corpora, they were not able to outperform the multilingual BERT model, which suggests that general knowledge could play a significant role in the metaphor identification task.
While ChatGPT seems very good at handling general text, it does not perform well in identifying metaphoric flower and plant names. Given that we cannot further fine-tune the GPT model with our corpus, ChatGPT struggles to identify and generate text with metaphoric flower and plant names. Another important observation was that ChatGPT did not produce consistent results: we observed different outputs for the same sentence across repeated queries. This shows that ChatGPT is uncertain about its answers on metaphoric flower and plant names; GPT-4 may achieve a better understanding with more data. We leave this for future work.
### Spanish
Table 3 shows the results on Spanish data in a zero-shot configuration using models fine-tuned on English data. We note that for all models, learning from English data has led to decent results on Spanish metaphoric flower and plant name identification. Interestingly, the bert-base-multilingual-cased model performs better in both languages, marking over a 52% F1 score on Spanish. There is a significant difference between the English and Spanish results, as expected, because the models were fine-tuned on English metaphoric data; we were not able to do the same in Spanish due to the lack of resources.
ChatGPT maintained similar performance for Spanish, recording over a 51% F1 score. This is very close to the best discriminative model but does not outperform the bert-base-multilingual-cased model. Unlike ChatGPT, discriminative models can be fine-tuned, so we conjecture that their performance could be boosted with a fine-tuning step on more data.
## 6 Conclusions
The detection of metaphorical terms is an important research area for many NLP applications. Detecting metaphor-based terms of flowers and plants may give birth to different multidisciplinary research and applications. On the one hand, it may help in overcoming the so-called plant awareness disparity or plant blindness Parsley (2020) as the metaphoric factor would help in remembering the names of flowers and plants and their aspect. It may also give insightful information to Cognitive Studies towards understanding phenomena such as metaphor and metonymy, and even towards a more comprehensive understanding of conceptual complexes Ruiz de Mendoza Ibanez (2017). This may be carried out by comprehending the associations between metaphoric names and the image of the flower and plant
| Model | P | R | F1 |
|---|---|---|---|
| bert-base-multilingual-uncased | 59.2957 | 40.3103 | 43.0472 |
| bert-base-multilingual-cased | 54.0904 | 52.1401 | **52.8657** |
| xlm-roberta-base | 67.4035 | 36.5622 | 37.4988 |
| xlm-roberta-large | 64.1040 | 47.4813 | 51.8174 |
| ChatGPT | 63.1887 | 46.6820 | 51.4120 |

Table 3: Results on metaphoric flower and plant names identification in Spanish; P - the macro averaged Precision, R - the macro averaged Recall, F1 - the macro averaged F1 score.
representing them, and how the resemblance of images or the metonymic aspect is conceptualised through the coinage of terms. On the other hand, this information is also helpful for the studies of representation of abstract phenomena in art and its comprehension across languages. The automatic extraction of those terms is a step towards achieving more comprehensive and accurate results. In addition, this may help render texts more accessible to people with ASD. At the same time, these types of studies may also help in the development of software or mobile applications to be used by both laypersons and professionals.
In conclusion, we show that state-of-the-art transformers are well capable of identifying metaphoric flower and plant names.
## 7 Acknowledgements
Part of this research was carried out within the framework of the projects PID2020-118369GB-I00 and A-HUM-600-UGR20, funded by the Spanish Ministry of Science and Innovation and the Regional Government of Andalusia. Funding was also provided by an FPU grant (FPU18/05327) given by the Spanish Ministry of Education. We also want to thank Elvira Camara Aguilera for her help in the annotation process.
|
2308.08058 | Hyper-Drive: Visible-Short Wave Infrared Hyperspectral Imaging Datasets
for Robots in Unstructured Environments | Hyperspectral sensors have enjoyed widespread use in the realm of remote
sensing; however, they must be adapted to a format in which they can be
operated onboard mobile robots. In this work, we introduce a first-of-its-kind
system architecture with snapshot hyperspectral cameras and point spectrometers
to efficiently generate composite datacubes from a robotic base. Our system
collects and registers datacubes spanning the visible to shortwave infrared
(660-1700 nm) spectrum while simultaneously capturing the ambient solar
spectrum reflected off a white reference tile. We collect and disseminate a
large dataset of more than 500 labeled datacubes from on-road and off-road
terrain compliant with the ATLAS ontology to further the integration and
demonstration of hyperspectral imaging (HSI) as beneficial in terrain class
separability. Our analysis of this data demonstrates that HSI is a significant
opportunity to increase understanding of scene composition from a robot-centric
context. All code and data are open source online:
https://river-lab.github.io/hyper_drive_data | Nathaniel Hanson, Benjamin Pyatski, Samuel Hibbard, Charles DiMarzio, Taşkın Padır | 2023-08-15T22:01:00Z | http://arxiv.org/abs/2308.08058v1 | Hyper-Drive: Visible-Short Wave Infrared Hyperspectral Imaging Datasets for Robots in Unstructured Environments
###### Abstract
Hyperspectral sensors have enjoyed widespread use in the realm of remote sensing; however, they must be adapted to a format in which they can be operated onboard mobile robots. In this work, we introduce a first-of-its-kind system architecture with snapshot hyperspectral cameras and point spectrometers to efficiently generate composite datacubes from a robotic base. Our system collects and registers datacubes spanning the visible to shortwave infrared (660-1700 nm) spectrum while simultaneously capturing the ambient solar spectrum reflected off a white reference tile. We collect and disseminate a large dataset of more than 500 labeled datacubes from on-road and off-road terrain compliant with the ATLAS ontology to further the integration and demonstration of hyperspectral imaging (HSI) as beneficial in terrain class separability. Our analysis of this data demonstrates that HSI is a significant opportunity to increase understanding of scene composition from a robot-centric context. All code and data are open source online: [https://river-lab.github.io/hyper_drive_data](https://river-lab.github.io/hyper_drive_data)
Nathaniel Hanson\({}^{1*}\), Benjamin Pyatski\({}^{1}\), Samuel Hibbard\({}^{1}\), Charles DiMarzio\({}^{2}\), Taşkın Padır\({}^{1}\)\({}^{1}\)Institute for Experiential Robotics; \({}^{2}\)Electrical and Computer Engineering Department
Northeastern University, Boston, Massachusetts, USA
hyperspectral imaging, robot spectroscopy, multi-modal sensing, terrain segmentation
## 1 Introduction
Mirroring human-level terrain perception in robots is an area of active research, given its criticality in enabling actionable intelligence prior to traversing the surface. Terrain can be best understood as a type of abstract material of unspecified extent [1]. Unlike objects defined by regular geometric properties, terrain challenges traditional perception systems because surface materials vary widely in their size and shape. For instance, a farm field and a patch of soil have the same texture and micro appearance, but cover drastically different geographic areas. Coarse labels like grass, soil, and sand are useful in semantic segmentation, but intra-class differences affect traversability. On road surfaces, these features might manifest as oil slicks, standing water, or black ice.
Our approach to this problem builds on our previous work in terrain classification with point-based spectroscopy and multi-modal sensing [2]. This present work introduces hyperspectral imaging (HSI) to mobile robot terrain understanding. In this research, we develop a system, HYPER DRIVE, shown in Fig. 1, to capture datacubes from a moving platform with variable illumination. Our work shows forward-facing HSI is a powerful tool for robotics and is useful in a variety of terrain conditions. The contributions of this paper are as follows.
* Development of a sensing system to incorporate Visible to Short Wave Infrared hyperspectral cameras onto a mobile robot with solar reference spectrum.
* Open-source software framework and message types implemented for the Robot Operating System [3]
* Multi-modal dataset containing registered imaging products and ambient solar spectra data for the system.
Figure 1: Hyper-Drive system mounted to off-road mobile robot, with sample data representations of white reference target from a) the Visible to Near Infrared (VNIR) hyperspectral camera b) Shortwave Infrared hyperspectral camera c) High resolution RGB camera d) Combined point spectrometers.
## 2 Related Work
Hyperspectral data acquired from moving vehicles is still in its infancy. The recent innovation of snapshot hyperspectral cameras with fast integration times has made it possible to capture images of a nonplanar scene, while either the camera or objects are moving relative to each other. The absence of artifacts like motion blurring is critical to obtain representative spectra and distinct spatial characteristics.
### Hyperspectral Terrain Datasets
There have been multiple efforts to develop vehicle-mounted hyperspectral cameras to collect datacubes from off-road [4, 5] and on-road [6, 7, 8] environments. Notably, all the aforementioned examples make use of Visible-Near Infrared (VNIR) cameras, which are well-suited to detect vegetative properties, but do not have the same insight into the numerous absorption bands in the shortwave infrared spectrum [9]. Unlike RGB image terrain segmentations, which have adopted standard label sets such as the widely-used KITTI classes [10], HSI datasets are contextually driven, with labels largely chosen by the operating range of the camera. Examples of semantic label sets from the literature include (dataset name bolded):
* **HSI-Drive**: Road, road marks, vegetation, painted metal, sky, concrete/stone/brick, pedestrian/cyclist, water, unpainted metal, glass/transparent plastic [6].
* **Wilkens et al.** Drivable, rough, obstacle, sky [11]
* **HyKo** Plastic, soil, paper, street, wood, chlorophyll, metal, sky [5].
* **Jakubczyk et al.** Ground road, forest road, asphalt road, grass, forest [4].
* **Hyperspectral City V1.0**: Car, human, road, traffic light, traffic sign, tree, building, sky, object [7].
* **HSI Road**: Road, background [12].
[6] contains annotations for the time of day and the time of year, and aggregations of classes into categories including drivable / non-drivable, drivable / road markers / vehicles, and drivable / road markers / vehicles / pedestrians. Similarly, [5] included multi-modal sensors, including a VNIR spectrometer and Light Detection and Ranging (LIDAR) sensors, in their HyKo dataset, but did not elaborate on how the spectrometer could benefit the system calibration. HyKo also contains condensed annotations on drivability classes (rough, sky, obstacle, drivable).
### Spectral Informed Terrain Understanding
[11] shows the potential of HSI in terrain classification, even with simple machine learning methods such as random forest, and later through manual feature extraction [13].
[8] claims HSI overcomes challenges induced by object metamerism. RGB images are empirically shown to have a lower degree of separability than HSI datacubes. The authors exploit this information in a semantic segmentation network, with a finetuning module leveraging 10 classes that combine semantic purpose and material information.
[14] explores the use of RGB imagery alone to perform material segmentation with transformer-based neural networks. They also propose a dataset called KITTI-Materials, containing 1000 frames with 20 different material categories [14]. The same group also augmented RGB imagery with NIR and polarization images to improve classification accuracy on materials such as metal and water [15].
In our previous work, we demonstrated high accuracy in terrain classification by measuring the spectral signatures of terrain near the contact point on wheeled vehicles [2]. Combining feature-specific neural networks for IMU and RGB image classification through a fusion network provided increased performance. The results of [16] show that further fusing visual imagery and NIR spectral signatures aids in the classification of urban road surface types.
## 3 System Architecture
### Imaging System
For this research we integrated two snapshot hyperspectral cameras onto a reconfigurable platform. Both sensors are manufactured by IMEC. The VNIR camera captures light from 660-900 nm through a 5\(\times\)5 array of Fabry-Perot filters placed in front of a silicon-based photodetector element. When corrected, these filters produce a data cube with 24 spectral bands. Similarly, the IMEC SWIR camera captures wavelength information from 1100-1700 nm using a 3\(\times\)3 filter array. These filters are placed on an uncooled Indium Gallium Arsenide (InGaAs) photodetector. Unlike the VNIR camera, which has filters evenly spaced across the sensitive spectral range, the SWIR camera concentrates its bands in the lower range of the wavelength spectrum, where there are more prevalent spectral absorption features. Combined, our hyperspectral sensing solution features 33 channels, covering 1100 nm of the electromagnetic spectrum. The cameras are housed inside a 3D printed Onyx housing weatherized with epoxy. An uncoated Gorilla Glass window allows minimal perturbation of the light entering the lens.
The hyperspectral cameras are coaligned with a 5 megapixel RGB machine vision camera (Allied Vision). This system provides a high-resolution spatial reference containing the hyperspectral cameras' full field of view.
The hyperspectral cameras have a primary FOV of \(25^{\circ}\). Each is calibrated using a precision checkerboard target board. The individual bands of the camera are then radially undistorted. The combined hyperspectral datacube is generated by calculating projective transforms through the checkerboard images, using bands that reveal the highest contrast between the board's squares. The combined datacube dimension is \(1012\times 1666\times 33\); each cube is registered with the RGB camera.
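A minimal sketch of this band-to-band registration step is shown below, assuming an OpenCV-based pipeline; the exact calibration code ships with the project repository, and the checkerboard pattern size here is an assumption.

```python
# Sketch of estimating a projective transform between two high-contrast bands
# using checkerboard corners, then warping one band into the other's frame.
# Assumes OpenCV; the pattern size (9x6 inner corners) is an assumption.
import cv2

def register_band(src_band, dst_band, pattern=(9, 6)):
    ok_src, pts_src = cv2.findChessboardCorners(src_band, pattern)
    ok_dst, pts_dst = cv2.findChessboardCorners(dst_band, pattern)
    assert ok_src and ok_dst, "checkerboard not detected in one of the bands"
    H, _ = cv2.findHomography(pts_src, pts_dst, cv2.RANSAC)
    h, w = dst_band.shape[:2]
    return cv2.warpPerspective(src_band, H, (w, h)), H
```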
### Point Spectrometer
The protective water-proof computer housing also contains two point spectrometers (Ibsen). The Pebble VIS-NIR, with a silicon detector, is sensitive between 500 and 1100 nm with 256 spectral pixels; the Pebble NIR, with an uncooled InGaAs detector, is sensitive from 950 to 1700 nm with 128 pixels. The two spectrometers have an overlap in the spectral signature range, which is advantageous because of the decreased quantum efficiency of the VIS-NIR spectrometer at wavelengths greater than 950 nm. As evidenced by Fig. 2, we leverage the greater efficiency of the InGaAs detector above this point and truncate the VIS-NIR device to wavelengths less than 950 nm. Together, the spectrometers cover a greater spectral range than the hyperspectral cameras, with increased sensitivity. The InGaAs sensors are an order of magnitude less sensitive than the silicon-based devices, resulting in correspondingly longer integration times.
The point spectrometers are coupled to low OH (Hydroxyl) group fiber optic cables (Thor Labs). This allows the spectrometers to remain inside the weatherized housing while still sensing light from the outside. The fiber optic cables are connected to an SMA fitting, which holds the end ferrules offset 4 cm above a 99% Spectralon white reference target (Labsphere). The spectrometers measure a white reference signal under the current illumination conditions, both natural and artificial. This reference signal will be used to dynamically generate reflectance calibrations in future work.
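A minimal sketch of how the two spectrometer channels can be spliced at 950 nm, and of the standard flat-field conversion of a raw signal to reflectance against the white reference, is given below; the wavelength grids and dark frames are illustrative assumptions rather than the system's calibrated pipeline.

```python
# Sketch of splicing the silicon (VIS-NIR) and InGaAs (NIR) spectrometer outputs
# at 950 nm, and of the standard flat-field conversion to reflectance against the
# 99% Spectralon white reference. Inputs are assumed 1-D numpy arrays.
import numpy as np

def splice_spectra(wl_vis, counts_vis, wl_nir, counts_nir, cut_nm=950.0):
    keep_vis = wl_vis < cut_nm       # silicon detector is used below the cut
    keep_nir = wl_nir >= cut_nm      # InGaAs detector is used above the cut
    wavelengths = np.concatenate([wl_vis[keep_vis], wl_nir[keep_nir]])
    counts = np.concatenate([counts_vis[keep_vis], counts_nir[keep_nir]])
    return wavelengths, counts

def reflectance(raw, white, dark):
    # flat-field normalization: (target - dark) / (white reference - dark)
    return (raw - dark) / np.clip(white - dark, 1e-6, None)
```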
### Messaging and Computation
Snapshot hyperspectral datacubes produced by the system are \(\approx\)20 megabytes. At its maximum operating rate, the system can generate nearly 1 gigabyte of data per second. To expedite data processing, the onboard compute system leverages an Intel Core i7 processor with 3 terabytes of SSD storage.
The Robot Operating System [3] is used to synchronize data collection from all the onboard systems. The three cameras are time-synchronized, so the RGB, VNIR, and SWIR images all correspond to the same scene at a rate of 10 Hz. All custom ROS compatible drivers, data structures, and algorithms are implemented in Python and C++ and are included as an open-source resource in the project repository. ROS enables time-synchronized data from the camera system and point spectrometers to be acquired, even when device drivers operate at different frequencies. Spectrometer messages contain time stamps, wavelength values, and raw digital counts, in addition to optional metadata fields, such as ambient humidity and device temperature. Hyperspectral data are transmitted as flattened 1-D arrays, with the dimensions of the 3-D cube, as well as meta-data fields for the central wavelengths, quantum efficiencies, and full-width half-maximum values. These message structures are extensible to other manufacturers, with examples available in the project repository.
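The sketch below shows how a datacube can be round-tripped through the flattened-array layout described above; the field names are assumptions for illustration, since the authoritative message definitions live in the project repository.

```python
# Sketch of packing/unpacking a datacube via the flattened 1-D array layout.
# Field names (`data`, `height`, `width`, `bands`, `wavelengths_nm`) are
# assumptions; the real ROS .msg definitions are in the project repository.
import numpy as np

def pack(cube, wavelengths_nm):
    h, w, b = cube.shape
    return {"data": cube.ravel(), "height": h, "width": w,
            "bands": b, "wavelengths_nm": list(wavelengths_nm)}

def unpack(msg):
    return np.asarray(msg["data"]).reshape(msg["height"], msg["width"], msg["bands"])

cube = np.zeros((8, 8, 33), dtype=np.uint16)  # full frames are 1012 x 1666 x 33
assert unpack(pack(cube, range(33))).shape == cube.shape
```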
## 4 Hyper Drive Dataset
### Dataset Construction
We conducted a field data collection at the autonomous systems track on the campus of Olin College. The compute and sensor systems were mounted atop a Warthog unmanned ground vehicle (Clearpath Robotics). The Warthog's large ground clearance and tire treads make it an ideal candidate for mobility in both on-road and off-road terrain.
Data were collected over two days, spanning late afternoon, sunset, early morning sunrise, and mid-day. This temporal variation increases the diversity of solar illumination and image intensity. The vehicle was driven on 4.0 kilometers of paved roads, hiking trails, and dirt roads. We downsampled the data collection to 1 Hz to ensure scene differentiation between samples. The published dataset contains the following structure:
* ROS bag file containing compressed time synchronized datastream with all following datatypes (.bag)
* RGB image file registered to datacube (.png)
* Hyperspectral datacube compressed in 3-D array (.npz)
* White reference spectra from spectrometers (.npy)
* Image segmentation masks (.png)

Figure 2: Quantum efficiencies of each of the spectrum sensing devices used in these experiments. _Note:_ the Fabry–Pérot filters allow a spectral response typically characterized by a primary transmission peak, followed by a smaller secondary peak. This smaller feature is corrected for in the hypercube demosaicing operation.
### Annotations
Previous hyperspectral datasets suffer from a lack of consistent labels, making it difficult to compare the performance of classification algorithms between datasets. We address this problem by adopting the All-Terrain Labelset for Autonomous System (ATLAS) [17]. ATLAS provides an extensible, hierarchical ontology to generate fine-grain or coarse labels for off-road vehicle data. Each datacube has a set of labels for the whole scene: _(biome, time of day, season, weather)_. Additionally, instance labels mark the presence of specific features in the image. At the highest level, these categorizations include: _[landscape, vegetation, animal, person, obstacle, atmospheric]_. Each of these labels can be decomposed into more specific classes or simplified to binary labels such as _obstacle_ or _landscape_. Fig. 3 shows images from the dataset overlayed with ATLAS labels.
As part of the initial data release, we provide 12,874 datacubes and RGB images collected from the various data collections. 500 of these images have been finely labeled with segmentation masks. To the best of the authors' knowledge, this is the largest and most diverse vehicle-centric hyperspectral dataset and the first to include shortwave infrared information. Table 1 contains an extracted breakdown of the full label information as a function of the hierarchical classes.
## 5 Discussion
As a motivating example for why this dataset is beneficial in robotic terrain analysis, we consider the inter-class differences of the data. Fig. 4 shows a t-SNE [18] separability analysis conducted on the data. t-SNE attempts to find a two-dimensional embedding for the natively high-dimensional hyperspectral datacube. The plot on the left shows the separability from the RGB color space alone. The plot on the right is generated from the hyperspectral datacube.
From these embeddings, there is a clearer decision boundary between the dominant classes in the HSI t-SNE embedding. We also observe more distinct groupings in the HSI embedding than in the RGB data. There is still a significant amount of overlap between the classes in this two-feature representation, which is to be expected given the large number of classes (10) present in these data. The RGB plot contains a significant number of outliers in the clusters, especially in
| Level #1 Label | Level #2 Label | # Segments | # Images |
|---|---|---|---|
| Path | Dirt | 198 | 144 |
| Path | Rock/Gravel | 303 | 213 |
| Path | Paved | 143 | 116 |
| Path | Concrete | 117 | 92 |
| Vegetation | Ground Cover | 806 | 464 |
| Vegetation | Bush/Tree | 795 | 503 |
| Vegetation | Leaves/Mulch | 233 | 158 |
| Obstacle | Vehicle | 92 | 68 |
| Obstacle | Infrastructure | 241 | 181 |
| Obstacle | Road Signage | 127 | 98 |
| Person | - | 15 | 15 |

Table 1: Class Structure Statistics in HYPER DRIVE Dataset
Figure 4: t-SNE two-dimensional embedding visualization of images of selected dataset image from RGB data (left) and HSI data (right).
Figure 3: a) Warthog collection vehicle traversing grass environment with dense surrounding vegetation. b) Ground-truth ATLAS segmentation labels overlayed on high-resolution RGB image. c) Reduced order labels for drivability assessments.
the "dirt" class. These tighter clusters suggest there are natural distinctions amongst the classes that semantic segmentation networks will exploit to generate more accurate classifications.
## 6 Conclusion
In this work, we presented a novel system architecture for collecting and associating snapshot hyperspectral data from a moving vehicle. We release a large dataset of VIS-SWIR images that encompass operating conditions as seen from a mobile robot, labeled according to the ATLAS ontology. The project also contains open-source software to integrate HSI into robotics applications. Future iterations of this system will generate fully normalized datacubes without imaging a white-reference target, by predicting the unobserved white reference from the spectrometers' ambient spectra. We hope the introduction of a ROS framework for hyperspectral data and the dissemination of our dataset will encourage further research on the applicability of HSI to other unstructured environments.
|
2301.04369 | Reproducibility Signals in Science: A preliminary analysis | Reproducibility is an important feature of science; experiments are retested,
and analyses are repeated. Trust in the findings increases when consistent
results are achieved. Despite the importance of reproducibility, significant
work is often involved in these efforts, and some published findings may not be
reproducible due to oversights or errors. In this paper, we examine a myriad of
features in scholarly articles published in computer science conferences and
journals and test how they correlate with reproducibility. We collected data
from three different sources that labeled publications as either reproducible
or irreproducible and employed statistical significance tests to identify
features of those publications that hold clues about reproducibility. We found
the readability of the scholarly article and accessibility of the software
artifacts through hyperlinks to be strong signals noticeable amongst
reproducible scholarly articles. | Akhil Pandey Akella, Hamed Alhoori, David Koop | 2023-01-11T09:28:48Z | http://arxiv.org/abs/2301.04369v1 | # Reproducibility Signals in Science: A preliminary analysis
###### Abstract
Reproducibility is an important feature of science; experiments are retested, and analyses are repeated. Trust in the findings increases when consistent results are achieved. Despite the importance of reproducibility, significant work is often involved in these efforts, and some published findings may not be reproducible due to oversights or errors. In this paper, we examine a myriad of features in scholarly articles published in computer science conferences and journals and test how they correlate with reproducibility. We collected data from three different sources that labeled publications as either reproducible or irreproducible and employed statistical significance tests to identify features of those publications that hold clues about reproducibility. We found the readability of the scholarly article and accessibility of the software artifacts through hyperlinks to be strong signals noticeable amongst reproducible scholarly articles.
## 1 Introduction
Transparency in the scientific process accelerates scientific discovery and strengthens public opinions on scientifically driven matters. Reproducibility plays a crucial role in aiding this transparency, and it is encouraging to have a consensus in the scientific community to address the problem of reproducibility in science. Policymakers, government entities, open source communities, peer-reviewed journals, conferences, and the academic community at large have a shared responsibility to promote reproducible research. Effective dissemination of science cannot happen without trust and integrity in the scientific process. Practically, reproducible science has a first-hand impact in notable places such as research labs, classrooms, industries, and academia. Lack of reproducible research could restrict attaining a deeper understanding of the original researcher's thought process and, therefore, severely impact people involved in the communities mentioned earlier.
The concept of reproducibility is intricate and stratified with different but complementary issues. Before we attempt to understand how to approach the problem of reproducibility, we must first provide some definition of what we mean by this term in this context. Studies such as (Gundersen and Kjensmo, 2018; Cohen et al., 2018; Barba, 2018) highlight how the definition of _reproducibility_ varies across different studies and disciplines and how differing definitions can result in confusion. For that reason, the flexible definition presented in Gundersen and Kjensmo (2018) is appealing: "the ability of an independent research team to produce the same results using the same method based on the documentation made by the original research team." Collective efforts from various players of the research community such as publishers, conference organizers, and journals in promoting good practices for ensuring reproducibility in the experimentation process is refreshing, but there is still a lack of agreement on what exactly constitutes a "good practice" which is a concern.
In this study, we attempt to understand the relationship between the structure of science (Thelwall, 2019) and the concept of reproducibility by using statistical significance tests. In doing so, our emphasis is to examine epistemic opacity (Newman, 2015) of linguistic features and structural features concerning reproducibility. We achieve this by running numerous hypothesis tests and identifying the significant factors affecting the reproducibility of scholarly articles. Our goal is to utilize statistical tests to pick signals that could help identify articles requiring more (or less) effort to reproduce.
## 2 Related Work
Reproducibility is an important concept that affects large communities in general (Mede et al., 2020; Hutson, 2018). The breadth of literature on
reproducibility spanning different disciplines (Open Science Collaboration, 2012; Prinz et al., 2011; Begley and Ellis, 2012; Peers et al., 2012) has broadly focused on either performing large meta-analyses that reproduce a large set of scholarly articles or qualitative studies that encourage researchers to adopt a certain methodology.
Our study falls in line with studies that attempt to quantify the factors important for reproducibility, e.g. (Raff, 2019). Identifying such important factors would also be helpful in building machine learning models that can estimate the degree of reproducibility in scholarly articles (Yang et al., 2020).
## 3 Data
While scientific publications often follow similar structures, there is significant freedom in how ideas are communicated and expressed. This lack of rigidity allows authors to weave stories around fundamental ideas, and the absorption of particular ideas can sometimes be related to how they are presented. We are interested in whether the structure of a publication reveals anything about its potential for (ir-)reproducibility. To examine this, we compiled a collection of scholarly articles that have been evaluated as either reproducible or irreproducible from three different sources. For each article, we gathered comprehensive metadata and extracted structural and linguistic features. These collections of articles include:
* **Brown University**: Collberg et al. (2015) conducted a meta-analysis that involved steps in reproducing scholarly articles published in ACM computer science conferences and journals. They found that nearly 50 percent of the examined scholarly articles required extra effort to reproduce. Computer scientists at Brown University led an effort named "Examining Reproducibility in Computer Science" to crowdsource a reexamination of this study (Krishnamurthi, 2015). They performed a meta-analysis of the original study and offered new insights. The data collected provides significant detail about the effort involved in reproducing the studies in the original publications. The current repository provides results for 207 papers; 142 are classified as reproducible and 65 as non-reproducible.
* **Retraction Watch Database (RetractionDB)**: The Retraction Watch Database stores information about scholarly articles that are retracted from conferences and journals (Oransky and Marcus, 2010). It also logs information about the subject/area to which the scholarly article belongs, the country where the article is published, the name of the publisher, the journal name, and most importantly, the reason why the article was retracted. We used this database to find all the scholarly articles in the field of computer science that were retracted for reasons related to results not being reproducible; 34 papers fit these criteria.
* **Badged ACM Papers**: The Association for Computing Machinery (ACM) has introduced badges as a way to signal when publications have been successfully reproduced. We began with 176 articles that were badged as having results reproduced. Of these, 90 were badged as having Reusable Artifacts, and 70 of those had a Functional Artifact badge. We were able to obtain 64 of the papers that had "Results Reproduced" badges and received both a Reusable Artifact and a Functional Artifact badge.
From each of the three sources, we used the available metadata to locate each article. In some cases, we searched by article and authors' names to obtain a DOI or, in some cases, a URL for an article. If we were unable to unambiguously determine this information, the article was dropped from the dataset. Using the DOI, we were able to obtain further metadata and the full text of the article, usually in PDF format. After filling out the metadata and obtaining the full text, we had 305 papers in total; 206 were classified as reproducible, and 99 were classified as non-reproducible. Data and code will be made available as supplementary information upon publishing.
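A minimal sketch of the DOI-based metadata lookup is given below, assuming the public Crossref REST API; full-text retrieval varies by publisher and is not shown.

```python
# Sketch of resolving a DOI to basic article metadata via the Crossref REST API.
# Illustrative only; full-text retrieval depends on each publisher's access rules.
import requests

def crossref_metadata(doi):
    response = requests.get(f"https://api.crossref.org/works/{doi}", timeout=30)
    response.raise_for_status()
    item = response.json()["message"]
    return {"title": item.get("title", [""])[0],
            "venue": item.get("container-title", [""])[0],
            "year": item.get("issued", {}).get("date-parts", [[None]])[0][0]}
```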
## 4 Methodology
### Feature Engineering
The motivation for considering the below features stems from the shared intuitions highlighted in (Gundersen et al., 2018; Gundersen, 2020; Raff, 2019) along with checklists from popular publishing venues such as NeurIPS, ICML, etc.
1. **Structural features:** Quantitative and qualitative information pertaining to the structure of the scholarly article. This includes information about the existence of particular sections as well as counts of the tables, figures, or algorithms in a given scholarly article. We developed python modules to parse the PDF of the scholarly article in order to extract this information (see the sketch after this list for an approximation). The features along with respective Point Biserial correlations are mentioned in Table 1.
2. **Linguistic features:** Linguistic indicators quantifying different metrics based on the language used in the scholarly article to differentiate the writing styles of various authors. These indicators include Word count, Average word length, Average sentence length, Frequency of words greater than average word length, Syllable count, and Yule's I measure of lexical diversity (Yule, 2014). These features are general to computational linguistics and are easily understandable. Additionally, we considered metrics such as _Complex words_, which refers to the number of polysyllabic words in a given text. This feature was extracted using the python _textblob_ library. _Mean Readability_ was measured by obtaining the mean of readability metrics such as Flesch Reading Ease Level, SMOG Index, Coleman-Liau index, Automated Readability Index, Dale-Chall Readability Score, Linsear Write Formula, and Gunning FOG. We obtained the readability values from _textstat_, a python package. We also collected the _Sentiment_ score for the full text of a given scholarly article and attached a sentiment label (positive = 1, negative = 0) for the respective articles. A similar process was used to obtain the sentiment label for the title of the article.
We gathered this information by implementing python programs that used libraries such as _spaCy_ and _NLTK_ to build the methods for calculating the metrics. All of these linguistic measures were based on the full text of the scholarly article. The features, along with their respective Point Biserial correlations, are mentioned in Table 2; a sketch approximating two of these features follows this list.
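The sketch below approximates two of the features defined above from an article's extracted full text: the hyperlink count (structural) and the mean readability (linguistic). It assumes the text has already been extracted from the PDF; our own parsing modules are not reproduced here.

```python
# Sketch approximating two features from extracted full text: the hyperlink count
# (structural) and the mean of seven readability metrics (linguistic).
import re
import textstat

def hyperlink_count(full_text):
    return len(re.findall(r"https?://\S+", full_text))

def mean_readability(full_text):
    scores = [textstat.flesch_reading_ease(full_text),
              textstat.smog_index(full_text),
              textstat.coleman_liau_index(full_text),
              textstat.automated_readability_index(full_text),
              textstat.dale_chall_readability_score(full_text),
              textstat.linsear_write_formula(full_text),
              textstat.gunning_fog(full_text)]
    return sum(scores) / len(scores)
```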
### Point Biserial Correlation
A preliminary statistical analysis of the dependent and independent variables could be performed using correlations. Since our target is a nominal variable, we could not use _Pearson_ correlation or _Spearman_ correlation as both of them presume the target variable to be continuous. The _point biserial_(Gupta, 1960) correlation matrix measures the correlation between a dichotomous target variable and continuous variables. The results in Table 1 and Table 2 are values obtained by calculating the point biserial correlation coefficient(s) and the associated p-value(s).
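A minimal sketch of this computation with SciPy is shown below; the arrays are hypothetical values for illustration.

```python
# Sketch of the point biserial correlation between the dichotomous reproducibility
# label and a continuous feature. The values below are hypothetical.
import numpy as np
from scipy.stats import pointbiserialr

reproducible = np.array([1, 1, 0, 1, 0, 0, 1, 0])                             # target label
mean_readability = np.array([62.1, 58.4, 40.2, 66.7, 35.9, 44.0, 59.3, 38.5])  # feature
r, p = pointbiserialr(reproducible, mean_readability)
```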
### Significance tests
The features mentioned in Tables 1 and 2 are a combination of ordinal and nominal attributes. In order to determine the significance of the features, we had to employ different statistical significance tests such as the _Mann-Whitney U_ test (Mann and Whitney, 1947) and _Chi-squared_ test (Yates, 1934).
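A minimal sketch of both tests with SciPy follows; all counts and values are hypothetical, for illustration only.

```python
# Sketch of the two significance tests. All values below are hypothetical.
import numpy as np
from scipy.stats import chi2_contingency, mannwhitneyu

# Mann-Whitney U on an ordinal feature (e.g. number of hyperlinks) split by label
links_reproducible = np.array([5, 9, 3, 12, 7, 6])
links_irreproducible = np.array([1, 0, 4, 2, 3, 1])
u_stat, p_u = mannwhitneyu(links_reproducible, links_irreproducible)

# Chi-squared on a nominal feature (e.g. presence of a results section) vs. label
contingency = np.array([[150, 56],   # section present: reproducible, irreproducible
                        [56, 43]])   # section absent (hypothetical counts)
chi2, p_chi, dof, expected = chi2_contingency(contingency)
```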
| Feature | p-value |
|---|---|
| Presence of Introduction Section | 0.0808 |
| Presence of Methodology Section | 0.3112 |
| Presence of Results Section | 0.7006 |
| Number of Pages | 0.1630 |
| Number of Images | 0.3571 |
| Number of Tables | 0.7187 |
| Number of Algorithms | 0.0654 |
| Number of Hyperlinks | 0.0028 |
| Number of Equations | 0.4212 |

Table 1: List of Structural Features and respective Point Biserial Correlations against target variable
\begin{table}
\begin{tabular}{l l}
\hline
**Feature** & **p-value** \\
Word count & 0.5357 \\
Average word length & 0.2379 \\
Frequency of words greater than average word length & \\
Complex words & 0.8394 \\
Syllable count & 0.7467 \\
Yule's I measure of lexical diversity & 0.1102 \\
Mean Readability & 0.0000 \\
Article's sentiment & 0.5659 \\
Title's sentiment & 0.7335 \\
\hline
\end{tabular}
\end{table}
Table 2: List of Linguistic Features and respective Point Biserial Correlations against target variable
## 5 Results
We computed correlations and performed statistical significance tests on the combined data sources to identify features that play a significant role in indicating the reproducibility of scholarly articles. The point biserial correlations shown in Tables 1 and 2 suggest that only **mean readability** and **number of hyperlinks** correlate significantly with reproducibility.
The results of the _Mann-Whitney U_ and _Chi-squared_ tests show **mean readability, number of hyperlinks, number of algorithms, average word length, and Yule's I measure of lexical diversity** to be statistically significant features that signal, with reasonable certainty, scholarly work that is reproducible. More significantly, the readability of a scholarly article and the accessibility of software artifacts, whether as code repositories, pseudo-code, or algorithms, can be considered strong indicators of reproducibility. It is important to note that these signals do not quantify or assure the reproducibility of a scholarly article but rather help identify articles that require more (or less) effort to reproduce.
Our findings were backed by results from statistical experiments, namely Point Biserial correlations, the Chi-squared test, and the Mann-Whitney U test, with p-values (p < 0.05) serving as the basis for the significance of our findings. A copy of the datasets, experiment setup, and additional software artifacts can be obtained from our GitHub repository.1
Footnote 1: [https://github.com/reproducibilityproject/reproducibilitysignals](https://github.com/reproducibilityproject/reproducibilitysignals)
## 6 Discussion
The structure of science involves a well-formed process that begins with factual and valid data, continues through detailed descriptions of experimental procedures, and follows on to clearly presented results. These represent only some of the many tenets of the scientific process, promulgated over the years to allow it to flourish with checks and balances in the form of peer review. Contextually, factors such as discipline, year, type of scientific study, etc., play a major role in identifying the effort required to reproduce articles. Therefore, the dataset we built is an essential factor to consider while interpreting our findings that the readability of the scholarly article and the accessibility of the software artifacts through hyperlinks are significant features among reproducible scholarly articles. Our motivation is to discover additional latent variables that account for these contextual factors while identifying the effort required to reproduce articles.
## 7 Conclusions and Future Work
In this study, we pursued features that can signal reproducible science using correlations and significance tests. We found the readability of the scholarly article and the accessibility of the software artifacts through hyperlinks to be significant features among reproducible scholarly articles. Our code repository with data and experiments will be available post-publication.
In the future, we plan on expanding the scope of our study by 1) gathering more badged data from ACM; 2) testing the validity of our findings against adversarial examples; and 3) observing the effects of citing a reproducible article vs. a non-reproducible one.
\begin{table}
\begin{tabular}{l l}
\hline
**Feature** & **p-value** \\
Presence of Introduction Section & 0.1070 \\
Presence of Methodology Section & 0.3728 \\
Presence of Results Section & 0.8617 \\
Article Sentiment & 0.6646 \\
Title Sentiment & 0.8495 \\
\hline
\end{tabular}
\end{table}
Table 4: Chi-squared Significance test for the categorical features
\begin{table}
\begin{tabular}{l l}
\hline
**Feature** & **p-value** \\
Yule's I measure of lexical diversity & 0.0131 \\
Word count & 0.6547 \\
Average word length & 0.0003 \\
Frequency of words greater than average word length & 0.9171 \\
Syllable count & 0.3910 \\
Complex words & 0.9596 \\
Mean Readability & 0.0001 \\
Number of Images & 0.2039 \\
Number of Tables & 0.9586 \\
Number of Algorithms & 0.0283 \\
Length of the paper & 0.5039 \\
Number of Hyperlinks & 0.0011 \\
Number of Equations & 0.2148 \\
\hline
\end{tabular}
\end{table}
Table 3: Mann-Whitney U Significance test for the numerical features
## 8 Acknowledgement
This work is supported in part by NSF Grant No. 2022443.
|
2310.15525 | Optimization of process parameters in additive manufacturing based on
the finite element method | A design optimization framework for process parameters of additive
manufacturing based on finite element simulation is proposed. The finite
element method uses a coupled thermomechanical model developed for fused
deposition modeling from the authors' previous work. Both gradient-based and
gradient-free optimization methods are proposed. The gradient-based approach,
which solves a PDE-constrained optimization problem, requires sensitivities
computed from the fully discretized finite element model. We show the
derivation of the sensitivities and apply them in a projected gradient descent
algorithm. For the gradient-free approach, we propose two distinct algorithms:
a local search algorithm called the method of local variations and a Bayesian
optimization algorithm using Gaussian processes. To illustrate the
effectiveness and differences of the methods, we provide two-dimensional design
optimization examples using all three proposed algorithms. | Jingyi Wang, Panayiotis Papadopoulos | 2023-10-24T05:08:42Z | http://arxiv.org/abs/2310.15525v1 | # Optimization of process parameters in additive manufacturing based on the finite element method
###### Abstract
A design optimization framework for process parameters of additive manufacturing based on finite element simulation is proposed. The finite element method uses a coupled thermomechanical model developed for fused deposition modeling from the authors' previous work. Both gradient-based and gradient-free optimization methods are proposed. The gradient-based approach, which solves a PDE-constrained optimization problem, requires sensitivities computed from the fully discretized finite element model. We show the derivation of the sensitivities and apply them in a projected gradient descent algorithm. For the gradient-free approach, we propose two distinct algorithms: a local search algorithm called the method of local variations and a Bayesian optimization algorithm using Gaussian processes. To illustrate the effectiveness and differences of the methods, we provide two-dimensional design optimization examples using all three proposed algorithms.
**Keywords:** Additive manufacturing; sensitivity; optimization; Bayesian optimization
## 1 Introduction
Additive manufacturing (AM) has enjoyed substantial success in creating parts with complex geometries, shortening the product design time and lowering costs [1, 2, 3, 4, 5, 6]. Many additive manufacturing technologies have been developed in recent years, including fused deposition modeling (FDM) and selective laser sintering. AM is adopted extensively in prototyping and is increasingly deployed in the industrial production of parts for the medical, aerospace, and automotive industries [7, 8]. The computer simulation of AM processes is a topic of great interest across industry and academia [9, 10, 11, 12, 13, 14, 15], with the finite element method being one of the primary numerical tools used.
FDM simulation models typically solve the transient heat transfer problem during the deposition, followed by the mechanics problem based on the temperature history of a material with temperature-dependent constitutive response [10, 11, 16, 17, 18, 19, 20]. The process of deposition can be emulated using an "active/inactive" element addition approach [16, 21, 22, 23, 24, 20]. Heat transfer through conduction, convection, and radiation is generally considered in simulations, and the dependence of the thermal conductivity and heat capacity on temperature can be explicitly accounted for [25]. Previously, the authors proposed a fully coupled thermomechanical model for FDM [26], where the displacement and temperature fields are solved simultaneously and the deposition of new material is enabled through the creation of new elements. By accounting for the existing displacement at each time step, the model can predict quantities of interest such as maximum deformation, shape error, _etc_.
While AM simulation has progressed steadily, challenges remain in developing optimization tools for it. The optimization of process parameters of AM to achieve various design goals, such as better surface finish quality, lower residual stress, and less shape error for assembly, is of great interest and importance [27, 28, 29, 30, 17, 19]. The process parameters to consider include chamber temperature, part orientation, printing speed, filament diameter, nozzle size and layer thickness [31, 32, 33, 34, 35]. The existing optimization tools for additive manufacturing are mostly simple, data-driven models using experimental data [1, 36, 37, 38, 39, 40]. Commonly used experiments include tensile and compression tests of the printed samples [8, 41, 42, 43, 44, 45], while dynamic mechanical tests are also performed in [46, 47]. Some of the techniques used for optimization are the Taguchi method [48, 49, 31, 50], particle swarm optimization [51] and artificial neural networks (NN) [52].
In addition to experiments, an accurate finite element simulation is another natural candidate for optimization due to its flexibility and efficiency, and has been adopted by some [53]. Process mapping of the parameters can be developed using finite element simulation and subsequently applied to optimization [54, 55, 56, 57]. The finite element model used is often an uncoupled thermomechanical one with a fixed Lagrangian mesh [58, 59], which complicates the sensitivity calculations needed by gradient-based optimization algorithms. The optimization variables of these studies include print speed, extrusion temperature and layer thickness [60, 61]. A purely geometric model is used in [62].
Bayesian optimization has been applied to many engineering design problems such as inverse problems [63], structural design [64] and robotics [65] with advanced technologies such as multi-fidelity surrogate models and independent constraints [66, 67]. Thus far, it has been applied to a limited extent in the optimization of AM problems [68], particularly for process parameters. Most of the approaches taken are experiment-driven or rely on geometric models, _e.g._, part orientation optimization of AM in [69]. In [70, 71, 72], machine learning and Bayesian optimization are used for the optimization of lattice structures for metamaterials, where the stiffness of the metamaterial is computed through finite element software. A Bayesian optimization approach is adopted for the metal AM melt pool geometry optimization in [73]. A data-driven Bayesian optimization method using finite element software to generate sample points is proposed in [74]. While [75] argued for a conceptual framework to integrate experiments, finite element analysis, and machine learning for the optimization of AM processes, the authors believe it remains a work in progress.
In most of the existing optimization methods discussed above, the simulation and optimization are largely separated. Additionally, process parameters, as well as defects and uncertainty caused by the printing itself, are often ignored. Further, the optimization algorithms applied are often not systematically established and thus rely on exhaustive search. In this paper, we propose two approaches to contribute to the optimization workflow of AM process parameters, each with its own advantages. The first approach is a gradient-based optimization method, where we optimize the design objective constrained by partial differential equations (PDEs). A coupled finite element simulation model for FDM developed by the authors [26] is used, whose fully discretized form is parameterized by the optimization variables, _i.e._, the process parameters. Then, we apply gradient-based optimization techniques to solve the optimization problem using the fully discretized sensitivities computed from the finite element model. The second approach is gradient-free and more suitable for problems where sensitivities are not easily available. We propose two algorithms that are differentiated by whether a surrogate model is used to approximate the objective, at least locally. The first algorithm, the method of local variations, advances through local function evaluations and step-size updates. The second algorithm is a Bayesian optimization one that uses a Gaussian process as its surrogate model, which is updated with new simulation data.
The organization of this paper is as follows. Section 2 contains a summary of the coupled thermomechanical finite element model. The optimization formulation is given in Section 3. In Section 4, the gradient-based optimization algorithm is described, with derivations of sensitivities. In Section 5, we propose our gradient-free optimization algorithms and discuss algorithmic parameter choices. Numerical examples of the optimization methods are presented in Section 6. Finally, conclusions are included in Section 7.
## 2 Review of continuum theory and finite element modeling
### Continuum theory
In this paper, it is assumed that the printed body can be adequately modeled as an isotropic and homogeneous thermoelastic continuum undergoing infinitesimal deformation relative to its evolving reference configuration in the presence of large changes in its temperature. The material comprising the body is locally endowed with a Helmholtz free energy function \(\psi=\bar{\psi}(\boldsymbol{\epsilon},\theta)\) per unit volume, where \(\boldsymbol{\epsilon}\) and \(\theta\) are the infinitesimal strain tensor and temperature, respectively.
The local form of linear momentum balance may be expressed as
\[\rho_{0}\boldsymbol{a}\ =\ \boldsymbol{\nabla}\!\cdot\!\boldsymbol{\sigma}+\rho_{0 }\boldsymbol{b}\, \tag{1}\]
where \(\rho_{0}\) is the mass density per unit referential volume, \(\boldsymbol{a}\) is the acceleration, \(\boldsymbol{\sigma}\) is the stress tensor of the infinitesimal theory, and \(\boldsymbol{b}\) is the body force per unit mass, while "\(\boldsymbol{\nabla}\)" denotes the divergence operator relative to the referential coordinates. The balance of angular momentum ensures that the stress tensor \(\boldsymbol{\sigma}\) is symmetric. Moreover, the balance of energy takes the form
\[\rho_{0}\dot{e}\ =\ \rho_{0}r-\boldsymbol{\nabla}\!\cdot\!\boldsymbol{q}_{0}+ \boldsymbol{\sigma}\!\cdot\!\dot{\boldsymbol{\epsilon}}\, \tag{2}\]
in terms of the internal energy per unit mass \(e\), the heat supply per unit mass \(r\), and the referential heat flux vector \(\boldsymbol{q}_{0}=\bar{\boldsymbol{q}}_{0}\left(\boldsymbol{\epsilon},\theta, \nabla\theta\right)\).
Upon invoking the Clausius-Duhem inequality, a standard procedure leads to the relations
\[\boldsymbol{\sigma}\ =\ \frac{\partial\bar{\psi}\left(\boldsymbol{\epsilon},\theta \right)}{\partial\boldsymbol{\epsilon}}\quad,\quad\boldsymbol{q}_{0}\cdot \nabla\theta\ \leqslant\ 0\, \tag{3}\]
as well as to the reformulation of the energy equation (2) as
\[c\ \dot{\theta}\ =\ -\boldsymbol{\nabla}\!\cdot\!\boldsymbol{q}_{0}+\rho_{0}r+ \theta\boldsymbol{M}\!\cdot\!\dot{\boldsymbol{\epsilon}}\, \tag{4}\]
where \(c=-\theta\frac{\partial^{2}\bar{\psi}(\boldsymbol{\epsilon},\theta)}{\partial \theta^{2}}\) is the heat capacity and \(\boldsymbol{M}=\frac{\partial^{2}\bar{\psi}(\boldsymbol{\epsilon},\theta)}{ \partial\boldsymbol{\epsilon}\partial\theta}\) is the stress-temperature modulus, see, _e.g._, [26] for a full derivation.
A Helmholtz free-energy function that can adequately represent materials under infinitesimal deformation and a finite temperature range [76] is chosen according to
\[\bar{\psi}(\boldsymbol{\epsilon},\theta)\ =\ \frac{1}{2}\boldsymbol{\epsilon} \!\cdot\!\mathbb{C}(\theta)\boldsymbol{\epsilon}-\kappa(\theta)\ln(1+\text{tr }\,\boldsymbol{\epsilon})\alpha(\theta-\theta_{0})+\bar{c}\left(\theta- \theta_{0}-\theta\ln\frac{\theta}{\theta_{0}}\right)\, \tag{5}\]
where \(\alpha\) is the (constant) coefficient of thermal expansion, \(\theta_{0}\) is the reference temperature, and \(\bar{c}\) is the specific heat at the reference temperature. Also, the temperature-dependent isotropic elastic modulus in (5) is expressed as \(\mathbb{C}(\theta)=\tilde{f}\left(\theta\right)\mathbb{C}_{0,m}\), where \(\tilde{f}(\cdot)\) is a smooth and positive function of temperature and \(\mathbb{C}_{0,m}\) is the elastic modulus at a given temperature \(\theta_{0,m}\), not necessarily equal to the reference temperature \(\theta_{0}\). Likewise, the temperature-dependent bulk modulus in (5) is approximated as \(\kappa(\theta)=\tilde{f}\left(\theta\right)\kappa_{0,m}\), with \(\kappa_{0,m}\) being the bulk modulus at \(\theta_{0,m}\). The temperature function is defined here as
\[\tilde{f}(\theta)\ =\ b\left(\frac{\theta}{\theta_{0,m}}\right)^{a}+b\left(a-1 \right)+\left(1-ab\right)\frac{\theta}{\theta_{0,m}}\, \tag{6}\]
where \(a\) and \(b\) are material constants and \(\tilde{f}(\theta_{0,m})=1\).
Consistent with the restriction in (3)\({}_{2}\), the heat flux follows the isotropic form of Fourier's law, according to which
\[\boldsymbol{q}_{0}\ =\ -k\nabla\theta\, \tag{7}\]
where \(k\) is the (constant) thermal conductivity coefficient. Also, heat convection is included in the model, such that the flux of heat through the exterior boundary is given by
\[\bar{q}\ =\ h\left(\theta_{\infty}-\theta\right)\, \tag{8}\]
where \(h\) is the (constant) convection coefficient and \(\theta_{\infty}\) is the ambient temperature.
In view of the constitutive equation in (5) and upon taking into account (3)\({}_{1}\) and the definitions of the heat capacity \(c\) and the stress-temperature modulus \(\mathbf{M}\), it follows that
\[\mathbf{\sigma} = \tilde{f}\left(\theta\right)\left[\mathbb{C}_{0,m}\mathbf{\epsilon}- \kappa_{0,m}\alpha\frac{\theta-\theta_{0}}{1+\mathrm{tr}\,\mathbf{\epsilon}}\mathbf{I} \right]\,\] \[\mathbf{M} = \frac{\partial\tilde{f}(\theta)}{\partial\theta}\mathbb{C}_{0,m} \mathbf{\epsilon}-\left[\frac{\partial\tilde{f}(\theta)}{\partial\theta}(\theta- \theta_{0})+\tilde{f}(\theta)\right]\kappa_{0,m}\alpha\frac{1}{1+\mathrm{tr} \,\mathbf{\epsilon}}\mathbf{I}\, \tag{9}\] \[c = -\theta\frac{\partial^{2}\tilde{f}(\theta)}{\partial\theta^{2}} \frac{1}{2}\mathbf{\epsilon}\!\cdot\!\mathbb{C}_{0,m}\mathbf{\epsilon}+\left[\theta \frac{\partial^{2}\tilde{f}(\theta)}{\partial\theta^{2}}(\theta-\theta_{0})+ \theta\frac{\partial\tilde{f}(\theta)}{\partial\theta}\right]\kappa_{0,m} \alpha\ln(1+\mathrm{tr}\,\mathbf{\epsilon})+\bar{c}\,\]
where \(\mathbf{I}\) is the second-order identity tensor.
### Finite element modeling
Let the printed body occupy the region \(\Omega_{0}\) in a reference configuration. The referential boundary \(\partial\Omega_{0}\) of the body is decomposed into Dirichlet parts \(\Gamma^{u}_{D,0}\), \(\Gamma^{t}_{D,0}\) and Neumann parts \(\Gamma^{u}_{N,0}\), \(\Gamma^{t}_{N,0}\) for displacement and temperature, respectively, such that \(\overline{\Gamma^{u}_{D,0}\cup\Gamma^{u}_{N,0}}=\overline{\Gamma^{t}_{D,0}\cup \Gamma^{t}_{N,0}}=\partial\Omega_{0}\). The weak forms of the balance laws in (1) and (4) are expressed respectively as
\[\begin{split}\int_{\Omega_{0}}\mathbf{\xi}\cdot\rho_{0}\mathbf{a}\,dV+ \int_{\Omega_{0}}\mathbf{\nabla}^{s}\mathbf{\xi}\cdot\mathbf{\sigma}\,dV&= \ \int_{\Gamma^{u}_{N,0}}\mathbf{\xi}\cdot\bar{\mathbf{p}}\,dA+\int_{\Omega_{0}}\mathbf{\xi} \cdot\rho_{0}\mathbf{b}\,dV\,\\ \int_{\Omega_{0}}\zeta c\dot{\theta}\,dV-\int_{\Omega_{0}}\nabla \zeta\cdot\mathbf{q}_{0}\,dV&=\ \int_{\Omega_{0}}\zeta\rho_{0}r\,dV+\int_{\Omega_{0}}\zeta\theta\mathbf{M}\! \cdot\!\dot{\mathbf{\epsilon}}\,dV-\int_{\Gamma^{t}_{N,0}}\zeta\bar{q}\,dA\,\end{split} \tag{10}\]
where \(\mathbf{\nabla}^{s}\) denotes the symmetric part of the referential gradient, while \(\bar{\mathbf{p}}\) and \(\bar{q}\) are the imposed traction and flux boundary conditions on \(\Gamma^{u}_{N,0}\) and \(\Gamma^{t}_{N,0}\), respectively. The weak forms are expressed in terms of arbitrary and sufficiently smooth weighting functions \(\mathbf{\xi}\) for the linear momentum balance and \(\zeta\) for the energy balance, each satisfying the corresponding homogeneous Dirichlet boundary conditions. The infinitesimal strain tensor is related to the displacement field \(\mathbf{u}\) according to \(\mathbf{\epsilon}=\mathbf{\nabla}^{s}\mathbf{u}\).
Taking into account the arbitrariness of the weighting functions and introducing standard displacement-type finite element piecewise approximations for the dependent variables, their derivatives, and the corresponding weighting functions leads to elemental equations written in matrix form as
\[\begin{split}\left[\mathbf{M}^{e}_{u}\right]\left[\hat{\mathbf{a}}^{e}_{n }\right]+\left[\mathbf{R}^{e}_{u,n}\right]&=\ \left[\mathbf{F}^{e}_{n}\right]+\int_{ \partial\Omega^{e}_{0}\setminus\Gamma^{u}_{N,0}}\left[\mathbf{N}^{e}_{u}\right] \left[\mathbf{p}^{e}_{n}\right]\,dA\,\\ \left[\mathbf{T}^{e}\right]\left[\hat{\mathbf{\theta}}^{e}_{n}\right]+ \left[\mathbf{M}^{e}_{t}\right]\left[\hat{\mathbf{\theta}}^{e}_{n}\right]+\left[\mathbf{R} ^{e}_{t,n}\right]&=\ \left[\mathbf{Q}^{e}_{n}\right]-\int_{ \partial\Omega^{e}_{0}\setminus\Gamma^{t}_{N,0}}\left[\mathbf{N}^{e}_{t}\right]^{T }\left[\mathbf{q}^{e}_{n}\right]\,dA\.\end{split} \tag{11}\]
Here, the subscript "\(n\)" identifies an algebraic quantity as estimated at time \(t_{n}\), while the superscript "\(e\)" refers to the element \(e\). In addition, the overset symbol in \(\hat{(\cdot)}\) implies that the term under it comprises the set of nodal values associated with the quantity \((\cdot)\). All the matrices used in (11) are defined in the appendix. Upon neglecting, as is customary, the interelement flux terms, the preceding elemental equations give rise to the global system, expressed as
\[\begin{split}[\mathbf{M}_{u}]\,[\hat{\mathbf{a}}_{n}]+[\mathbf{R}_{u,n}]-[\bm {F}_{n}]&=\ [\mathbf{0}]\,\\ [\mathbf{T}]\,\Big{[}\hat{\hat{\mathbf{\theta}}}_{n}\Big{]}+[\mathbf{M}_{t}] \,\Big{[}\hat{\mathbf{\theta}}_{n}\Big{]}+[\mathbf{R}_{t,n}]-[\mathbf{Q}_{n}]& =\ [\mathbf{0}]\,\end{split} \tag{12}\]
where any elemental quantity \([(\cdot)^{e}]\) is assembled into its global counterpart \([(\cdot)]\) by means of a standard assembly operation.
The temporal discretization of the displacement vector \([\hat{\mathbf{u}}]\) and the velocity vector \([\hat{\mathbf{v}}]\) is effected by the standard trapezoidal rule, while the backward Euler rule is employed for the temperature vector \([\hat{\mathbf{\theta}}]\). Therefore, given a time interval \((t_{n-1},t_{n}]\) of size \(\Delta t_{n-1}=t_{n}-t_{n-1}\),
\[\begin{split}[\hat{\mathbf{u}}_{n}]&=\ [\hat{\mathbf{u}}_{n-1}]+[\hat{\mathbf{v}}_{n-1}]\,\Delta t_{n-1}+\frac{1}{4}\left\{[ \hat{\mathbf{a}}_{n-1}]+[\hat{\mathbf{a}}_{n}]\right\}\Delta t_{n-1}^{2}\,\\ [\hat{\mathbf{v}}_{n}]&=\ [\hat{\mathbf{v}}_{n-1}]+\frac{1}{2} \left\{[\hat{\mathbf{a}}_{n-1}]+[\hat{\mathbf{a}}_{n}]\right\}\Delta t_{n-1}\,\\ \Big{[}\hat{\mathbf{\theta}}_{n}\Big{]}&=\ \Big{[}\hat{\mathbf{\theta}}_{n-1}\Big{]}+ \Big{[}\hat{\hat{\mathbf{\theta}}}_{n}\Big{]}\,\Delta t_{n-1}\.\end{split} \tag{13}\]
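For concreteness, the single-step updates in (13) may be written as the following small Python helper; the array names are placeholders for the assembled global vectors.

```
# Sketch of the updates in (13): trapezoidal rule for displacement and
# velocity, backward Euler for temperature.
import numpy as np

def advance_state(u, v, a_old, a_new, theta, theta_dot_new, dt):
    u_new = u + v * dt + 0.25 * (a_old + a_new) * dt**2
    v_new = v + 0.5 * (a_old + a_new) * dt
    theta_new = theta + theta_dot_new * dt
    return u_new, v_new, theta_new
```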
Taking into account the constitutive laws (7-9) and the time-stepping rules in (13), the system of linear algebraic equations at the \(k\)-th Newton-Raphson iteration in the time interval \((t_{n-1},t_{n}]\) can be written in matrix form as
\[\begin{split}[\mathbf{M}_{u}]\,\Big{[}\hat{\mathbf{a}}_{n}^{(k)}\Big{]}+ \Big{[}\mathbf{R}_{u,n}^{(k)}\Big{]}-[\mathbf{F}_{n}]\\ +\Big{[}\mathbf{K}_{u,n}^{(k)}\Big{]}\,\Big{[}\Delta\hat{\mathbf{u}}_{n}^ {(k)}\Big{]}+\Big{[}\mathbf{K}_{t,n}^{(k)}\Big{]}\,\Big{[}\Delta\hat{\mathbf{\theta}}_ {n}^{(k)}\Big{]}\ =\ [\mathbf{0}]\,\\ [\mathbf{T}]\,\Big{[}\hat{\hat{\mathbf{\theta}}}_{n}^{(k)}\Big{]}+[\mathbf{M}_ {t}]\,\Big{[}\hat{\hat{\mathbf{\theta}}}_{n}^{(k)}\Big{]}+\Big{[}\mathbf{R}_{t,n}^{(k) }\Big{]}-[\mathbf{Q}_{n}]\\ +\Big{[}\mathbf{A}_{u,n}^{(k)}\Big{]}\,\Big{[}\Delta\hat{\mathbf{u}}_{n}^ {(k)}\Big{]}+\Big{[}\mathbf{A}_{t,n}^{(k)}\Big{]}\,\Big{[}\Delta\hat{\mathbf{\theta}}_ {n}^{(k)}\Big{]}\ =\ [\mathbf{0}]\,\end{split} \tag{14}\]
where \(\Delta(\cdot)_{n}\) is the change of the variable \((\cdot)\) in \((t_{n-1},t_{n}]\). Expressions for the tangent matrices \(\Big{[}\mathbf{K}_{u,n}^{(k)}\Big{]}\), \(\Big{[}\mathbf{K}_{t,n}^{(k)}\Big{]}\), \(\Big{[}\mathbf{A}_{u,n}^{(k)}\Big{]}\), and \(\Big{[}\mathbf{A}_{t,n}^{(k)}\Big{]}\) in (14) are provided in the appendix. The global displacement and temperature vectors are updated according to
\[[\hat{\mathbf{u}}_{n}^{(k+1)}]\ =\ [\hat{\mathbf{u}}_{n}^{(k)}]+[\Delta\hat{\mathbf{u}}_{n}^ {(k)}]\quad,\quad[\hat{\mathbf{\theta}}_{n}^{(k+1)}]\ =\ [\hat{\mathbf{\theta}}_{n}^{(k)}]+[\Delta\hat{\mathbf{\theta}}_{n}^ {(k)}]\, \tag{15}\]
until convergence is attained according to a predefined stopping criterion with a prescribed tolerance.
To accommodate new nodes and elements generated when material is deposited on the boundary of the existing body, each node is assigned an initial displacement and temperature. These "history" variables ensure zero initial stress and strain. To this end, the displacement history values are subtracted from the nodal displacement values when computing the strain and stress at a typical integration point. Likewise, the temperature history value represents the reference temperature \(\theta_{0}\) in (9) of an element when it is first created, computed as the average of the nodal temperature of the element.
For printed objects that have curved boundaries, our simulation relies on a slicing algorithm to determine how the curves are approximated and printed. The choice of the slicing algorithm is not a focus of this study; the one used in our finite element model can be found in [26]. In general, however, curved boundaries require the use of partial elements that involve hanging nodes. The reference coordinates of the hanging nodes are determined by the slicing algorithm, while their displacement and temperature are solved together with the rest of the displacement and temperature fields. To do so, additional conditions need to be imposed if a hanging node lies on an existing edge. When such a node is created, a ratio of its coordinates to those of the two nodes that define this existing edge can be computed. This ratio is then maintained when computing the displacement and temperature of the hanging node, so that it remains at the same relative position on the edge throughout the simulation. Full details on the implementation of history variables and hanging nodes can be found in [26].
## 3 Optimization problem formulation
Let \(f(\mathbf{u},\mathbf{\theta};\mathbf{y},t)\) be the objective function of the design optimization problem, where \(\mathbf{y}\in\mathbb{R}^{d}\) represents the vector of \(d\) design variables, _i.e._, process parameters such as chamber temperature and layer thickness. The continuum-level optimization problem can be expressed as
\[\begin{split}\underset{\mathbf{y}\in\mathbb{R}^{d}}{\text{minimize}}& f\left(\mathbf{u},\mathbf{\theta};\mathbf{y},t\right)\,\\ \text{subject to}&\mathbf{r}\left(\mathbf{u},\mathbf{\theta};\mathbf{y},t\right)\ =\ \mathbf{0}\,\quad c_{i}(\mathbf{y})\ \leqslant\ 0\,\ i=1,\ldots,m\.\end{split} \tag{16}\]
Here, \(\mathbf{r}\) comprises the balance laws in (10) (before the finite element approximation), while \(c_{i}(\cdot)\), \(i=1,\ldots,m\), are inequality constraint functions on the design variables.
In this paper, the objective function is chosen to express the deviation of the part's actual shape from the desired shape. Such deviation, or shape error, may pose significant challenges in AM [77]. In the two-dimensional case, given a fixed reference configuration, shape error may be defined as
\[f\left(\mathbf{u},\mathbf{\theta};\mathbf{y},t\right)\ =\ \frac{\left[\int_{\Gamma_{S}^{t}} \left(d\left(\mathbf{X},\mathbf{u}\right)-\bar{d}(\mathbf{X})\right)^{2}dl\right]^{\frac{1} {2}}}{L_{c}\left(\int_{\Gamma_{S}^{t}}\ dl\right)^{\frac{1}{2}}}\, \tag{17}\]
where \(\mathbf{X}\) represent the placement of the boundary points on the (necessarily time-dependent) surface of interest \(\Gamma_{S}^{t}\) in the designed geometry under no deformation. Also, \(d\left(\mathbf{X},\mathbf{u}\right)-\bar{d}(\mathbf{X})\) is a problem-dependent scalar measure of the difference of the printed position \(d\) from the designed position \(\bar{d}\) on \(\Gamma_{S}^{t}\), while \(L_{c}\) is a representative length of the part intended to render \(f\) dimensionless. As seen from (17), the objective function depends explicitly on \(\mathbf{u}\) alone. Upon finite element discretization, the objective function \(f\) is measured on one-dimensional element edges and can be evaluated numerically using Gaussian quadrature. The discrete counterpart \(f_{h}\) of the objective function for linear elements is given by
\[f_{h}(\hat{\mathbf{u}},\hat{\mathbf{\theta}};\mathbf{y},t)\ =\ \frac{\left[\sum_{i=1}^{p} \sum_{j=1}^{l}\left(d\left(\mathbf{X}_{j},\mathbf{u}_{j}\right)-\bar{d}(\mathbf{X}_{j}) \right)^{2}w_{j}\frac{\Delta l_{i}}{2}\right]^{\frac{1}{2}}}{L_{c}\left(\sum_ {i=1}^{p}\Delta l_{i}\right)^{\frac{1}{2}}}\, \tag{18}\]
where \(p\) is the number of discretized edges of interest, \(\Delta l_{i}\) is the length of the \(i\)-th edge, \(l\) is the number of quadrature points per edge, and \(w_{j}\) is the weight of the \(j\)-th quadrature point. The values of \(\mathbf{X}_{j},\mathbf{u}_{j}\) at quadrature points are evaluated using \(\mathbf{X}\) and the nodal displacement \(\hat{\mathbf{u}}\). The precise definition of the position deviation \(d(\mathbf{u})-\bar{d}\) is specified in Section 6 for each example. Implicit in the preceding definitions of \(f\) and \(f_{h}\) is that \(d\) induces a bijection between the designed and the printed surfaces, regardless of the value of the displacement \(\mathbf{u}\); see Figure 1.
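As an illustration, the following is a minimal Python sketch of the discrete objective (18) on linear element edges with 3-point Gaussian quadrature; the linear interpolation of the nodal deviations and all variable names are illustrative assumptions.

```
# Sketch of (18): `dev` holds the nodal values of d(X, u) - dbar(X)
# at the two endpoints of each edge of interest.
import numpy as np

GP = np.array([-np.sqrt(3.0 / 5.0), 0.0, np.sqrt(3.0 / 5.0)])  # xi in [-1, 1]
W = np.array([5.0 / 9.0, 8.0 / 9.0, 5.0 / 9.0])                # Gauss weights

def shape_error(dev, edge_lengths, L_c):
    """dev: (p, 2) nodal deviations per edge; edge_lengths: (p,)."""
    num = 0.0
    for (d0, d1), dl in zip(dev, edge_lengths):
        # Linear interpolation of the deviation at each quadrature point.
        d_gp = 0.5 * (1.0 - GP) * d0 + 0.5 * (1.0 + GP) * d1
        num += np.sum(d_gp**2 * W) * dl / 2.0
    return np.sqrt(num) / (L_c * np.sqrt(np.sum(edge_lengths)))
```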
Depending on the nature of \(\mathbf{y}\), the optimization problem (16) may require the use of substantially different algorithms. For example, some process parameters, such as chamber temperature, vary continuously. Hence, the objective function \(f_{h}\) may be considered implicitly differentiable in them. This would enable the application of well-established gradient-based optimization algorithms. On the other hand, parameters such as layer thickness, which is mandated by the (integer) number of layers for a given specimen size, are discrete and, therefore, non-differentiable. For optimization in such variables, a gradient-free algorithm would be appropriate.
## 4 Gradient-descent method with line search
For gradient-based optimization methods, the derivatives of the discretized objective \(f_{h}\) from (18) with respect to design variables \(\mathbf{y}\) must be known or estimated. Let \(N\) be the total number of discrete time steps in the analysis. For simplicity, denote the combined system of fully discretized PDEs at time \(t_{n},1\leq n\leq N\), as \(\mathbf{r}_{n}^{h}\), where the superscript \(h\) is included to emphasize the discrete vector form of the variables. It follows that the discretized form of the PDE constraint in (16) can be expressed as
\[\mathbf{r}_{n}^{h}\left(\hat{\mathbf{u}}_{n}^{h},\hat{\mathbf{u}}_{n-1}^{h},\hat{\mathbf{v}}_{n -1}^{h},\hat{\mathbf{\theta}}_{n}^{h},\hat{\mathbf{\theta}}_{n-1}^{h},\hat{\mathbf{u}}_{n }^{his},\hat{\mathbf{\theta}}_{n}^{his};\mathbf{y},t_{n}\right)\ =\ \mathbf{0}\, \tag{19}\]
in view of the single-step time integration rules in (15). As noted in Section 2, the history variables \(\mathbf{u}_{n}^{his},\mathbf{\theta}_{n}^{his}\) represent the displacement and temperature fields at each new node and element when they are added at their respective time step. Thus, they are also discrete in form and full details on their precise definition may be found in [26]. In addition, it is assumed that any inequality constraints \(c_{i}(\cdot)\) on \(\mathbf{y}\) can be included by requiring that \(\mathbf{y}\in\mathfrak{C}\), where \(\mathfrak{C}\subset\mathbb{R}^{d}\) is a suitably defined convex and compact set.
The solver-consistent discretized form of (16) is now expressed as
\[\begin{array}{ll}\underset{\mathbf{y}\in\mathfrak{C}}{\text{minimize}}&f_{h} \left(\hat{\mathbf{u}}_{N}^{h},\hat{\mathbf{\theta}}_{N}^{h};\mathbf{y},t_{N}\right)\\ \text{subject to}&\mathbf{r}_{n}^{h}\left(\hat{\mathbf{u}}_{n}^{h},\hat{\mathbf{u}}_{n-1}^{ h},\hat{\mathbf{v}}_{n-1}^{h},\hat{\mathbf{\theta}}_{n}^{h},\hat{\mathbf{\theta}}_{n-1}^{h}, \hat{\mathbf{u}}_{n}^{his},\hat{\mathbf{\theta}}_{n}^{his};\mathbf{y},t_{n}\right)\ =\ \mathbf{0}\quad,\quad n=1,2, \ldots,N\.\end{array} \tag{20}\]
The preceding discrete optimization problem reflects the fact that only the end-state is of consequence in the design.
Figure 1: Schematic of the distance functions \(d\) and \(\bar{d}\)

Consider the reduced-space formulation, in which \(\mathbf{u}_{N}^{h}\) and \(\mathbf{\theta}_{N}^{h}\) are regarded as implicit functions of \(\mathbf{y}\) at time \(t_{N}\), and also let \(N\) be independent of \(\mathbf{y}\). Then, by the chain rule, the gradient of \(f_{h}\) with respect to \(\mathbf{y}\) is
\[\frac{df_{h}}{d\mathbf{y}}\ =\ \frac{\partial f_{h}}{\partial\hat{\mathbf{u}}_{N}^{h}} \frac{\partial\hat{\mathbf{u}}_{N}^{h}}{\partial\mathbf{y}}+\frac{\partial f_{h}}{ \partial\hat{\mathbf{\theta}}_{N}^{h}}\frac{\partial\hat{\mathbf{\theta}}_{N}^{h}}{ \partial\mathbf{y}}+\frac{\partial f_{h}}{\partial\mathbf{y}}. \tag{21}\]
Each term on the right-hand-side of (21) is derived next. First, note that, owing to the explicit dependence of \(f_{h}\) on \(\mathbf{y}\), \(\hat{\mathbf{u}}_{N}^{h}\), and \(\hat{\mathbf{\theta}}_{N}^{h}\), the terms \(\frac{\partial f_{h}}{\partial\mathbf{y}}\), \(\frac{\partial f_{h}}{\partial\hat{\mathbf{u}}_{N}^{h}}\), and \(\frac{\partial f_{h}}{\partial\hat{\mathbf{\theta}}_{N}^{h}}\) can be computed directly by differentiating the function \(f_{h}\). Of course, given the specific form of \(f_{h}\) in (18), the only non-zero such term here is \(\frac{\partial f_{h}}{\partial\hat{\mathbf{u}}_{N}^{h}}\). On the other hand, the state sensitivities \(\frac{\partial\hat{\mathbf{u}}_{N}^{h}}{\partial\mathbf{y}}\) and \(\frac{\partial\hat{\mathbf{\theta}}_{N}^{h}}{\partial\mathbf{y}}\) at \(t_{N}\) are dependent on those of the previous time steps, that is, \(\frac{\partial\hat{\mathbf{u}}_{n}^{h}}{\partial\mathbf{y}}\) and \(\frac{\partial\hat{\mathbf{\theta}}_{n}^{h}}{\partial\mathbf{y}}\), for \(n=1,\ldots,N-1\). Due to the small total number of optimization variables, a pure primal approach is adopted here to obtain the sensitivities. To this end, upon differentiation of both sides of Equation (19) with respect to \(\mathbf{y}\), it follows that
\[\frac{\partial\mathbf{r}_{n}^{h}}{\partial\hat{\mathbf{u}}_{n}^{h}}\frac {\partial\hat{\mathbf{u}}_{n}^{h}}{\partial\mathbf{y}}+\frac{\partial\mathbf{r}_{n}^{h}}{ \partial\hat{\mathbf{u}}_{n-1}^{h}}\frac{\partial\hat{\mathbf{u}}_{n-1}^{h}}{\partial \mathbf{y}}+\frac{\partial\mathbf{r}_{n}^{h}}{\partial\hat{\mathbf{v}}_{n-1}^{h}}\frac{ \partial\hat{\mathbf{v}}_{n-1}^{h}}{\partial\mathbf{y}}+\frac{\partial\mathbf{r}_{n}^{h}} {\partial\hat{\mathbf{\theta}}_{n}^{h}}\frac{\partial\hat{\mathbf{\theta}}_{n}^{h}}{ \partial\mathbf{y}}+\frac{\partial\mathbf{r}_{n}^{h}}{\partial\hat{\mathbf{\theta}}_{n-1} ^{h}}\frac{\partial\hat{\mathbf{\theta}}_{n-1}^{h}}{\partial\mathbf{y}}\\ +\frac{\partial\mathbf{r}_{n}^{h}}{\partial\hat{\mathbf{u}}_{n}^{his}} \frac{\partial\hat{\mathbf{u}}_{n}^{his}}{\partial\mathbf{y}}+\frac{\partial\mathbf{r}_{n} ^{h}}{\partial\hat{\mathbf{\theta}}_{n}^{his}}\frac{\partial\hat{\mathbf{\theta}}_{n}^ {his}}{\partial\mathbf{y}}+\frac{\partial\mathbf{r}_{n}^{h}}{\partial\mathbf{y}}\ =\ \mathbf{0}. \tag{22}\]
The state sensitivities \(\frac{\partial\hat{\mathbf{u}}_{n}^{h}}{\partial\mathbf{y}}\) and \(\frac{\partial\hat{\mathbf{\theta}}_{n}^{h}}{\partial\mathbf{y}}\) at time \(t_{n}\) are obtained from (22), in terms of the rest of the (known) terms in the equation, including the state sensitivities at \(t_{n-1}\). The initial conditions for these sensitivities are set to zero, since the initial displacement and temperature fields are prescribed.
It is important to note here that use of the Newton-Raphson method for the solution to \(\mathbf{r}_{n}^{h}=\mathbf{0}\) requires the calculation of the "stiffness" terms \(\frac{\partial\mathbf{r}_{n}^{h}}{\partial\hat{\mathbf{u}}_{n}^{h}}\) and \(\frac{\partial\mathbf{r}_{n}^{h}}{\partial\hat{\mathbf{\theta}}_{n}^{h}}\). Therefore, these two terms (which are derived in the appendix) are already available for the sensitivity calculations. Furthermore, the terms \(\frac{\partial\mathbf{r}_{n}^{h}}{\partial\hat{\mathbf{u}}_{n-1}^{h}}\), \(\frac{\partial\mathbf{r}_{n}^{h}}{\partial\hat{\mathbf{v}}_{n-1}^{h}}\), and \(\frac{\partial\mathbf{r}_{n}^{h}}{\partial\hat{\mathbf{\theta}}_{n-1}^{h}}\), which appear in (22), can be readily obtained by substituting the time-stepping formulae (13) into (12) and taking the derivatives of the discrete governing equations comprising \(\mathbf{r}_{n}^{h}\) with respect to \(\mathbf{u}_{n-1}^{h},\mathbf{\theta}_{n-1}^{h},\mathbf{v}_{n-1}^{h}\). Again, explicit expressions for these terms may be found in the appendix.
Based on the implementation of the history variables \(\hat{\mathbf{u}}_{n}^{his}\) and \(\hat{\mathbf{\theta}}_{n}^{his}\), it is straightforward to compute the corresponding sensitivities \(\frac{\partial\hat{\mathbf{u}}_{n}^{his}}{\partial\mathbf{y}}\) and \(\frac{\partial\hat{\mathbf{\theta}}_{n}^{his}}{\partial\mathbf{y}}\) in Equation (22) at the element level and then assemble them into their global counterparts (for details see, again, [26]). Since the history variables are simply the displacement and temperature of a node and an element, respectively, at the time they are created, they remain constant throughout the simulation. Thus, the displacement history variable sensitivity \(\frac{\partial\hat{\mathbf{u}}_{n}^{his,e}}{\partial\mathbf{y}}\) is simply the displacement sensitivity at the time of the creation of a node, and the temperature history sensitivity \(\frac{\partial\hat{\mathbf{\theta}}_{n}^{his,e}}{\partial\mathbf{y}}\) is the average of the nodal temperature sensitivity of an element. These sensitivities are stored for subsequent use. The same applies to the derivative terms \(\frac{\partial\mathbf{r}_{n}^{h}}{\partial\hat{\mathbf{u}}_{n}^{his}}\) and \(\frac{\partial\mathbf{r}_{n}^{h}}{\partial\hat{\mathbf{\theta}}_{n}^{his}}\), which are derived in the appendix.
At each time \(t_{n}\), the state sensitivities at \(t_{n-1}\) are stored and used in the solution of (22). The last remaining term in (22), \(\frac{\partial\mathbf{r}_{n}^{h}}{\partial\mathbf{y}}\), can be explicitly computed once the optimization variable \(\mathbf{y}\) is defined. With all the necessary terms now available, Equation (22) is solved for \(\frac{\partial\hat{\mathbf{u}}_{n}^{h}}{\partial\mathbf{y}}\) and \(\frac{\partial\hat{\mathbf{\theta}}_{n}^{h}}{\partial\mathbf{y}}\), subject to the initial sensitivities (taken here to be zero).
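Schematically, the recurrence (22) amounts to one linear solve per time step with the converged Newton tangent. The following minimal Python sketch illustrates this, with the displacement and temperature sensitivities stacked into a single state block; all matrix names are placeholders for the assembled derivative blocks described above and in the appendix.

```
# Sketch of one step of the sensitivity recurrence (22); z_n stacks
# the displacement and temperature DOFs and S_n = dz_n/dy.
import numpy as np

def step_sensitivity(K_n, dR_dz_prev, dR_dv_prev, dR_dhist,
                     S_prev, Sv_prev, S_hist, dR_dy):
    """K_n = dr_n/dz_n is the converged Newton tangent at t_n."""
    rhs = -(dR_dz_prev @ S_prev + dR_dv_prev @ Sv_prev
            + dR_dhist @ S_hist + dR_dy)
    # One solve per time step; rhs has one column per design variable,
    # so a factorization of K_n can be reused across columns.
    return np.linalg.solve(K_n, rhs)
```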
All the sensitivities computed here analytically have been tested against a second-order accurate centered finite-difference approximation to verify the correctness of the derivations.
Given the initial displacement vector \(\mathbf{\hat{u}}_{0}^{h}\), temperature vector \(\mathbf{\hat{\theta}}_{0}^{h}\) and design variable vector \(\mathbf{y}_{0}\), it is now possible to compute the gradient (21). The optimization problem (20) can be recast in reduced-space as
\[\underset{\mathbf{y}\in\mathfrak{C}}{\text{minimize}}\ \ f_{h}\left(\mathbf{u}_{0},\mathbf{ \theta}_{0},\mathbf{y},t_{N}\right). \tag{23}\]
For a relatively small dimension of \(\mathbf{y}\), many well-established nonlinear optimization methods may be used to solve this problem. Here, a projected gradient-descent method with line search is employed, as described in Algorithm 1. The initial sensitivities \(\frac{\partial\mathbf{\hat{u}}_{0}}{\partial\mathbf{y}}\) and \(\frac{\partial\mathbf{\hat{\theta}}_{0}}{\partial\mathbf{y}}\) are set to \(0\) as they do not depend on \(\mathbf{y}\) at the starting time \(t_{0}\).
```
1 Let \(\alpha_{0}=1\), \(\rho\in(0,1)\), \(\eta\in(0,1)\)
2 for \(k=0,1,2,...\) do
3   Perform the finite element simulation. Compute sensitivities and the gradient in (21).
4   Let \(\mathbf{p}_{k}=-\frac{df_{h}(\mathbf{y}_{k})}{d\mathbf{y}_{k}}/\|\frac{df_{h}(\mathbf{y}_{k})}{d\mathbf{y}_{k}}\|\).
5   Set \(\alpha_{k}\leftarrow\alpha_{0}\).
6   while \(f_{h}\left(\mathbf{y}_{k}+\alpha_{k}\mathbf{p}_{k}\right)>f_{h}\left(\mathbf{y}_{k}\right)+\eta\alpha_{k}\left[\frac{df_{h}(\mathbf{y}_{k})}{d\mathbf{y}_{k}}\right]^{T}\mathbf{p}_{k}\) do
7     \(\alpha_{k}\leftarrow\rho\alpha_{k}\)
8   \(\mathbf{y}_{k+1}=\mathbf{y}_{k}+\alpha_{k}\mathbf{p}_{k}\)
9   Project \(\mathbf{y}_{k+1}\) onto \(\mathfrak{C}\).
```
**Algorithm 1** Projected gradient descent with line search
In this algorithm, \(\eta\) and \(\rho\) are user-specified constants, while the scalar \(\alpha_{k}\) is the length of the line-search step. It is worth pointing out that the line-search step could be computationally expensive, as \(f_{h}\) needs to be evaluated each time \(\alpha_{k}\) is updated. The projection step for a general convex and compact \(\mathfrak{C}\) typically involves solving a minimum-distance problem. However, in the case of bound constraints, one is only required to force \(\mathbf{y}\) to stay inside the bounds. For more complicated constraints which are nonlinear in \(\mathbf{y}\), this algorithm would not suffice. For the purposes of this work, Algorithm 1 is implemented in both C++ and Python.
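As a concrete illustration, the following is a minimal Python sketch of Algorithm 1 for simple bound constraints; the callable `simulate` is a placeholder for a full finite element run returning \(f_{h}\) and its gradient, and all names and default values are illustrative.

```
# Sketch of Algorithm 1 with box constraints. Each line-search trial
# requires another objective evaluation, i.e., another full simulation
# (an objective-only run would suffice there in practice).
import numpy as np

def projected_gradient_descent(simulate, y0, lb, ub,
                               alpha0=1.0, rho=0.5, eta=0.1,
                               max_iter=50):
    y = np.asarray(y0, dtype=float)
    for _ in range(max_iter):
        f, g = simulate(y)
        p = -g / np.linalg.norm(g)          # normalized descent direction
        alpha = alpha0
        # Backtracking line search (sufficient-decrease condition).
        while simulate(y + alpha * p)[0] > f + eta * alpha * (g @ p):
            alpha *= rho
        y = np.clip(y + alpha * p, lb, ub)  # projection onto the box
    return y
```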
## 5 Gradient-free Optimization Methods
Two optimization algorithms are presented in this section for the solution of (16) and (23) that do not require sensitivities and rely only on objective function evaluations. The first is a local search algorithm that advances by comparing objective values at neighboring points in the design space. Owing to parallel computing, this algorithm can quickly yield reasonable results and is easy to implement. The second is a Bayesian optimization algorithm, where a surrogate model for the objective is built using samples. The solution to the optimization problem is updated based on the prediction of the surrogate model. Algorithms of this type have received significant attention in the past decade and have been shown to efficiently generate globally optimal results [78].
### Method of local variations
This is a version of the method of local variations proposed by Polak [79] and is shown in Algorithm 2. It is used here to solve the reduced formulation (23) of the design optimization problem. The algorithm starts with a guess in the optimization variable space and takes a series of steps towards an optimal solution by constantly comparing neighborhood objective values and adjusting step sizes. Multiple starting points can be used to improve the results, as the algorithm can be shown to reach a local minimum if the objective is continuously differentiable [79]. Denote the initial step size as \(\mathbf{\tau}_{0}\) and start with \(\mathbf{y}_{0}\in\mathfrak{C}\subset\mathbb{R}^{d}\). The minimum step size is controlled by the vector \(\mathbf{\tau}_{min}\). At each iteration \(k\), the algorithm centers around the current point \(\mathbf{y}_{k}\) in the optimization space and runs \(2d\) simulations on neighboring points, with the neighboring points selected by the step size \(\mathbf{\tau}\) and directions \(\mathbf{d}_{j}\), \(j=1,\cdots,2d\). If neighboring points produce smaller values of the objective, the point that produces the smallest value is selected to be the next iteration point. If no such point can be found, the current step size \(\mathbf{\tau}_{k}\) is reduced and simulations at new neighboring points are run until the step size of each variable becomes smaller than each component of \(\mathbf{\tau}_{min}\). The quantity \(\mathbf{d}_{j}\) represents the increasing and decreasing search direction for each component of \(\mathbf{y}\). In Algorithm 2, the default optimization variable is continuous, and thus the step size can be repeatedly reduced by half. However, the algorithm can be readily adapted to discrete optimization variables. Specifically, for an integer parameter such as the number of deposition layers, one may reduce the step size \(\|\mathbf{\tau}_{k}\mathbf{d}_{j}\|\) by 1. The algorithm enforces the simple constraints at step 6 so that all points evaluated are feasible.
```
1 Select a \(\mathbf{y}_{0}\in\mathbb{R}^{d}\) such that \(\mathbf{y}_{0}\in\mathfrak{C}\) is feasible. Select a \(\mathbf{\tau}_{0}>0\).
2 Set \(k=0\), and compute \(f_{h}(\mathbf{y}_{0})\). Let \(f_{min}=f_{h}(\mathbf{y}_{0})\).
while \(\mathbf{\tau}_{k}>\mathbf{\tau}_{min}\) do
3   Set \(f_{min}=f_{h}(\mathbf{y}_{k})\) and \(\mathbf{d}_{min}=0\).
4   for \(j=1,\ldots,2n\) do
5     Check if \(\mathbf{z}_{k}^{j}=\mathbf{y}_{k}+(\mathbf{\tau}_{k})_{j/2}\mathbf{d}_{j}\in\mathfrak{C}\). Project it onto \(\mathfrak{C}\) if not, so that \(\mathbf{z}_{k}^{j}\in\mathfrak{C}\).
6     Compute \(f_{h}(\mathbf{z}_{k}^{j})\).
7     if \(f_{h}\left(\mathbf{z}_{k}^{j}\right)<f_{min}\) then
8       \(f_{min}=f_{h}(\mathbf{z}_{k}^{j})\).
9       \(\mathbf{d}_{min}=\mathbf{z}_{k}^{j}-\mathbf{y}_{k}\).
10  \(\mathbf{y}_{k+1}=\mathbf{y}_{k}+\mathbf{d}_{min}\).
11  if \(f_{min}<f_{h}(\mathbf{y}_{k})\) then
12    \(\mathbf{\tau}_{k+1}=\mathbf{\tau}_{k}\), \(k=k+1\).
13  else
14    Reduce the step size via \((\mathbf{\tau}_{k+1})_{j}=(\mathbf{\tau}_{k})_{j}/2\) or \((\mathbf{\tau}_{k+1})_{j}=(\mathbf{\tau}_{k})_{j}-1\) for \(j=1,\ldots,n\). Set \(k=k+1\).
```
**Algorithm 2** Method of local variations
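A minimal Python sketch of Algorithm 2 for a box-constrained search is given below; the callable `f` is a placeholder for one finite element simulation returning the objective, and the helper name is illustrative. The \(2d\) neighbor evaluations inside the loop are independent and could be run in parallel.

```
# Sketch of Algorithm 2 (method of local variations) with box bounds.
import numpy as np

def local_variations(f, y0, tau0, tau_min, lb, ub):
    y, tau = np.asarray(y0, float), np.asarray(tau0, float)
    f_y = f(y)
    while np.any(tau > tau_min):
        best_f, best_y = f_y, y
        for j in range(len(y)):             # 2d neighbors per iteration
            for sign in (+1.0, -1.0):
                z = y.copy()
                z[j] = np.clip(z[j] + sign * tau[j], lb[j], ub[j])
                f_z = f(z)
                if f_z < best_f:
                    best_f, best_y = f_z, z
        if best_f < f_y:
            y, f_y = best_y, best_f         # keep the current step size
        else:
            tau = tau / 2.0                 # shrink (or subtract 1 for
                                            # integer-valued parameters)
    return y, f_y
```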
### Bayesian optimization method
A Bayesian optimization algorithm [80] is alternatively employed to solve Problem (23). This is a "black-box" sequential surrogate-based optimization method, where the objective and the associated PDEs are represented by a surrogate model and, therefore, no sensitivity calculations are necessary. The surrogate model is typically a random function with a prior that is constructed and updated based on the samples (data points) collected. Then, a posterior distribution is formed using the sequentially updated prior so that a prediction of the objective at a given point can be made [78, 81]. In addition, an acquisition function is used to determine where to sample next. This function strikes a balance between exploitation and exploration, which correspond to an improved objective value and lower uncertainty, respectively.
A sample in the Bayesian optimization process is the individual finite element simulation and its objective value given the process parameters \(\mathbf{y}\). The well-tested Gaussian process (GP) is chosen for the surrogate model, while the expected improvement acquisition function is adopted. Given the sample points, GP assumes a multivariate jointly-Gaussian distribution between \(\mathbf{y}\) and the objective function \(f_{h}\):
\[\begin{bmatrix}f_{h}(\mathbf{y}_{1})\\ \vdots\\ f_{h}(\mathbf{y}_{N})\end{bmatrix}\sim\mathcal{N}\begin{pmatrix}\begin{bmatrix}m( \mathbf{y}_{1})\\ \vdots\\ m(\mathbf{y}_{N})\end{bmatrix},\begin{bmatrix}k(\mathbf{y}_{1},\mathbf{y}_{1})\ldots k(\bm {y}_{1},\mathbf{y}_{N})\\ \vdots\\ k(\mathbf{y}_{N},\mathbf{y}_{1})\ldots k(\mathbf{y}_{N},\mathbf{y}_{N})\end{bmatrix}\end{pmatrix}\;. \tag{24}\]
Here, \(\mathcal{N}\) is the normal distribution, \(\mathbf{y}_{1},\ldots,\mathbf{y}_{N}\) are the samples, \(m(\cdot):\mathbb{R}^{d}\rightarrow\mathbb{R}\) is the mean function, and \(k(\cdot,\cdot):\mathbb{R}^{d}\times\mathbb{R}^{d}\rightarrow\mathbb{R}\) is the covariance function. Then, the posterior probability distribution of a new \(\mathbf{y}\) can be inferred using Bayes' rule [80],
\[f_{h}(\mathbf{y})|\mathbf{y},\mathbf{y}_{1:N},f_{h}(\mathbf{y}_{1:N})\sim\mathcal{N}(\mu(\mathbf{y}),\sigma^{2}(\mathbf{y}))\;, \tag{25}\] \[\mu(\mathbf{y})\ =\ k(\mathbf{y},\mathbf{y}_{1:N})k(\mathbf{y}_{1:N},\mathbf{y}_{1:N})^{-1}\left(f_{h}(\mathbf{y}_{1:N})-m(\mathbf{y}_{1:N})\right)+m(\mathbf{y}_{1:N})\;,\] \[\sigma^{2}(\mathbf{y})\ =\ k(\mathbf{y},\mathbf{y})-k(\mathbf{y},\mathbf{y}_{1:N})k(\mathbf{y}_{1:N},\mathbf{y}_{1:N})^{-1}k(\mathbf{y}_{1:N},\mathbf{y})\;,\]
where the vector \(\mathbf{y}_{1:N}\) is the notation for \(\mathbf{y}_{1},\ldots,\mathbf{y}_{N}\) and \(k(\mathbf{y}_{1:N},\mathbf{y}_{1:N})=[k(\mathbf{y}_{1},\mathbf{y}_{1}),\ldots,k(\mathbf{y}_{1},\mathbf{y}_{N});\ldots;k(\mathbf{y}_{N},\mathbf{y}_{1}),\ldots,k(\mathbf{y}_{N},\mathbf{y}_{N})]\). The functions \(\mu(\cdot)\) and \(\sigma^{2}(\cdot)\) are referred to as the posterior mean and variance, respectively. The mean function \(m(\cdot)\) is often simply a constant function. The covariance function, or kernel, of the GP is critical to an accurate surrogate model. The squared exponential covariance function [82], or power exponential kernel, is adopted in this paper. The definition of the covariance function is
\[k(\mathbf{y},\mathbf{y}^{\prime};\theta)\ =\ \exp\left(-\frac{\left\|\mathbf{y}-\mathbf{y}^{ \prime}\right\|^{2}}{\theta^{2}}\right)\;, \tag{26}\]
where \(\theta\) denotes the hyper-parameters of the kernel. The GP model's hyper-parameters are optimized during the regression by maximizing the log-marginal-likelihood with the BFGS method [83]. The expected improvement acquisition function can be written as
\[EI(\mathbf{y})\ =\ \begin{cases}(\mu(\mathbf{y})-f_{h}(\mathbf{y}^{+})-\xi)\Phi(Z)+ \sigma(\mathbf{y})\phi(Z),&\text{if }\sigma(\mathbf{y})>0\\ 0,&\text{if }\sigma(\mathbf{y})=0\end{cases}\;, \tag{27}\]
where
\[Z\ =\ \begin{cases}\frac{\mu(\mathbf{y})-f_{h}(\mathbf{y}^{+})-\xi}{\sigma(\mathbf{y})},& \text{if }\sigma(\mathbf{y})>0\\ 0,&\text{if }\sigma(\mathbf{y})=0\end{cases}\;, \tag{28}\]
and \(\mathbf{y}^{+}=\text{argmax}_{\mathbf{y}_{i}\in\mathbf{y}_{1:N}}f_{h}(\mathbf{y}_{i})\) over all \(N\) current samples [82]. The trade-off parameter \(\xi>0\) controls the balance between exploration and exploitation. At an input \(\mathbf{y}\), the mean value predicted by the surrogate is \(\mu(\mathbf{y})\) and the standard deviation is \(\sigma(\mathbf{y})\). The functions \(\phi\) and \(\Phi\) are the probability density function (PDF) and cumulative distribution function (CDF) of the standard normal distribution, respectively. In order to find the maximal point of the acquisition function, which would be the next sample point, optimization algorithms including L-BFGS or random search can be used. The Bayesian optimization (BO) algorithm applied to (23) is given in Algorithm 3. An example convergence measure that allows the algorithm to terminate is when consecutive optimization iterations produce an optimal point and optimal value within a tolerance range, using the updated Gaussian process surrogate model that includes the latest samples.
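For concreteness, the following is a minimal Python sketch of the acquisition function (27)-(28) in the maximization convention used here; the function name and the vectorized handling of the \(\sigma=0\) case are illustrative choices.

```
# Sketch of the expected-improvement acquisition (27)-(28); mu and
# sigma come from the GP posterior (25), f_best is f_h(y+) of the
# current samples, and xi is the exploration trade-off parameter.
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, f_best, xi=0.01):
    mu, sigma = np.asarray(mu), np.asarray(sigma)
    imp = mu - f_best - xi
    with np.errstate(divide="ignore", invalid="ignore"):
        Z = np.where(sigma > 0, imp / sigma, 0.0)
    ei = imp * norm.cdf(Z) + sigma * norm.pdf(Z)
    return np.where(sigma > 0, ei, 0.0)     # EI = 0 where sigma = 0
```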
```
1 Choose initial sampling data \(\mathbf{y}_{0}\).
2 Train the Gaussian process model with \(\mathbf{y}_{0}\).
3 for \(k=1,2,\dots\) do
4 Evaluate the acquisition function using sample points \(\{\mathbf{y}_{0},\dots,\mathbf{y}_{k-1}\}\) and find \(\mathbf{y}_{k}=\text{argmax}_{\mathbf{y}}EI(\mathbf{y})\).
5 Run finite element simulation with design variables \(\mathbf{y}_{k}\).
6 Compute the objective \(f_{h}(\mathbf{y}_{k})\) based on the simulation result.
7 Retrain the GP surrogate model with the addition of the new sample \(\mathbf{y}_{k}\) and \(f_{h}(\mathbf{y}_{k})\).
8 Solve the optimization problem with the updated surrogate model.
9 Evaluate convergence measure. Exit if satisfied.
```
**Algorithm 3** Bayesian optimization with Gaussian process
Algorithm 3 is implemented in Python and C++ with the scikit-learn machine learning library [84, 85] and uses StandardScaler preprocessing, which shifts the objective data to zero mean and scales them to unit variance. We note that Bayesian optimization is typically implemented as a maximization algorithm. Therefore, we take the negative value of the objective as the actual input to the implementation.
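The following is a minimal sketch of one outer iteration of Algorithm 3 with scikit-learn, reusing the `expected_improvement` helper sketched above; the random candidate search, the function name `bo_step`, and the specific parameter values are illustrative assumptions rather than the exact implementation. The RBF kernel corresponds to (26), and its hyper-parameters are tuned by maximizing the log-marginal-likelihood during `fit()`.

```
# Sketch of one Bayesian-optimization iteration (Algorithm 3).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF
from sklearn.preprocessing import StandardScaler

def bo_step(Y, F, lb, ub, n_candidates=2000, xi=0.01, rng=None):
    """Y: (N, d) sampled designs; F: (N,) objective values (minimized)."""
    rng = rng or np.random.default_rng()
    F = np.asarray(F, dtype=float)
    # Negate and standardize the objective: BO is run as maximization.
    target = StandardScaler().fit_transform(-F.reshape(-1, 1)).ravel()
    gp = GaussianProcessRegressor(kernel=RBF(), n_restarts_optimizer=5)
    gp.fit(Y, target)                       # tunes kernel hyper-parameters
    cand = rng.uniform(lb, ub, size=(n_candidates, len(lb)))
    mu, sigma = gp.predict(cand, return_std=True)
    ei = expected_improvement(mu, sigma, target.max(), xi)
    return cand[np.argmax(ei)]              # next design to simulate
```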
## 6 Numerical experiments
In this section, Algorithms 1, 2 and 3 are applied to two-dimensional AM optimization problems in conjunction with the material model and the finite element method described in Section 2. All finite element simulations employ standard 4-node rectangular elements with 3\(\times\)3 Gaussian quadrature, see [26] for full details. In particular, two numerical examples are simulated and the process parameters \(\boldsymbol{y}\) being optimized comprise printing speed, layer thickness and convection coefficient.
### Two-dimensional wall after heat dissipation
The first example involves a two-dimensional rectangular wall with a width of \(20mm\) and a height of \(10mm\) in its reference configuration. It is discretized by \(n_{x}=40\) elements along the horizontal direction and \(n_{y}=30\) elements in the vertical direction. The bottom side is subjected to Dirichlet boundary conditions with zero displacement and constant temperature at the ambient value of \(315K\). The other three sides are traction-free and under convection heat transfer conditions with \(\theta_{\infty}=315K\). The initial temperature of the deposited material is set to \(500K\). To help enforce the boundary conditions, an initial material layer is placed at the base of the body with its bottom nodes subjected to the Dirichlet boundary conditions described above. Each time step corresponds to the time it takes to print one full element. The time interval between the completion of one layer and the initiation of the next one is set to one-half the time needed to print a full layer. The simulation terminates \(240s\) after printing is completed to allow the wall to cool down close to the ambient temperature, so that the shape error for different process parameters can be measured in a comparable manner.
For this problem, the shape error is measured only at the top edges of the wall, which defines \(\Gamma_{S}^{t}\) in (17). The local shape error \(d\left(\boldsymbol{X},\boldsymbol{u}\right)-\bar{d}(\boldsymbol{X})\) in (17) is defined as the difference between the \(y\)-coordinate of a point in the deformed configuration and its designed height at the same \(x\)-coordinate of the reference configuration, as in Figure 2. The approximation (18) of the surface integral, _i.e._, the objective, is effected using 3-point Gaussian quadrature per element edge. Also, the critical length \(L_{c}\) is chosen to be the height of the wall. The projected gradient-descent Algorithm 1 is applied to the optimization of the convection coefficient \(h\), which can be controlled through mechanisms such as the cooling fan speed in the printing chamber. The algorithmic parameters are set to \(\alpha_{0}=1\), \(\rho=0.5\), and \(\eta=0.1\). The time-step size for printing a full element is set to \(0.006s\), equivalent to a printing speed of approximately \(83mm/s\), within the range of FDM. The number of layers is fixed at 40. The bound constraints on \(h\) are set according to \(h\in[30,55]W/m^{2}K\). Clearly, the optimization space here is one-dimensional and the gradient \(\frac{df_{h}}{d\mathbf{y}}\) can be computed as described in Section 4 and the appendix. The initial value \(h_{0}\) is set to \(40W/m^{2}K\).
A plot depicting the results of the optimization is shown in Figure 3. Taking a closer look at the initial step where \(h=40W/m^{2}K\), the objective function is \(f_{h}(40)=0.0233137\), while the gradient value is \(\frac{df_{h}}{dh}(40)=-9.4\times 10^{-6}\). Upon conducting another full simulation for which \(f_{h}(41)=0.0233041\), the step-size \(\alpha_{k}=1\) satisfies the line search condition. Hence, the actual step length is 1 and the next convection coefficient in the iteration is \(h=41W/m^{2}K\). In a similar manner, the algorithm eventually reaches the optimal value \(h=55W/m^{2}K\) at the boundary of the feasible space. The dotted line in Figure 3 illustrates the monotonically decreasing dependence of \(f_{h}\) on \(h\), which is physically plausible.
Next, the gradient-free optimization Algorithms 2 and 3 are applied to two important process parameters. These are the printing speed, represented by the time-step size \(\delta t\) needed to print one element, and the layer thickness \(\Delta y\). Clearly, for a given specimen width and number of elements in the horizontal direction, the printing speed is inversely proportional to \(\delta t\). Both process parameters present challenges for gradient-based optimization algorithms. Layer thickness is a discrete parameter, as it is dictated by the object height and the number of layers; hence, the objective is not differentiable with respect to it. Likewise, printing speed is tied to the time-step size \(\Delta t_{n}\) of the finite element formulation in (13), and therefore requires additional effort for the sensitivity calculation. Consequently, gradient-free optimization methods present an attractive alternative for these two design parameters.
The bound constraints for the process parameters are set here to
\[\begin{split} 0.20mm\,\leqslant&\,\Delta y\,\leqslant \,0.33mm,\\ 0.005s\,\leqslant&\,\delta t\,\leqslant\,0.01s. \end{split} \tag{29}\]
Figure 2: Measurement of shape error for two-dimensional wall with deformation magnified five times for visual clarity (the reference edges in black have zero shape error, while the deformed ones in blue deviate from the designed height due to the thermomechanical response of the material)
These reflect a realistic range for each parameter. Also, the convection coefficient \(h\) is fixed at \(40W/m^{2}K\).
For the method of local variations, the optimization space is a well-defined two-dimensional rectangle, as shown in Figure 4. The initial optimization step size \(\boldsymbol{\tau}_{0}\) is set to 4 for the number of layers and \(0.001s\) for the printing time step. The starting point in the parameter space is set to \(\Delta y=0.25mm\), which corresponds to 40 layers, and \(\delta t=0.0075s\), which translates to a printing speed of \(v=76.2mm/s\). The final step size \(\boldsymbol{\tau}_{min}\) is set to 1 for the number of layers and \(2\times 10^{-4}s\) for the time step.
The optimization path generated by the algorithm moves toward slower printing speeds and a larger number of layers, eventually stopping at the boundary governed by the constraints, as shown in Figure 4. The optimization result translates to printing more slowly and slicing the wall more finely in order to reduce the shape error, both of which are intuitively plausible. Of course, on practical grounds, the trade-off is that the overall printing of the part takes longer.
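The method of local variations can be sketched as the following coordinate-wise search. The step-reduction factor is our assumption, since the text only states that \(\boldsymbol{\tau}_{k}\) decreases toward \(\boldsymbol{\tau}_{min}\), and each evaluation of `f` is one full finite element simulation.

```python
def local_variations(f, x0, tau0, tau_min, bounds, shrink=0.5):
    """Sketch of the method of local variations (Algorithm 2).

    x0, tau0, tau_min : starting point and initial/final step sizes per
        coordinate, e.g. x0 = (40, 0.0075), tau0 = (4, 0.001) and
        tau_min = (1, 2e-4) for (number of layers, printing time step).
    shrink : step-reduction factor (an assumption; the text only states
        that the step sizes decrease toward tau_min).
    """
    x = list(x0)
    tau = list(tau0)
    f_x = f(x)
    while True:
        improved = False
        for i in range(len(x)):
            for sign in (+1, -1):
                trial = list(x)
                trial[i] = x[i] + sign * tau[i]
                lo, hi = bounds[i]
                if lo <= trial[i] <= hi:       # respect the box constraints
                    f_trial = f(trial)         # one full FE simulation per probe
                    if f_trial < f_x:
                        x, f_x, improved = trial, f_trial, True
                        break                  # accept the first improving move
            if improved:
                break
        if improved:
            continue
        if all(t <= tm for t, tm in zip(tau, tau_min)):
            return x, f_x                      # minimal steps, no improvement
        tau = [max(t * shrink, tm) for t, tm in zip(tau, tau_min)]
```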
Note that in this example the purely geometric error, defined as the shape error under no deformation, is equal to zero due to the (nominally) rectangular shape of the printed part. Therefore, the thermomechanical response during the building process is responsible for the entirety of the shape error. Algorithm 2 successfully converges to the global minimum of the objective function within the bounds in (29). The left plot in Figure 7 is the filled contour plot of the objective function with respect to the two process parameters, constructed through exhaustive sampling. Evident in the figure is the smoothness of the objective function, which leads to the successful convergence of the optimization procedure. Further, the objective function is monotonic in both process parameters within the bounds, making it easy to find neighboring points with a smaller objective function value.

Figure 3: Two-dimensional wall: Optimization of convection coefficient
The initial sampling is important for the convergence of the Bayesian optimization. Here, four corner points are selected in the bounded optimization space. The mean value and expected improvement acquisition function contours for each iteration are shown in Figures 5 and 6. The ground truth objective and the prediction by the surrogate model at termination are displayed in Figure 7, where the black dots in the surrogate contour are the sampled points. Notice that the objective value plotted is the scaled value of \(-f_{h}\), since Algorithm 3 solves an equivalent maximization problem, as explained in Section 5.2. Thus, the maximum value shown indeed corresponds to the minimal shape error. In this case, the four initial points suffice to generate a decent surrogate model, and the algorithm terminates in 2 iterations, as the predicted optimal variables produced by two consecutive iterations are close enough, at the top-right corner in Figure 7. The optimal design variables are found to be 50 layers and \(0.01s\), which translates to a layer thickness of \(0.2mm\) and a printing speed of \(0.057m/s\).

Figure 4: Two-dimensional wall optimization path in the optimization space with method of local variations
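One iteration loop of the surrogate-based procedure, in the spirit of Algorithm 3, can be sketched as follows. The sketch maximizes the scaled \(-f_{h}\), uses scikit-learn's Gaussian process regressor with a Matern kernel (a common default, not necessarily the paper's choice), maximizes the expected improvement over a dense candidate grid for simplicity, and stops when two consecutive predicted optima are close. The kernel, grid search, and tolerance handling are our simplifications.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def expected_improvement(mu, sigma, f_best, xi=0.01):
    """Standard EI for maximization; the offset xi is an assumed default."""
    sigma = np.maximum(sigma, 1e-12)
    z = (mu - f_best - xi) / sigma
    return (mu - f_best - xi) * norm.cdf(z) + sigma * norm.pdf(z)

def bayesian_optimize(f_neg, X_init, grid, tol=1e-3, max_iter=20):
    """Maximize f_neg (the scaled -f_h) over a box sampled by `grid`.

    X_init : initial samples, e.g. the four corners of the rectangle (29);
    grid   : (N, 2) array of candidate points used to maximize EI.
    """
    X = [np.asarray(x, dtype=float) for x in X_init]
    y = [f_neg(x) for x in X]                   # one FE simulation per point
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    x_prev = None
    for _ in range(max_iter):
        gp.fit(np.vstack(X), np.asarray(y))
        mu, sigma = gp.predict(grid, return_std=True)
        x_next = grid[int(np.argmax(expected_improvement(mu, sigma, max(y))))]
        # terminate when two consecutive predicted optima are close enough
        if x_prev is not None and np.linalg.norm(x_next - x_prev) < tol:
            return x_next
        x_prev = x_next
        X.append(x_next)
        y.append(f_neg(x_next))
    return x_prev
```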
### Two-dimensional wall with hole after heat dissipation
This example concerns the simulation of printing a two-dimensional wall with a circular hole. The width of the wall is \(15mm\) and the height is \(10mm\). A quarter-circular hole of radius \(r=3mm\) is situated at the top-right corner of the wall, as in Figure 8. Due to the curved shape, hanging nodes are used in the mesh in the vicinity of the circular boundary.
The shape error is measured along the circular edge. Specifically, \(\Gamma_{S}^{t}\) in (17) is the _step-edge_ approximation of the quarter-circular hole in the reference configuration, which is defined as the edges formed by connecting the mid-points of the actual printed-edge approximation of the hole, as in Figure 9. This definition ensures that in the reference configuration the length of \(\Gamma_{S}^{t}\), under mesh refinement, becomes closer to the arc length of the circle \(\pi r/2\), in contrast to the length of the actual mesh boundary, which is always equal to \(2r\). The local shape error \(d\left(\mathbf{X},\mathbf{u}\right)-\bar{d}(\mathbf{X})\) in (18) is defined as the deviation in radius of the step-edge approximation from the circle of the designed shape. The critical length \(L_{c}\) here is chosen to be the radius \(r\) of the same circle.
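The step-edge error along the hole can be illustrated as follows: the midpoints of consecutive printed boundary edges are connected to form the step edges, and the local error is the radial deviation from the designed circle. For simplicity the sketch evaluates the deviation at those midpoints; the actual objective integrates it along the step edges with the quadrature described earlier. The interface is hypothetical.

```python
import numpy as np

def step_edge_radial_error(printed_corners, r, center=(0.0, 0.0)):
    """Illustrative step-edge error for the quarter-circular hole.

    printed_corners : ordered (x, y) corners of the staircase-like printed
        approximation of the hole boundary, taken from the mesh;
    r : designed hole radius; center : center of the designed circle.
    """
    pts = np.asarray(printed_corners, dtype=float) - np.asarray(center)
    # step edges connect midpoints of consecutive printed boundary edges
    mids = 0.5 * (pts[:-1] + pts[1:])
    radii = np.linalg.norm(mids, axis=1)
    # local shape error d - d_bar, evaluated at the step-edge endpoints;
    # the objective would integrate this along the step edges as in (18)
    return mids, radii - r
```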
Algorithm 1 is first applied to determine the optimal convection coefficient \(h\in[30,55]W/m^{2}K\), where \(\alpha_{0}=1\), \(\rho=0.5\) and \(\eta=0.1\). The time-step size for printing a full element is set to \(0.006s\), equivalent to a printing speed of \(62.5mm/s\), while the number of layers is fixed at 30. Again, the sensitivity \(\frac{df_{h}}{dh}\) is computed based on the method described in Section 4 and the appendix. However, additional care needs to be taken due to the presence of hanging nodes around the quarter-circle. As described towards the end of Section 2.2, a fixed ratio is maintained between the displacement and temperature of a hanging node and those of the two nodes at the ends of the edge on which it resides. Naturally, the sensitivities of the displacement and temperature of the hanging node are required to maintain the same ratio. The initial convection coefficient \(h_{0}\) is set to \(40W/m^{2}K\). The optimization path is shown in Figure 10.

Figure 5: Two-dimensional wall Bayesian optimization, expected improvement and mean value contour, iteration 1. The axes \(x_{1}\) and \(x_{2}\) are the number of layers (\(10/\Delta y\)) and temporal step size \(\delta t\), respectively, as in (29).

Figure 6: Two-dimensional wall Bayesian optimization, expected improvement and mean value contour, iteration 2. The black dots represent sampled points.

Figure 7: Two-dimensional wall Bayesian optimization, ground truth and surrogate objective.

Figure 8: Boundary and error edges for two-dimensional wall with hole

Figure 9: Error edges for two-dimensional wall with hole, zoomed in
The two gradient-free optimization algorithms are applied to the optimization of the printing speed and layer thickness for \(h=40W/m^{2}K\). The bounds of the process parameters are set to
\[\begin{split} 0.20mm\,\leqslant&\,\Delta y\,\leqslant\,0.33mm,\\ 0.004s\,\leqslant&\,\delta t\,\leqslant\,0.009s.\end{split} \tag{30}\]
The remaining algorithmic parameter setup is as in the previous example. However, the presence of the circular hole causes the shape error to have a more complex relation to the number of layers. Indeed, it is evident that the shape error here is nonzero in the reference configuration with no deformation. This is because the shape error is measured against the ideal quarter-circle on rectangular boundary elements. The referential shape error is
intrinsic to the layered approximation of a circular curve. The shape error is also affected by the slicing algorithm that determines where each layer should stop at the boundary of the circle. The discrete nature of the number of layers, the slicing algorithm, and the rectangular approximation of a circle lead to a non-monotonic relationship between the shape error and the number of layers. We refer to the shape error in the reference configuration with no deformation as the geometric error. The geometric error does not change the general trend of how the process parameters affect the shape error, as we show later in the section, because the thermomechanical shape error caused by the AM building process itself is significantly larger. More discussion on this topic can be found in [26, 86, 87].

Figure 10: Two-dimensional wall with hole: Optimization of convection coefficient
The optimization path generated by the method of local variations is shown in Figure 11, where the initial step size is \(0.001s\) for the printing time step and \(4\) for the number of layers. Starting from the point \((0.0065s,40)\) in printing time step/number-of-layers space, the optimization method traces a path to \((0.0065s,50)\), followed by successively larger time steps (i.e., slower printing speeds) until the boundary is hit at \((0.009s,50)\). The step size \(\boldsymbol{\tau}_{k}\) keeps decreasing to \(\boldsymbol{\tau}_{min}\) (\(1\) for the number of layers) as no neighboring point produces a smaller objective function value, and eventually the iteration stops. Therefore, the optimal variables given by the method of local variations are \((0.009s,50)\), which indeed minimize the objective within the bounds, as shown in the filled contour plot on the left in Figure 14.
Figure 11: Two-dimensional wall with hole: Optimization path with method of local variations
Next, the Bayesian optimization algorithm is applied to the same problem, starting from four initial sample points. The mean value and expected improvement contour for selected iterations are shown in Figures 12 and 13, respectively.
The ground truth objective and the prediction by the surrogate model at the time of termination are displayed in Figure 14, where the black dots in the surrogate contour are the sampled points. It is emphasized again that the objective values shown in the plots are the scaled values of the negative shape error.
While Algorithm 3 still converges in two iterations, it is clear that the predicted mean value contour is not an accurate depiction of the ground truth, as shown in Figure 14. However, since the algorithm only strives to find the optimal design variables, it terminates once the point \((0.009s,50)\) at the top-right corner is found repeatedly. With nine initial sample points, as shown in Figure 15, an overall more accurate surrogate model for \(f_{h}\) as a function of \(\Delta y\) and \(\delta t\) can be obtained.
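The two initial designs discussed here are easy to construct, as sketched below; whether the nine points of Figure 15 form an exact 3x3 tensor grid is our assumption.

```python
import numpy as np

def corner_design(bounds):
    """Four corners of a 2-D box: the initial design used above."""
    (lo1, hi1), (lo2, hi2) = bounds
    return np.array([[a, b] for a in (lo1, hi1) for b in (lo2, hi2)])

def grid_design(bounds, n=3):
    """n x n tensor grid; n=3 gives nine points as in Figure 15."""
    axes = [np.linspace(lo, hi, n) for lo, hi in bounds]
    g1, g2 = np.meshgrid(*axes, indexing="ij")
    return np.column_stack([g1.ravel(), g2.ravel()])
```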
The results from both examples predict a better finish quality with a higher cooling rate (convection coefficient) within a realistic range. They both point to reduced shape error with slower printing and more layers. These seemingly obvious results would become considerably more complicated under real application constraints, such as production time and the cost associated with an increased number of layers. In addition, the wall with a circular hole example illustrates that both the process parameters studied in this work and the purely geometric design of the object contribute to the shape error. The gradient-based optimization, while costly, is more accurate and can scale well.
Figure 12: Two-dimensional wall with hole: Bayesian optimization, expected improvement and mean value contour, iteration 1. The black dots are the sampled points.
Figure 13: Two-dimensional wall with hole Bayesian optimization, expected improvement and mean value contour, iteration 2. The intersections of the horizontal and vertical lines are the new sampled points to be included in the surrogate model.

Figure 14: Two-dimensional wall with hole: Bayesian optimization: Ground truth and surrogate objective
Gradient-based optimization also draws on a wealth of optimization theory to guarantee local minimum convergence [83]. However, computing the sensitivities takes effort, and the approach is in general less flexible with respect to new variables and problems.
Gradient-free optimization is a promising alternative, particularly Bayesian optimization. This approach eliminates the need to compute complicated sensitivities, and thus can be easily adapted to different simulation models, including ones different from ours. Accompanied by physical experiments for validation, it can be useful in practice. In this work, we rely on a fully coupled thermomechanical finite element model to generate sample points. In practice, lower-fidelity, weakly coupled models can work well and further reduce the cost of each simulation. Bayesian optimization can be applied directly to those more commonly used production models. We point out, however, that if the optimization space is high-dimensional, the GP-based method might not scale well.
## 7 Conclusions
In this paper, design optimization methods for AM process parameters have been proposed based on finite element simulation. To address the diverse set of variables, both gradient-based and gradient-free optimization methods have been developed and implemented. While the objective is chosen to be the shape error, the framework can be adapted to other design goals. The gradient-based approach relies on a fully discretized, reduced-space formulation of the balance equations to obtain sensitivities. For gradient-free optimization, a local search algorithm and a Bayesian optimization algorithm using Gaussian process surrogate
Figure 15: Two-dimensional wall with hole: Bayesian optimization, 9 initial samples, ground truth and surrogate objective
models and expected improvement acquisition functions are presented. Numerical experiments demonstrate that these optimization methods can yield physically plausible results efficiently. The line-search gradient descent algorithm is applied to the convection coefficient, and the gradient-free algorithms are deployed for layer thickness and printing speed. These flexible methods can be easily extended to other design variables. In the case of gradient-free optimization, other finite element models, particularly mature uncoupled three-dimensional ones, can also be easily adopted. Surrogate models based on neural networks could be developed as well.
|
2302.02248 | Determinacy and Large Cardinals | The study of inner models was initiated by G\"odel's analysis of the
constructible universe. Later, the study of canonical inner models with large
cardinals, e.g., measurable cardinals, strong cardinals or Woodin cardinals,
was pioneered by Jensen, Mitchell, Steel, and others. Around the same time, the
study of infinite two-player games was driven forward by Martin's proof of
analytic determinacy from a measurable cardinal, Borel determinacy from ZFC,
and Martin and Steel's proof of levels of projective determinacy from Woodin
cardinals with a measurable cardinal on top. First Woodin and later Neeman
improved the result in the projective hierarchy by showing that in fact the
existence of a countable iterable model, a mouse, with Woodin cardinals and a
top measure suffices to prove determinacy in the projective hierarchy. This
opened up the possibility for an optimal result stating the equivalence between
local determinacy hypotheses and the existence of mice in the projective
hierarchy. This article outlines the main concepts and results connecting
determinacy hypotheses with the existence of mice with large cardinals as well
as recent progress in the area. | Sandra Müller | 2023-02-04T22:04:16Z | http://arxiv.org/abs/2302.02248v1 | # Determinacy Axioms and Large Cardinals+
###### Abstract
The study of inner models was initiated by Godel's analysis of the constructible universe. Later, the study of canonical inner models with large cardinals, e.g., measurable cardinals, strong cardinals or Woodin cardinals, was pioneered by Jensen, Mitchell, Steel, and others. Around the same time, the study of infinite two-player games was driven forward by Martin's proof of analytic determinacy from a measurable cardinal, Borel determinacy from ZFC, and Martin and Steel's proof of levels of projective determinacy from Woodin cardinals with a measurable cardinal on top. First Woodin and later Neeman improved the result in the projective hierarchy by showing that in fact the existence of a countable iterable model, a mouse, with Woodin cardinals and a top measure suffices to prove determinacy in the projective hierarchy. This opened up the possibility for an optimal result stating the equivalence between local determinacy hypotheses and the existence of mice in the projective hierarchy. This article outlines the main concepts and results connecting determinacy hypotheses with the existence of mice with large cardinals as well as recent progress in the area.
Keywords: Determinacy, Infinite Games, Large Cardinals.
## 1 Introduction
The standard axioms of set theory, Zermelo-Fraenkel set theory with Choice (\(\mathsf{ZFC}\)), do not suffice to answer all questions in mathematics. While this follows abstractly from Kurt Gödel's famous incompleteness theorems, we nowadays know numerous concrete examples of such questions. A large number of problems in set theory, for example, regularity properties such as Lebesgue measurability and the Baire property, are not decided by \(\mathsf{ZFC}\), even for rather simple (for example, projective) sets of reals. Even many problems outside of set theory have been shown to be unsolvable, meaning neither their truth nor
their failure can be proven from ZFC. This includes the Whitehead Problem (group theory, [49]), the Borel Conjecture (measure theory, [22]), Kaplansky's Conjecture on Banach algebras (analysis, [8]), and the Brown-Douglas-Fillmore Problem (operator algebras, [11]). A major part of set theory is devoted to attacking this problem by studying various extensions of ZFC and their properties. One of the main goals of current research in set theory is to identify _the "right" axioms for mathematics_ that settle these problems. This, in part philosophical, problem is attacked with technical mathematical methods. **Determinacy assumptions** are canonical extensions of ZFC that postulate the existence of winning strategies in natural two-player games. Such assumptions are known to imply regularity properties, and enhance sets of real numbers with a great deal of canonical structure. Other natural and well-studied extensions of ZFC are given by the hierarchy of **large cardinal axioms**. Determinacy assumptions, large cardinal axioms, and their consequences are widely used and have many fruitful implications in set theory and even in other areas of mathematics such as algebraic topology [7], topology [38, 13, 6], algebra [10], and operator algebras [11]. Many applications, in particular, proofs of consistency strength lower bounds, exploit the interplay of large cardinals and determinacy axioms. Thus, understanding the connections between determinacy assumptions and the hierarchy of large cardinals is vital to **answer questions left open by ZFC itself**. The results outlined in this article are closely related to this overall goal.
To explore the connections between large cardinals and determinacy at higher levels, the study of other hierarchies, for example, with more complex inner models called **hybrid mice**, has been very fruitful. **Translation procedures** are needed to translate these hybrid models, whose strength comes from descriptive set theoretic features, back to standard inner models while making use of their hybrid nature to obtain stronger large cardinals in the translated model. They are therefore a key method **connecting descriptive set theory with inner model theory**. One of the results surveyed in this article is a new translation procedure extending work of Sargsyan [41], Steel [53], and Zhu [61]. This new translation procedure yields a countably iterable inner model with a cardinal \(\lambda\) that is both a limit of Woodin cardinals and a limit of strong cardinals [30]. So it improves Sargsyan's construction in [41] in two ways: it can be used to obtain infinitely many instead of finitely many strong cardinals, and the models it yields are countably iterable, a crucial property of mice. This translation procedure can be applied to prove a **conjecture of Sargsyan** on the consistency strength of the Axiom of Determinacy when all sets are universally Baire [30], a central and widely used property of sets of reals introduced implicitly in [47] and explicitly in [12]. In fact, the new translation procedure can be applied in a much broader context. Moreover, it provides the basis for translation procedures resulting in more complex patterns of strong cardinals, for example, a strong cardinal that is a limit of strong cardinals.
Recent seminal results of Sargsyan and Trang [46, 45, 44], see also the review [29], as well as Larson and Sargsyan [20, 42] suggest that we are at a _turning point_
in the search for natural constructions of canonical models with a Woodin limit of Woodin cardinals and thereby for proving better lower bounds for natural set theoretic hypotheses.
## 2 Determinacy for games of length \(\omega\) and large cardinals
In 1953, Gale and Stewart [14] developed a basic theory of infinite games. For notational simplicity, we identify reals in \(\mathbb{R}\) with \(\omega\)-sequences of natural numbers in \({}^{\omega}\omega\). Gale and Stewart considered, for every set of reals \(A\), a two-player game \(G(A)\) of length \(\omega\), where player I and player II alternate playing natural numbers \(n_{0},n_{1},\dots\), as follows:
\begin{tabular}{c|c c c} I & \(n_{0}\) & \(n_{2}\) & \(\dots\) \\ \hline II & \(n_{1}\) & \(n_{3}\) & \(\dots\) \\ \end{tabular}
They defined that player I wins the game \(G(A)\) if and only if the sequence \(x=(n_{0},n_{1},\dots)\) of natural numbers produced during a run of the game \(G(A)\) is an element of \(A\); otherwise, player II wins. We call \(A\) the payoff set of \(G(A)\). The game \(G(A)\) (or the set \(A\) itself) is called determined if and only if one of the two players has a winning strategy, meaning that there is a method by which they can win in the game described above, no matter what their opponent does. The **Axiom of Determinacy** (AD) is the statement that all sets of reals are determined.
Already in [14], the authors were able to prove that every open and every closed set of reals is determined under ZFC. But they also proved that determinacy for all sets of reals contradicts the Axiom of Choice. This leads to the natural question as to how the picture looks for definable sets of reals which are more complicated than open and closed sets. After some partial results by Wolfe [59] and Davis [9], Martin was able to prove in 1975 [24] that every Borel set of reals is determined (again using ZFC).
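To make the notion of a winning strategy concrete, the following sketch (not from the paper, and only a finite analogue, since determinacy of infinite games is not a computational statement) decides by backward induction which player can force a win in a game of fixed finite length. This is essentially the Gale-Stewart argument restricted to finite game trees; the function name and interface are ours.

```python
def player_I_wins(payoff, depth, moves=(0, 1), play=()):
    """Finite analogue of Gale-Stewart determinacy via backward induction.

    payoff : function mapping a completed play (tuple of length `depth`)
             to True iff that play is won by player I.
    Returns True iff player I has a winning strategy; since finite games
    are determined, otherwise player II has one.
    """
    if len(play) == depth:
        return payoff(play)
    if len(play) % 2 == 0:   # player I to move: some move must work
        return any(player_I_wins(payoff, depth, moves, play + (m,))
                   for m in moves)
    else:                    # player II to move: every move must fail for II
        return all(player_I_wins(payoff, depth, moves, play + (m,))
                   for m in moves)

# Example: in the length-4 game where player I wins iff the sum of moves
# is odd, backward induction shows player II has a winning strategy.
assert not player_I_wins(lambda p: sum(p) % 2 == 1, depth=4)
```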
In the meantime, the development of so-called **large cardinal axioms** was proceeding in set theory, and Solovay was able to prove regularity properties, a known consequence of determinacy, for a specific pointclass, assuming the existence of a measurable cardinal, instead of a determinacy axiom. Finally, Martin was able to prove a direct connection between large cardinals and determinacy axioms: He showed, in 1970, that the existence of a measurable cardinal implies determinacy for every analytic set of reals [23]. Eight years later, Harrington established that this result is, in some sense, optimal, by proving that determinacy for all analytic sets of reals implies that \(0^{\#}\), a countable active iterable canonical inner model which can be obtained from a measurable cardinal, exists [15]. Here, an **iterable canonical inner model**, or **mouse**, is a fine structural model that is, in some sense, iterable. This notion goes back to Jensen [16]. Together with Martin's argument mentioned above, this yields an equivalence between the two statements. The construction of such canonical inner models and their connection with determinacy was later extended in work of Dodd, Jensen, Martin, Mitchell, Neeman, Schimmerling, Schindler, Solovay, Steel, Woodin, Zeman, and others (see, e.g., [25, 26, 48, 55, 60]; see the preface of [32] or Larson's history of
determinacy [19] for a more detailed overview). In the projective hierarchy, this led to the following fundamental theorem. Here, \(M_{0}^{\#}(x)\) denotes \(x^{\#}\), a version of \(0^{\#}\) relativized to a real \(x\), and \(M_{n}^{\#}(x)\) denotes a minimal countable active mouse with \(n\) Woodin cardinals constructed above \(x\).
Theorem 2.1 (Harrington, Martin, Neeman, Woodin [15, 23, 33, 36, 32]): _Let \(n\) be a natural number. Then the following are equivalent:_
1. _All_ \(\boldsymbol{\Pi}^{1}_{n+1}\) _sets are determined, and_
2. _for all_ \(x\in{}^{\omega}\omega\)_,_ \(M_{n}^{\#}(x)\) _exists and is_ \(\omega_{1}\)_-iterable._
The proof that the determinacy of sets in the projective hierarchy implies the existence of mice with finitely many Woodin cardinals in this exact level-by-level correspondence first appeared in [27, 32] and is originally due to Woodin. As shown in [27], the underlying methods can be used to obtain similar results for certain hybrid mice in the \(L(\mathbb{R})\)-hierarchy. These tight connections are, at first, very surprising, as they show that two ostensibly completely different notions, from distinct areas of set theory - determinacy from descriptive set theory, and inner models with large cardinals from inner model theory - are, in fact, the same.
## 3 Determinacy for games longer than \(\boldsymbol{\omega}\)
It turns out that the correspondence between determinacy and inner models with large cardinals does not stop at games of length \(\omega\). For every ordinal \(\alpha\) and set \(A\subseteq{}^{\alpha}\omega\), we can define a game \(G(A)\) of length \(\alpha\) with payoff set \(A\), as follows:
\[\begin{array}{c|cccc}\mbox{I}&n_{0}&n_{2}&\ldots&n_{\omega}&\ldots\\ \hline\mbox{II}&n_{1}&n_{3}&\ldots&n_{\omega+1}&\ldots\end{array}\]
The players alternate playing natural numbers \(n_{i}\) for \(i<\alpha\), and we again say that player I wins the game if and only if the sequence \(x=(n_{0},n_{1},\ldots)\) of length \(\alpha\) they produce is an element of \(A\); otherwise, player II wins. In landmark results, Neeman [37] developed powerful techniques to prove the determinacy of projective games longer than \(\omega\) from large cardinals. A first step in this direction is, for example, the following result:
Theorem 3.1 (Neeman, [37]): _Let \(n\in\omega\) and suppose that \(M_{\omega+n}^{\#}(x)\) exists for all reals \(x\in{}^{\omega}\omega\). Then all games of length \(\omega^{2}\) with \(\boldsymbol{\Pi}^{1}_{n+1}\) payoff are determined._
This result in fact holds for games of fixed length \(\alpha\), for all countable ordinals \(\alpha\), instead of games of length \(\omega^{2}\). The following theorem complements Neeman's results for projective games of length \(\omega^{2}\):
Theorem 3.2 (Aguilera, Muller, [2, 28]): _Let \(n\) be a natural number and suppose that all games of length \(\omega^{2}\) with \(\boldsymbol{\Pi}^{1}_{n+1}\) payoff are determined. Then, for every \(x\in{}^{\omega}\omega\), there is a model \(\mathcal{M}\) of \(\mathsf{ZFC}\), with \(\omega+n\) Woodin cardinals, such that \(x\in\mathcal{M}\)._
At this level, the interplay of determinacy and large cardinals is already understood quite well (see also [1, 3]). For games of length \(\omega^{\alpha}\) with analytic payoff, for countable ordinals \(\alpha\), similar results have previously been established by Trang [57], building on unpublished results of Woodin, using canonical models of determinacy with a generalized Solovay measure. The Solovay measure is also called a **supercompact measure for \(\omega_{1}\)** as it witnesses a degree of supercompactness for \(\omega_{1}\).
When considering much stronger notions of determinacy, the picture is less clear. For example, it was already shown by Mycielski in 1964 that determinacy for all games of length \(\omega_{1}\) is inconsistent with Zermelo-Fraenkel set theory (ZF). Nevertheless, there are subclasses of games of length \(\omega_{1}\) that are still known to be determined under large cardinal assumptions.
An intermediate step are games that do not have a fixed countable length but still end after countably many rounds. In 2004, Neeman showed in groundbreaking work, from large cardinals, that certain games that are not of fixed countable length are still determined. These so-called **games of continuously coded length**, which go back to Steel [50], are defined as follows: For any set \(A\subset(^{\omega}\omega)^{<\omega_{1}}\) and partial function \(\nu\colon^{\omega}\omega\rightharpoonup\omega\), the game \(G_{\mathrm{cont}}(\nu,A)\) is given by the following rules:
\[\begin{array}{c|ccccc}\mbox{I}&n_{0}&n_{2}&\ldots&n_{\omega}&\ldots\\ \hline\mbox{II}&n_{1}&n_{3}&\ldots&n_{\omega+1}&\ldots\end{array}\]

The players alternate playing natural numbers, producing reals \(y_{0},y_{1},\ldots,y_{\alpha},\ldots\) as before. The run ends at the least \(\alpha\) such that \(\nu(y_{\alpha})\) is undefined or equal to \(\nu(y_{\beta})\) for some \(\beta<\alpha\); as \(\nu\) takes values in \(\omega\), every run ends at some countable stage, and player I wins if and only if the sequence of reals produced is an element of \(A\). Neeman proved the determinacy of these games from large cardinals. The next goal in this direction is to obtain
an inner model with a Woodin cardinal that is a limit of Woodin cardinals from the determinacy of certain long games. The natural games to consider at this level have length \(\omega_{1}\) and their payoff set is ordinal definable using reals as parameters. The converse was shown by Woodin, using results of Neeman [37] and ideas going back to Kechris and Solovay [18].
Theorem 4.1 (Neeman, Woodin, [37]): _Suppose there is an iterable proper class model with a Woodin cardinal that is a limit of Woodin cardinals and countable in \(V\). Then there is a model of \(\mathsf{ZFC}\) in which all ordinal definable games of length \(\omega_{1}\) on natural numbers with real parameters are determined._
In fact, Woodin showed that determinacy of these games of length \(\omega_{1}\) is equiconsistent with a seemingly weaker statement: determinacy of certain games that are **constructibly uncountable in the play**. These games are defined as follows: For a payoff set \(A\subset(^{\omega}\omega)^{<\omega_{1}}\), players I and II alternate playing natural numbers to produce reals \(y_{\alpha}\).
\[\begin{array}{c|ccccc}\mbox{I}&n_{0}&n_{2}&\ldots&n_{\omega}&\ldots\\ \hline\mbox{II}&n_{1}&n_{3}&\ldots&n_{\omega+1}&\ldots\end{array}\]

The run ends at the least ordinal \(\alpha\) that is uncountable in \(L[(y_{\beta})_{\beta<\alpha}]\), the constructible model built over the play produced so far, and player I wins if and only if the sequence \((y_{\beta})_{\beta<\alpha}\) is an element of \(A\).
Conjecture 2: Suppose all ordinal definable games of length \(\omega_{1}\) on natural numbers with real parameters are determined. Then there is a model of \(\mathsf{ZFC}\) with a Woodin cardinal that is a limit of Woodin cardinals.
This would be the first correspondence between a natural determinacy notion and large cardinals at the level of a Woodin cardinal that is a limit of Woodin cardinals. It cannot be achieved using current methods such as the core model induction technique due to Woodin (see, for example, the review [29]), which Sargsyan and Trang [46, 44, 45] have recently shown runs into serious issues before reaching this level. In addition, by recent results of Larson and Sargsyan [20, 42], also the well-known liberal \(K^{c}\) construction in [4, 17] can fail if there is a Woodin cardinal that is a limit of Woodin cardinals.
Therefore, understanding the large cardinal strength of the determinacy of such uncountable games might shed light on how to canonically obtain inner models with a Woodin cardinal that is a limit of Woodin cardinals.
## 4 Strong models of determinacy for games of length \(\omega\)
Another approach to strengthen determinacy is to keep playing games of length \(\omega\) and impose additional structural properties on the model. Examples of such structural properties are "\(\theta_{0}<\Theta\)," "\(\Theta\) is regular," or the Largest Suslin Axiom, see, for example, [53, 40, 44]. Here \(\Theta\) is given by
\[\Theta=\sup\{\alpha\mid\mbox{there is a surjection $f\colon\mathbb{R}\to\alpha$}\}\]
and we write \(\theta_{0}\) for the least ordinal \(\alpha\) such that there is no surjection of \(\mathbb{R}\) onto \(\alpha\) which is ordinal definable from a real. While in models of the Axiom of Choice \(\Theta\) is simply equal to \((2^{\aleph_{0}})^{+}\), it has very interesting behaviour in models of the Axiom of Determinacy.
Other examples of properties that can be used to obtain strong models of determinacy are "all sets of reals are Suslin" or "all sets of reals are universally Baire." Being Suslin is a generalization of being analytic. More precisely, a set of reals is _Suslin_ if it is the projection of a tree on \(\omega\times\kappa\) for some ordinal \(\kappa\). Woodin and Steel determined the exact large cardinal strength of the theory "AD + all sets of reals are Suslin" [54, 52]:
Theorem 4.1 (Steel, Woodin, [54, 52]): _The following theories are equiconsistent (over \(\mathsf{ZF}\)):_
1. \(\mathsf{AD}+\mbox{ all sets of reals are Suslin,}\)__
2. \(\mathsf{ZFC}+\mbox{ there is a cardinal $\lambda$ that is a limit of Woodin cardinals and a limit of $<\!\!\lambda$-strong cardinals.}\)__
By results of Martin and Woodin, see [54, Theorems 9.1 and 9.2], assuming \(\mathsf{AD}\), the statement "all sets of reals are Suslin" is equivalent to the Axiom of Determinacy for games on reals (\(\mathsf{AD}_{\mathbb{R}}\)). Being universally Baire is a strengthening of being Suslin that was introduced implicitly in [47] and explicitly by Feng, Magidor and Woodin [12].
Definition 1 (Feng, Magidor, Woodin [12]): A subset \(A\) of a topological space \(Y\) is _universally Baire_ if \(f^{-1}(A)\) has the property of Baire in any topological space \(X\), where \(f\colon X\to Y\) is continuous.
The exact consistency strength of the statement that all sets of reals are universally Baire under determinacy was conjectured by Sargsyan, in 2014, after he was able to obtain an upper bound with Larson and Wilson [21] via an extension of Woodin's famous derived model theorem. One fact that makes their argument particularly interesting is that no model of the form \(L(\mathcal{P}(\mathbb{R}))\) is a model of "\(\mathsf{AD}+\) all sets of reals are universally Baire." Universal Baireness is not only widely used across set theory but is also a crucial property in inner model theory: Universally Baire iteration strategies (canonically coded as a set of reals) can be extended from countable to uncountable iterations (see, for example, [39]). The following theorem proves **Sargsyan's conjecture** by showing that the upper bound Larson, Sargsyan, and Wilson obtained is optimal:
Theorem 4.1 (Larson, Sargsyan, Wilson, [21], Muller, [30]): _The following theories are equiconsistent (over \(\mathsf{ZF}\)):_
1. \(\mathsf{AD}\,+\,\) _all sets of reals are universally Baire,_
2. \(\mathsf{ZFC}\,+\,\) _there is a cardinal that is a limit of Woodin cardinals and a limit of strong cardinals._
To construct and analyze the relevant models to prove the direction \(Con(1.)\Rightarrow Con(2.)\) in this theorem, instead of just considering two hierarchies - determinacy axioms and inner models with large cardinals - a third hierarchy is used to reach higher levels in the other two. These three hierarchies together form what Steel calls the **triple helix** of inner model theory. The new hierarchy goes back to Woodin and Sargsyan and consists of canonical models called **hybrid mice**, or **hod mice**. These models are not only enhanced by large cardinals witnessed by extenders on their sequence, but also equipped with partial iteration strategies for themselves, see [40]. The strength of these models intuitively comes from the descriptive set theoretic complexity of these partial iteration strategies.
The name **hod mouse** comes from the fact that these mice naturally occur as the result of analyses of \(\mathsf{HOD}\), the hereditarily ordinal definable sets, in various models of determinacy. This analysis was pioneered by Steel and Woodin [51, 56] in the model \(L(\mathbb{R})\), as well as in \(L[x][g]\) for a cone of reals \(x\), where \(g\) is generic for Levy collapsing the least inaccessible cardinal in \(L[x]\) (both under a determinacy hypothesis). It was extended to larger models of determinacy by Sargsyan, Trang, and others [40, 58, 5, 43]. In [31] we showed how to analyze \(\mathsf{HOD}\) in \(M_{n}(x)[g]\), for a cone of reals \(x\), where \(g\) is generic for Levy collapsing the least inaccessible cardinal in \(M_{n}(x)\) (under a determinacy hypothesis).
The technical innovation behind the direction \(Con(1.)\Rightarrow Con(2.)\) in Theorem 4.1 is a new translation procedure to translate hybrid mice into mice with a limit of Woodin and strong cardinals [30]. This required an iterability proof for models obtained via a novel backgrounded construction. In [30] it is shown that
the resulting models are countably iterable, meaning that countable substructures are iterable, and, in fact, a bit more. But the following natural question is left open:
Question 1: Is there a translation procedure that yields fully iterable mice with a limit of Woodin and strong cardinals (when applied to suitable hybrid mice)?
|
2310.02059 | Security Weaknesses of Copilot Generated Code in GitHub | Modern code generation tools, utilizing AI models like Large Language Models
(LLMs), have gained popularity for producing functional code. However, their
usage presents security challenges, often resulting in insecure code merging
into the code base. Evaluating the quality of generated code, especially its
security, is crucial. While prior research explored various aspects of code
generation, the focus on security has been limited, mostly examining code
produced in controlled environments rather than real-world scenarios. To
address this gap, we conducted an empirical study, analyzing code snippets
generated by GitHub Copilot from GitHub projects. Our analysis identified 452
snippets generated by Copilot, revealing a high likelihood of security issues,
with 32.8% of Python and 24.5% of JavaScript snippets affected. These issues
span 38 different Common Weakness Enumeration (CWE) categories, including
significant ones like CWE-330: Use of Insufficiently Random Values, CWE-78: OS
Command Injection, and CWE-94: Improper Control of Generation of Code. Notably,
eight CWEs are among the 2023 CWE Top-25, highlighting their severity. Our
findings confirm that developers should be careful when adding code generated
by Copilot and should also run appropriate security checks as they accept the
suggested code. It also shows that practitioners should cultivate corresponding
security awareness and skills. | Yujia Fu, Peng Liang, Amjed Tahir, Zengyang Li, Mojtaba Shahin, Jiaxin Yu, Jinfu Chen | 2023-10-03T14:01:28Z | http://arxiv.org/abs/2310.02059v2 | # Security Weaknesses of Copilot Generated Code in GitHub
###### Abstract.
Modern code generation tools use AI models, particularly Large Language Models (LLMs), to generate functional and complete code. While such tools are becoming popular and widely available for developers, using these tools is often accompanied by security challenges, leading to insecure code merging into the code base. Therefore, it is important to assess the quality of the generated code, especially in terms of its security. Researchers have recently explored various aspects of code generation tools, including security. However, many open questions about the security of the generated code require further investigation, especially the security issues of automatically generated code in the wild. To this end, we conducted an empirical study by analyzing the security weaknesses in code snippets generated by GitHub Copilot that are found as part of publicly available projects hosted on GitHub. The goal is to investigate the types of security issues and their scale in real-world scenarios (rather than crafted scenarios). Accordingly, we identified 435 code snippets generated by GitHub Copilot from publicly available projects. We then conducted extensive security analysis to identify Common Weakness Enumeration (CWE) instances in these code snippets. The results show that (1) 35.8% of Copilot generated code snippets contain CWEs, and those issues are spread across multiple languages, (2) the security weaknesses are diverse and related to 42 different CWEs, among which _CWE-78: OS Command Injection_, _CWE-330: Use of Insufficiently Random Values_, and _CWE-703: Improper Check or Handling of Exceptional Conditions_ occurred the most frequently, and (3) among the 42 CWEs identified, 11 of those belong to the currently recognized 2022 CWE Top-25. Our findings confirm that developers should be careful when adding code generated by Copilot (and similar AI code generation tools) and should also run appropriate security checks as they accept the suggested code. It also shows that practitioners should cultivate corresponding security awareness and skills.
Code Generation, Security Weaknesses, CWEs, GitHub Copilot
## 1. Introduction

Large Language Models (LLMs) have recently demonstrated strong performance on natural language processing tasks such as language generation, text classification, and question-answering systems (Dong et al., 2017). Compared to previous deep learning methods, the latest developments in LLMs, such as Generative Pre-trained Transformer (GPT) models, have opened up new opportunities to address the limitations of existing automated code generation technology (Zhu et al., 2018). Code generation tools based on LLMs are now widely applied, such as Codex by OpenAI (Zhu et al., 2018), AlphaCode by DeepMind (Deng et al., 2018), and CodeWhisperer by Amazon (Bahdan et al., 2018).
These models are trained on billions of lines of public open-source code, which include code with unsafe coding patterns (Dong et al., 2017). Code generation tools based on such models can therefore pose security risks, and the code they generate may contain security weaknesses. For example, GitHub Copilot may produce insecure code, as its underlying model Codex is pre-trained on untrusted data from GitHub (Dong et al., 2018), which is known to contain buggy programs (Zhu et al., 2018). In addition, vulnerable code generated by these tools may in turn be used to train later models, which then generate more vulnerable code, leading to a vicious cycle. Previous research has studied code generation tools with a focus on the correctness of the results (Dong et al., 2018; Dong et al., 2018; Dong et al., 2018; Dong et al., 2018), and relatively less attention has been paid to security aspects (Zhu et al., 2018; Dong et al., 2018; Dong et al., 2018). To the best of our knowledge, potential security weaknesses in practical scenarios have not been fully considered and addressed in previous work, and GitHub Copilot clarifies that "_the users of Copilot are responsible for ensuring the security and quality of their code_" (Dong et al., 2018). GitHub also provides tools such as CodeQL to help developers scan for security issues in their code.
To this end, we conducted an empirical study on the security weaknesses of code generated by GitHub Copilot and used in projects hosted on GitHub. We chose Copilot as our research subject because it is a commercial instance of AI-assisted programming and has gained much attention and popularity among developers since its launch in 2021. The security weaknesses of code generated by Copilot have also gained attention in the research and practice community. Furthermore, thousands of developers in the GitHub community have shared their experiences of using Copilot in real-world systems (Dong et al., 2018). We collected code generated by Copilot that has been used in projects on GitHub and analyzed the security of the generated code through the lens of a real-world production environment. We then used static analysis tools to perform security analysis on the collected code snippets and classified the security weaknesses in the code snippets using the Common Weakness Enumeration (CWE).
**Our findings show that**: (1) 35.8% of Copilot generated code snippets have security weaknesses, and security weaknesses arise regardless of the programming language used; (2) the security weaknesses are diverse and related to 42 different CWEs, in which _CWE-78: OS Command Injection_, _CWE-330: Use of Insufficiently Random Values_, and _CWE-703: Improper Check or Handling of Exceptional Conditions_ occurred most frequently; and (3) among the 42 CWEs identified, 11 CWEs belong to the currently recognized 2022 CWE Top-25.
**The contributions of this work**: (1) We curated a dataset of code snippets generated by Copilot that have been used in projects on GitHub (the curated dataset (Dong et al., 2018) is made available online for future research in this area) and conducted security checks on them, which can to some extent reflect the frequency of security weaknesses developers encounter when using Copilot to generate code in actual coding; (2) We extensively checked all possible CWEs in the code snippets and analyzed them. This can help developers understand the common CWEs caused by using Copilot to generate code in actual coding and how to accept the code suggestions provided by Copilot safely.
The rest of this paper is structured as follows: Section 2 presents the related work. Section 3 presents the research questions and the research design of this study. Section 4 presents our study results, which are further discussed in Section 5. The potential threats to validity are clarified in Section 6. Section 7 concludes this work with future work directions.
## 2. Related Work
### AI-assisted Code Generation Tools
With the rise of code generation tools integrated with IDEs, many studies have evaluated these code generation systems based on transformer models to better understand their effectiveness in real-world scenarios. Previous research mainly focused on whether the code generated by these tools can meet users' functional requirements. Yetistiren _et al._(Yetistiren et al., 2018) evaluated the effectiveness, correctness, and efficiency of the code generated by GitHub Copilot, and the results showed that GitHub Copilot could generate valid code with a success rate of 91.5%, making it a promising tool. Sobania _et al._(Sobania et al., 2018) evaluated the correctness of the code generated by GitHub Copilot and compared the tool with an automatic program generator with a Genetic Programming (GP) architecture. They concluded there was no significant difference between the two methods on benchmark problems. Nguyen and Nadi (Nguyen and Nadi, 2018) conducted an empirical study using 33 LeetCode problems and created queries for Copilot in four different programming languages. They evaluated the correctness and comprehensibility of the code suggested by Copilot by running tests provided by LeetCode. They found that Copilot's suggestions have lower complexity. Burak _et al._(Burak et al., 2018) evaluated the code quality of AI-assisted code generation tools (GitHub Copilot, Amazon CodeWhisperer, and ChatGPT). They compared the improvements between the latest and older versions of Copilot and CodeWhisperer and found that the quality of generated code had improved.
In recent years, researchers have also started to focus on the experience of developers when using AI-assisted code generation tools and how the tools can improve productivity by observing their behavior. Vaithilingam _et al._(Vaithilingam et al., 2018) studied how programmers use and perceive Copilot, and they found that while Copilot may not necessarily improve task completion time or success rate, it often provides a useful starting point. They also noted that participants faced difficulties in understanding, editing, and debugging the code snippets generated by Copilot. Barke _et al._(Barke et al., 2018) presented the first theoretical analysis of how programmers interact with Copilot based on the observations of 20 participants. Sila _et al._(Sola et al., 2018) conducted an empirical study on AlphaCode, identifying similarities and performance differences between code generated by code generation tools and code written by human developers. They argued that software developers should check the generated code for potentially problematic code that could introduce performance weaknesses.
These works above conducted relatively extensive evaluations of code generation tools in terms of correctness, effectiveness, and robustness. However, there is still room for improvement regarding its security, as detailed in the following section.
### Security of Code Generation Techniques and LLMs
Code security is an issue that cannot be ignored in the software development process. Recent work has primarily focused on evaluating the security of the code generation tools and the security of the LLMs that these tools are based on.
Pearce _et al._(Pearce et al., 2017) first evaluated the security of GitHub Copilot in generating programs by identifying known weaknesses in the suggested code. The authors prompted Copilot to generate code for 89 cybersecurity scenarios and evaluated the weaknesses in the generated code. They found that 40% of the suggestions in the relevant context contained security-related bugs (i.e., CWE classification from MITRE (Khoury et al., 2017)). Siddiq _et al._(Siddiq et al., 2017) conducted a large-scale empirical study on code smells in the training set of a transformer-based Python code generation model and investigated the impact of these harmful patterns on the generated code. They observed that Copilot introduces 18 code smells, including non-standard coding patterns and two security smells (i.e., code patterns that often lead to security defects). Khoury _et al._(Khoury et al., 2017) studied the security of the source code generated by the ChatGPT chatbot based on LLMs, and they found that ChatGPT was aware of potential weaknesses but still frequently generated some non-robust code.
Several researchers also compared the situation where code generation tools produce insecure code with that of human developers. Sandoval _et al._(Sandoval et al., 2017) conducted a security-driven user study, and their results showed that AI-assisted programming produced critical security errors at a rate no more than 10% higher than that of the control group, indicating that the use of LLMs does not introduce substantial new security risks. Asare _et al._(Asare et al., 2017) conducted a comparative empirical analysis of these tools and language models from a security perspective and investigated whether Copilot is as bad as humans in generating insecure code. They found that while Copilot performs differently across vulnerability types, it is not as bad as human developers when it comes to introducing vulnerabilities in code. In addition, researchers have also constructed datasets to test the security of these tools. Tony _et al._(Tony et al., 2017) proposed LLMSecEval, a dataset containing 150 natural language prompts that can be used to evaluate the security performance of LLMs. Siddiq _et al._(Siddiq et al., 2017) provided a dataset, SecurityEval, for testing whether a code generation model has weaknesses. The dataset contains 130 Python code samples.
Unlike the works above, we studied the security weaknesses exhibited by code generation tools in a real-world production environment (i.e., GitHub). We collected code snippets from GitHub generated by developers using Copilot in daily production as a source of research data, whereas in the Pearce _et al._(Pearce et al., 2017) study, the research data came from code generated by the authors using Copilot based on the natural language prompts related to high-risk network security weaknesses. In addition to this, Pearce et al. configured CodeQL only to examine CWEs targeted by security weaknesses associated with the prompted scenarios. In contrast, we used various static analysis tools to examine all types of CWEs and analyze them extensively. Our research results may help developers understand what common CWEs are prone to result from using Copilot to generate code in real coding.
### Security Static Analysis
Vulnerability detection is critical to improve software security and ensure quality (Kaut et al., 2017). There are two commonly used methods for vulnerability detection in source code: static and dynamic code analysis. Dynamic analysis techniques are more sound and precise but lack coverage (Khoury et al., 2017). On the other hand, static analysis is less precise but offers greater coverage and allows analyzing programs without the need to execute them (Siddiq et al., 2017). Static analysis has been widely used to find security issues in code, given that it is cheaper to run and can conduct whole-program analyses without executing the program (Kaut et al., 2017). OWASP (Kaut et al., 2017) provides a list of commonly used static analysis tools. This includes tools like CodeQL: a general-purpose automatic scanning tool, FindBugs: a tool for Java programs, ESLint: a tool for JavaScript programs, Bandit: a tool for Python programs, and GoSec: a tool for Go programs. Such tools have been widely used in previous security analysis research (Pearce et al., 2017; Siddiq et al., 2017; Siddiq et al., 2017).
Kaut _et al._(Kaut et al., 2017) compared static analysis tools for vulnerability detection in scanning C/C++ and Java source code. Tomasdottir _et al._(Tomasdottir et al., 2017) conducted an empirical study on ESLint, the most commonly used JavaScript static analysis tool among developers. Pearce _et al._(Pearce et al., 2017) used CodeQL for security weakness scanning of generated Python and C++ code. Siddiq _et al._(Siddiq et al., 2017) used Bandit to check Python code generated using a test dataset.
These static analysis tools support different analysis algorithms and techniques. By using multiple tools for analysis, potential weaknesses in the code can be discovered from different perspectives and levels, avoiding omissions and improving the accuracy of the analysis. Our study first used CodeQL to scan the collected code snippets. CodeQL is an open-source tool that supports multiple languages, including Java, JavaScript, C++, C#, and Python. It can find weaknesses in a codebase based on a set of known weaknesses/rules. In addition, to obtain more comprehensive scan results, we supplemented the scan of code in different languages with static analysis tools (e.g., Cppcheck and Bandit) tailored to specific languages.
## 3. Research Design
In this section, we describe our research design in detail. In Section 3.1, we first define our Research Questions (RQs), followed by the process of collecting and filtering the code snippets generated by Copilot in Section 3.2. We then explain the security analysis performed on the identified snippets and the process of filtering the raw results generated by static analysis tools in Section 3.3.
### Research Goal and Questions
This empirical study aims to understand the potential security weaknesses in Copilot generated code. We first collected code snippets generated by Copilot from GitHub projects as the data source for our research. It should be noted that it is not possible to access all the code generated by Copilot in GitHub projects, as there is no direct way to identify whether part of a file was generated by Copilot (i.e., source files do not contain any signatures to indicate whether Copilot generated the code). However, we can identify many code snippets by searching through the repository description and the comments provided in the code (details provided in Section 3.2.2). We then analyzed the identified snippets for security weaknesses. We aim to help developers and researchers understand common security weaknesses when using Copilot, without focusing on whether the code generation aspects are used correctly.
We conducted this empirical study following the guidelines proposed in (Copilot et al., 2018). The RQs, their rationale, and the research process of this study (see Fig. 1) are detailed in the subsections below.
**RQ1. How secure is the code generated by Copilot in GitHub Projects?**
**Rationale:** Copilot may produce code suggestions that developers accept, but these suggestions may include security weaknesses that could make the program vulnerable. The answer to RQ1 helps to understand the frequency of security weaknesses developers encounter when using Copilot in production.
**RQ2. What security weaknesses are present in the code snippets generated by Copilot?**
**Rationale:** Copilot generated code may contain security weaknesses (Krishnan et al., 2017), and developers should conduct a rigorous security review before accepting the code generated by Copilot. As clarified in the documentation of GitHub Copilot, "_the users of Copilot are responsible for ensuring the security and quality of their code_" (Krishnan et al., 2017). The answer to RQ2 can help developers better understand possible security weaknesses in the code generated by Copilot, thereby enabling them to prevent and fix these weaknesses more effectively.
**RQ3. How many security weaknesses belong to the MITRE CWE Top-25?**
**Rationale:** The MITRE list contains the top 25 most dangerous security weaknesses. The answer to RQ3 can help developers understand whether the code generated by Copilot contains widely recognized types of security weaknesses and Copilot's ability to handle these recent and common weaknesses.
### Data Collection and Filtering
We chose GitHub as the primary data source for answering our RQs. GitHub is widely used by developers. As the world's largest code hosting platform, GitHub contains millions of public code repositories and offers access to a large number of code resources, allowing us to cover multiple programming languages and project types in our study. We used code snippets generated by Copilot on GitHub as our research object to analyze the relevant security weaknesses of Copilot. Our scripts and dataset are provided online in our replication package (Krishnan et al., 2017).
#### 3.2.1. Code Snippets Collection
**Step 1.** To collect code snippets generated by Copilot, we first conducted a pilot search to formulate our search keywords. Initially, we used "GitHub Copilot" and "Copilot" as our search keywords. As expected, we found that the term "Copilot" not only refers to the code generation tool launched by GitHub but also to tools in the aviation or telemetry fields. Therefore, using the keyword "Copilot" alone may return content that is not related to the use of the tool. On the other hand, using "GitHub Copilot" as a search keyword can exclude content unrelated to the code generation tool Copilot and narrow the search scope, which is what we have used to locate the code snippets.

However, even with this basic search keyword, we still needed to carefully filter the search results to ensure they are truly related to GitHub Copilot. Although using "GitHub Copilot" as a search keyword increases the relevance of the results to Copilot, these results are not necessarily code snippets generated by Copilot. It should be noted that many code snippets in the search results contain the "GitHub Copilot" keyword merely as text. Developers may use it to describe their experience of using Copilot to generate code or to showcase information related to Copilot. These code snippets are not what we need because they do not directly relate to the code generated by Copilot. Our target is code generated by Copilot, not code snippets that merely contain the keyword "Copilot".
Our observations from the pilot search showed that using keywords such as "by GitHub Copilot", "use GitHub Copilot", and "with GitHub Copilot" can improve the accuracy of the search results. These keywords enable us to focus more on the code generated using Copilot rather than code snippets that contain other content related to Copilot. In addition, since our goal is to use automated analysis tools to perform security scans on the collected code snippets, we further limited the types of code snippets during the search to Python, JavaScript, Java, C++, C#, and Go. These are the mainstream languages supported by Copilot, and also the languages supported by CodeQL. We collected the _Code_ parts from these search results. Considering that some projects declare using GitHub Copilot generated code in their README files or project description provided in GitHub, we decided to retain the results from the _Repository_ label in the search results. Fig. 2 shows an example of our search process.
Table 1 reports the search terms we used and the number of search results obtained from GitHub. In this step, we collected a total of 8,004 results, of which 7,749 were from the _Code_ label, and 255 were from the _Repository_ label. The same search result may contain multiple keywords, meaning there are duplicate projects in the collected data. After removing duplicate projects, we obtained a total of 4249 search results, of which 4081 were from the _Code_ label, and 168 were from the _Repository_ label. Table 2 shows the number of different language types of search results we obtained from the _Code_ label.
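For illustration, the search in Step 1 can also be scripted. The sketch below queries GitHub's code search API with our three search terms and the six language qualifiers. It is a minimal sketch, not the tooling used in the study (we used GitHub's web search interface): it assumes a personal access token exported as GITHUB_TOKEN, and the 10-page cap reflects the search API's 1,000-result limit.

```python
import os
import time
import requests

# Minimal sketch: query GitHub's code search API for our search terms.
# Code search requires authentication, so a token is read from GITHUB_TOKEN.
SEARCH_TERMS = ['"by GitHub Copilot"', '"use GitHub Copilot"', '"with GitHub Copilot"']
LANGUAGES = ["Python", "JavaScript", "Java", "C++", "C#", "Go"]

def search_code(term: str, language: str, page: int) -> dict:
    resp = requests.get(
        "https://api.github.com/search/code",
        headers={
            "Accept": "application/vnd.github+json",
            "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
        },
        params={"q": f"{term} language:{language}", "per_page": 100, "page": page},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

results = []
for term in SEARCH_TERMS:
    for lang in LANGUAGES:
        for page in range(1, 11):  # the search API returns at most 1,000 results
            data = search_code(term, lang, page)
            results.extend(data["items"])
            if page * 100 >= data["total_count"]:
                break
            time.sleep(6)  # stay under the code-search rate limit
print(f"collected {len(results)} raw code results")
```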
#### 3.2.2. Filtering Code Snippets
**Step 2**. After obtaining the results from the keyword searches, we further filtered them by not only considering the accuracy of the keywords but also investigating the project's documentation, code comments, and other metadata in the search results to determine whether they were generated by GitHub Copilot. Additionally, since we wanted to obtain code snippets used in real-world projects, we excluded search results used to solve simple algorithmic problems on platforms, such as
LeetCode, which generally involve simple code and may not involve security weaknesses.

| # | Search Term | # Code | # Repositories |
|---|---|---|---|
| 1 | "By GitHub Copilot" | 2549 | 54 |
| 2 | "Use GitHub Copilot" | 1822 | 77 |
| 3 | "With GitHub Copilot" | 3378 | 127 |
| **Total** | | **7749** | **255** |

Table 1. Search results based on different terms used
We begin by explaining the terminology used in data filtering: the search results under the _Repository_ label are the projects that contain code files, and the search results under the _Code_ label are individual code files. Those code files contain Copilot generated code snippets. In filtering the projects, we followed three rules: (1) for search results under the _Repository_ label, we identified projects that are fully generated by Copilot, as declared in the projects description or the associated README file(s). We retained code files for Python, JavaScript, Java, C++, C#, and Go, which are the main languages supported by Copilot. (2) For search results under the _Code_ label, we retained code files with comments showing the code generated by Copilot. (3) As we mentioned above, we then excluded code files used to solve simple algorithm problems. We provide examples for the three rules in Figs. 3, 4 and 5. As shown in the example in Fig. 3 for the _Repository_ label, we kept all the Python files. In the next example in Fig. 4, we kept the entire file where the Copilot generated code snippet was located. In Fig. 5, the code snippet was removed as it was determined the code just solved a simple algorithmic problem. Meanwhile, for the code files retained under the _Repository_ label, we consider the entire code file as code generated by Copilot. In other words, we assume that all code in the file is generated by Copilot because it was stated in the README file that it was all generated by Copilot. For code files retained under the _Code_ label, we know that the files contain code snippets, perhaps even just a few lines of code, generated by Copilot. Instead of identifying the specific Copilot generated code in this step, we combine the warning messages from the security scan and the code comments in the file to determine whether Copilot generates the code snippet with the security problem (this process is explained further in Section 3.3.2).
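To make rule (2) concrete, the sketch below shows a simple heuristic for spotting files whose comments declare Copilot generated code. The regular expression is illustrative only; in the study, this judgment was made manually by reading the project documentation and code comments.

```python
import re

# Simplified sketch of filtering rule (2): keep a code file only if a comment
# explicitly marks a snippet as generated by Copilot. The regex is an
# illustrative heuristic, not the actual (manual) procedure used in the study.
COPILOT_COMMENT = re.compile(
    r"(#|//|/\*)\s*.*(generated (by|with)|written (by|with)) github copilot",
    re.IGNORECASE,
)

def has_copilot_marker(path: str) -> bool:
    with open(path, encoding="utf-8", errors="ignore") as f:
        return any(COPILOT_COMMENT.search(line) for line in f)
```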
After completing the pilot data labeling, the first author checked the rest of the search results, and obtained a total of 465 code snippets. After removing duplicate results, we finally obtained 435 different code snippets. Among them, 249 are from the _Repository_ label, and 186 are from the _Code_ label. Table 3 shows the types and numbers of code snippets obtained.

| # | Language | # Code results |
|---|---|---|
| L1 | Python | 784 |
| L2 | JavaScript | 863 |
| L3 | Java | 641 |
| L4 | C++ | 437 |
| L5 | C# | 386 |
| L6 | Go | 970 |
| | **Total** | **4081** |

Table 2. Search results from GitHub

| # | Language | # Code Snippets: Repository | # Code Snippets: Code | Total |
|---|---|---|---|---|
| L1 | Python | 132 | 119 | 251 |
| L2 | JavaScript | 51 | 28 | 79 |
| L3 | Java | 25 | 18 | 43 |
| L4 | C++ | 14 | 12 | 26 |
| L5 | Go | 19 | 1 | 20 |
| L6 | C# | 8 | 8 | 16 |
| | **Total** | **249** | **186** | **435** |

Table 3. Code snippets from GitHub

Figure 1. Overview of the research process

Figure 2. Example of the search process
### Data Pre-processing and Analysis
#### 3.3.1. Data Pre-processing
**Step 3**. CodeQL is a scalable static security analysis tool that is widely used in practice; it enables users to analyze code and detect relevant weaknesses using predefined queries and test suites, and it supports multiple languages (including Java, JavaScript, C++, C#, Go, and Python (Krishnan et al., 2017)). Before using CodeQL to scan the identified code snippets for security weaknesses, we needed to create a CodeQL database for the source code. For interpreted languages like Python and JavaScript, the source code can be analyzed directly, while for compiled languages such as Java, the source code needs to be compiled first and then imported into the CodeQL database. Therefore, we first compiled the code snippets of all compiled languages (i.e., C#, Java, C++, and Go). We removed any code snippets that could not be compiled. For successfully compiled files, we generated the CodeQL database required for queries. For the interpreted languages Python and JavaScript, we stored 20 files in each database to improve efficiency: generating a single database for an exceptionally large number of files increases the database compilation and scanning time far more than partitioning the files into small databases. In total, we obtained 80 code databases available for CodeQL scanning. Table 4 shows the types and numbers of files in each database.
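As an illustration of this step, the sketch below batches Python snippets into groups of 20 and builds one CodeQL database per batch by invoking the CodeQL CLI. The directory layout and batch naming are hypothetical; the `codeql database create` invocation follows the CLI's documented usage for interpreted languages, which need no build step.

```python
import subprocess
from pathlib import Path

# Minimal sketch: build CodeQL databases for batches of interpreted-language
# files. The 20-files-per-database batching mirrors the setup described above;
# directory names are hypothetical.
BATCH_SIZE = 20

def create_database(source_dir: Path, db_dir: Path, language: str) -> None:
    # For Python/JavaScript, CodeQL extracts the sources directly (no build).
    subprocess.run(
        [
            "codeql", "database", "create", str(db_dir),
            f"--language={language}",
            f"--source-root={source_dir}",
        ],
        check=True,
    )

snippets = sorted(Path("snippets/python").glob("*.py"))
for i in range(0, len(snippets), BATCH_SIZE):
    batch_id = i // BATCH_SIZE
    batch_dir = Path(f"batches/python/batch_{batch_id:02d}")
    batch_dir.mkdir(parents=True, exist_ok=True)
    for f in snippets[i:i + BATCH_SIZE]:
        (batch_dir / f.name).write_bytes(f.read_bytes())
    create_database(batch_dir, Path(f"dbs/python/db_{batch_id:02d}"), "python")
```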
#### 3.3.2. Data Analysis
**Step 4**. We used well-known automated static analysis tools listed by OWASP (Krishnan et al., 2017) to scan the collected code snippets. Since different static analysis tools may use different algorithms and rules to detect security weaknesses, using multiple tools can increase our chances of discovering security issues in the code. To improve the coverage and accuracy of the results, we used two static analysis tools for security checks on each code snippet (i.e., CodeQL plus a dedicated tool for the specific language).
We first used CodeQL to analyze the code in our dataset. The default query suite for the standard CodeQL query package is codeql-suites/<lang>-code-scanning.qls. There are several useful query suites in the _codeql-suite_ directory of each package. For example, the codeql/cpp-queries package contains the following query suites (Krishnan et al., 2017):
* cpp-code-scanning.qls, which is the standard code scanning query suite for C++. It covers various features and syntax of C++ and aims to discover common weaknesses in the code.
* cpp-security-extended.qls, which includes more advanced queries than _cpp-code-scanning.qls_ and can detect more security weaknesses.
* cpp-security-and-quality.qls, which combines queries related to security and quality, covering various aspects of C++ development from basic code structure and naming conventions to advanced security and performance weaknesses. It aims to help developers improve the security and quality of their code.
In this study, we scanned code snippets using the <language>-security-and-quality.qls test suite related to security weaknesses. These test suites check for multiple security properties and cover many CWEs. For example, the python-security-and-quality.qls test suite for Python provides 168 security checks, the JavaScript test suite provides 203 security checks, and the C++ test suite provides 163 security checks. As the query reports only provide the name and description of the security issues, we manually matched the results in the query reports with the corresponding CWE IDs.
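For completeness, the sketch below shows how such a suite can be run against one of our databases, assuming the CodeQL CLI and the standard query packs (e.g., codeql/python-queries) are installed; the paths are hypothetical. The SARIF output records, for each alert, the rule, message, and affected lines, which we later matched to CWE IDs.

```python
import subprocess
from pathlib import Path

# Minimal sketch: run the <language>-security-and-quality.qls suite against a
# previously created CodeQL database and write a SARIF report.
def analyze(db_dir: str, language: str, output: str) -> None:
    Path(output).parent.mkdir(parents=True, exist_ok=True)
    subprocess.run(
        [
            "codeql", "database", "analyze", db_dir,
            f"codeql/{language}-queries:codeql-suites/{language}-security-and-quality.qls",
            "--format=sarif-latest",
            f"--output={output}",
        ],
        check=True,
    )

analyze("dbs/python/db_00", "python", "results/python_db_00.sarif")
```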
We then selected other popular static security analysis tools for each programming language we analyzed: Bandit for Python, ESLint for JavaScript, Cppcheck for C++, FindBugs for Java, Roslyn for C#, and Gosec for Go. In cases where we could not directly obtain the CWE ID related to a security issue from the scan results, we manually mapped the security attributes to the corresponding CWE for later analysis. We explain the specific correspondences in detail in Section 4.2.
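The sketch below illustrates this for Bandit, which (in recent releases) attaches a CWE ID to each finding in its JSON report; the snippet directory is hypothetical. For tools that do not emit CWE IDs, we mapped the warning messages manually (Section 4.2).

```python
import json
import subprocess

# Minimal sketch: run Bandit on a batch of Python snippets and pull out the
# CWE id that recent Bandit releases (>= 1.7.3) attach to each finding.
# Bandit exits non-zero when issues are found, so we do not use check=True.
proc = subprocess.run(
    ["bandit", "-r", "snippets/python", "-f", "json", "-q"],
    capture_output=True, text=True,
)
report = json.loads(proc.stdout)
for issue in report["results"]:
    cwe = issue.get("issue_cwe", {}).get("id")  # absent in older Bandit versions
    print(issue["filename"], issue["line_number"], issue["test_id"],
          f"CWE-{cwe}" if cwe else "no CWE id", issue["issue_text"])
```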
**Step 5**. We scanned code snippets from the _Repository_ and _Code_ labels, and we filtered the scan results before analyzing them. We first removed the scan results that were repeatedly prompted by two of the tools, then removed the results that were unrelated to the security issue, and finally confirmed that the Copilot generated code indeed caused the results related to the security issue. As we explained in Section 3.2.2, we considered the code snippet from the _Repository_ label to be the entire code file. Therefore, we kept the entire scan results from the _Repository_ label and counted all the security issues they suggested. For the code snippet obtained from the _Code_ label, we started by scanning the code file. If the static analysis tool found a security issue in the code, we located the code snippet in the file according to the line number of the security issue indicated by the scanning result. We determined whether it was generated by Copilot based on the comments before and after the code snippet. If Copilot indeed generated the code snippet with a security issue, we kept the scan result for our subsequent statistics. We further analyzed the filtered scan results in **Step 6**, detailed in
Section 4 according to the specific RQs. We provide our full dataset (including code snippets, full scan results, and filtered results) in our replication package (Krishnan et al., 2017).

| # | Language | # Databases: Repository | # Databases: Code |
|---|---|---|---|
| L1 | Python | 7 | 6 |
| L2 | JavaScript | 3 | 2 |
| L3 | Java | 8 | 18 |
| L4 | C++ | 5 | 12 |
| L5 | C# | 3 | 7 |
| L6 | Go | 13 | 1 |
| | **Total** | **39** | **46** |

Table 4. Databases for CodeQL scanning
## 4. Results
We present the results of three RQs formulated in Section 3.1 below. For each RQ, we first explain how we analyzed the collected code snippets to answer the RQ. We then provide a detailed presentation of the final results for each RQ.
### RQ1: How secure is the code generated by Copilot?
**Approach**. To answer this RQ, we collected 435 code snippets generated by Copilot from GitHub projects. These snippets cover six common programming languages. We used two static analysis tools (CodeQL + another language-dedicated tool) to scan and analyze the code snippets and then combine the results obtained from the two tools. The aim is to achieve a better coverage of security issues. Therefore, as long as one of the tools detected the presence of a security issue, the code snippet was considered vulnerable.
Figure 3. Example of rule 1: project is fully written by Copilot

Figure 4. Example of rule 2: files with comments showing the code generated by Copilot

In the analysis results obtained from the CodeQL tool, three types of warnings are used to describe the detected weaknesses: _Recommendation_, which provides suggestions for improving code quality; _Warning_, which alerts to potential weaknesses that could cause code to run abnormally or unsafely; and _Error_, which is the highest level of warning and indicates that the reported problem could cause the code to fail to compile or to run incorrectly. Since our research primarily focuses on security weaknesses, we only counted code snippets that had _warnings_ and _errors_, and we ignored the other code quality _recommendations_. For the scanning results from the _Code_ label, we also needed to identify whether the security issues obtained from the scan came from Copilot generated code snippets, based on the comment that appears before the method. We provide an example of filtering the scan results in Fig. 6. In **Step 1**, we went to the corresponding file to locate the specific code snippet based on the start and end lines of the scan result that indicated a security issue. In **Step 2**, we located the code at Line 103 and found no comment indicating that Copilot generated it. In **Step 3**, we found that the code snippet generated by Copilot in the file was located at Lines 226 to 239; we therefore determined that the Copilot generated code did not cause this security issue and discarded this scan result from further analysis. Finally, we aggregated the filtered results obtained using multiple analysis tools to calculate the number of code snippets with detected security issues.
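The location check in Steps 1-3 amounts to a simple interval test. The sketch below is a minimal illustration using the line numbers from the Fig. 6 example; in the study, the Copilot region boundaries were read off the surrounding comments manually.

```python
# Minimal sketch of the filtering in Fig. 6: keep an alert only if the lines
# it flags fall inside a region that comments mark as Copilot generated. The
# region boundaries here are illustrative.
def alert_is_in_copilot_code(alert_start, alert_end, copilot_regions):
    """copilot_regions: list of (first_line, last_line) tuples, 1-indexed."""
    return any(first <= alert_start and alert_end <= last
               for first, last in copilot_regions)

# Example from Fig. 6: the Copilot snippet spans lines 226-239, so an alert
# at line 103 is discarded while one inside the span is kept.
assert not alert_is_in_copilot_code(103, 103, [(226, 239)])
assert alert_is_in_copilot_code(230, 233, [(226, 239)])
```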
**Results**. Table 5 shows the number of code snippets for each language and the number and percentage of code snippets with security weaknesses. From the statistical results, we found that out of the 435 code snippets generated by Copilot, 35.8% have security weaknesses, regardless of the programming language. There is a higher proportion of security weaknesses in Python and JavaScript code, which are the most popular languages developers use with Copilot (Shen et al., 2018). Out of the 251 Python code snippets, 39.4% have security weaknesses. Among the 79 JavaScript code snippets we collected, 27.8% have security weaknesses. Among all programming languages, C++ code snippets have the highest proportion of security weaknesses, reaching 46.1%. Go also has a relatively high proportion of security weaknesses, at 45.0%. In comparison, the proportion of files with security issues is lower for C# and Java code, at 25% and 23.2%, respectively.
### RQ2: What security weaknesses are present in the code snippets generated by Copilot?
**Approach**. To answer RQ2, we processed the results of the scans conducted for RQ1, eliminating duplicate security issues detected at the same code snippet location. In total, we identified 600 security weaknesses across 435 code snippets. Table 6 shows the number of security weaknesses found in code files of different programming languages.
For each code snippet, we used CWEs to classify the security issues identified by the static analysis. Each CWE has a unique ID and a set of related descriptions, including its potential impact and how to detect and fix the CWE (Wang et al., 2018). Some static analysis tools we used, such as _Bandit_ and _Gosec_, provide a CWE ID corresponding to the detected security issues in their scan results. For other scan
tools that do not directly give a CWE ID, such as CodeQL, ESLint, and FindBugs, we manually associated the provided security issue information with a CWE ID, which is detailed below. The QL queries used in CodeQL often point to a specific CWE, and the scanning process typically displays the CWE ID associated with the QL query. Although only the name of the QL query is shown in the scan results, we can manually correlate the name field with the CWE ID. For example, if the result shows "_Hard-coded credential in API call_", and we know that the corresponding hard-coded credentials query belongs to CWE-798, this security issue can be mapped to the same CWE (798). In addition, while _FindBugs_ and _ESLint_ also identify security issues, their scan results only provide descriptions of the security issues. We manually associated the descriptions of the security issues in the results with relevant CWE descriptions to determine the specific CWE category to which these security issues belong. Initially, two authors independently matched each description of a security issue with a CWE ID. In case of disagreement, a discussion was initiated between the two authors, and one other author (a security expert) was then involved to provide his assessment. This process continued until all the descriptions of the security issues in the results were matched with CWE IDs. Table 7 shows the list of manually matched CWE IDs and warning messages from CodeQL, ESLint, and FindBugs results. In the final stage, we performed a statistical analysis of CWE weaknesses in the 156 code snippets that contained security weaknesses.

| Language | # Snippets containing security weaknesses | # Total security weaknesses |
|---|---|---|
| Python | 99 | 352 |
| JavaScript | 22 | 93 |
| Java | 10 | 56 |
| C++ | 12 | 70 |
| Go | 9 | 18 |
| C# | 4 | 11 |
| **Total** | **156** | **600** |

Table 6. The number of security weaknesses in code snippets generated by Copilot

Figure 5. Example of rule 3: files used to solve simple algorithm problems
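Conceptually, the agreed-upon matches act as a lookup table from (tool, warning message) pairs to CWE IDs. The sketch below illustrates this with a few rows taken from Table 7; the function name is hypothetical.

```python
# Illustrative sketch of the manual mapping: warning messages from tools that
# do not report CWE ids are looked up in a table agreed on by two authors
# (example rows taken from Table 7).
WARNING_TO_CWE = {
    ("CodeQL", "Reflected server-side cross-site scripting"): "CWE-79",
    ("CodeQL", "Clear-text logging of sensitive information"): "CWE-532",
    ("CodeQL", "Hard-coded credentials"): "CWE-798",
    ("ESLint", "Generic Object Injection Sink"): "CWE-502",
    ("FindBugs", "Possible null pointer dereference"): "CWE-476",
}

def to_cwe(tool: str, message: str):
    # Unmatched messages were resolved by discussion with a security expert.
    return WARNING_TO_CWE.get((tool, message))
```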
**Results.** Table 8 shows the distribution of CWEs in the code snippets, including the number of code snippets that contain a certain CWE (Related Snippets) and the total number of occurrences (Frequency) of the CWE in the code snippets (we put those CWEs whose Frequency = 1 in "Others"). Note that one related code snippet may contain multiple instances of a specific CWE. In total, we found 600 CWEs in 435 code snippets. These security weaknesses were related to 42 types of CWE, indicating that developers face a variety of security weaknesses when using Copilot. _CWE-78: OS Command Injection_ is the most frequently occurring CWE, as it was detected in 15 code snippets (representing 14% of the security weaknesses), followed by _CWE-330: Use of Insufficiently Random Values_, _CWE-703: Improper Check or Handling of Exceptional Conditions_, _CWE-400: Uncontrolled Resource Consumption_ and _CWE-502: Deserialization of Untrusted Data_. Some CWEs appeared less frequently, such as _CWE-95: Eval Injection_ and _CWE-22: Improper Limitation of a Pathname to a Restricted Directory_.
Additionally, many CWEs occur with a probability of less than 1%, for example, _CWE-176: Improper Handling of Unicode Encoding_, _CWE-312: Cleartext Storage of Sensitive Information_, and _CWE-326: Inadequate Encryption Strength_. This indicates that the security issues that surface are closely related to the specific scenarios in which developers use Copilot, emphasizing the importance of maintaining vigilance and caution when programming.
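As a concrete illustration of the second most frequent weakness, _CWE-330_, the Python sketch below contrasts a token built with the non-cryptographic `random` module, the kind of pattern static analyzers flag, with one built with the `secrets` module; the token format is hypothetical.

```python
import random
import secrets
import string

ALPHABET = string.ascii_letters + string.digits

# CWE-330 pattern: random is a PRNG that is not cryptographically secure,
# so security-sensitive tokens built with it are predictable (Bandit flags
# this usage as B311).
weak_token = "".join(random.choice(ALPHABET) for _ in range(32))

# Safer alternative: the secrets module draws from a CSPRNG.
strong_token = "".join(secrets.choice(ALPHABET) for _ in range(32))
```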
Figure 6. Example of filtering scan results from the _Code_ label that are generated by Copilot
### RQ3: How many security weaknesses belong to the CWE Top-25?
**Approach.** The code in our collected dataset was generated between June 2021 and June 2023. To assess whether the security issues in Copilot generated code correspond to weaknesses that were widespread in this period, we chose the MITRE 2022 CWE Top-25 list (Wang et al., 2021) as our baseline. We then compared the CWEs obtained in RQ2 with the CWE Top-25.
**Results.** The distribution of the CWEs found compared to the MITRE list is shown in Table 9. The results show that the CWE weaknesses present in the code generated by Copilot belong to eleven CWE types included in the MITRE CWE Top-25 list. This means these are present issues and are currently among the most common and serious security weaknesses in practice. It is worth noting that 237 security issues present in the code snippets correspond to these 11 CWEs, while another 31 CWEs cover the remaining 363 security issues. This indicates that the CWE Top-25 weaknesses are also prevalent in the code generated by Copilot. Therefore, developers using Copilot must pay close attention to these weaknesses and take appropriate measures to prevent them before they are integrated into their codebase. At the same time, we can see that _CWE-78: OS Command Injection_ is the most frequently occurring weakness from the Top-25 security weaknesses, ranking sixth in the Top-25 list and first in our RQ2 results. Although _CWE-400: Uncontrolled Resource Consumption_ is ranked towards the end of the Top-25 list, it is one of the weaknesses with a high occurrence frequency. Some CWEs with a higher ranking in the Top-25 list do not appear frequently in Copilot generated code, such as _CWE-79: Cross-site Scripting_ and _CWE-89: SQL Injection_.
| Tool | Warning Message | CWE-ID |
|---|---|---|
| CodeQL | Reflected server-side cross-site scripting | CWE-79 |
| CodeQL | Flask app is run in debug mode | CWE-215 |
| CodeQL | Clear-text logging of sensitive information | CWE-532 |
| CodeQL | Clear-text storage of sensitive information | CWE-312 |
| CodeQL | Information exposure through an exception | CWE-209 |
| CodeQL | Request without certificate validation | CWE-295 |
| CodeQL | Assignment to constant | CWE-682 |
| CodeQL | Log injection | CWE-117 |
| CodeQL | Identical operands | CWE-570 |
| CodeQL | Incomplete string escaping or encoding | CWE-176 |
| CodeQL | DOM text reinterpreted as HTML | CWE-79 |
| CodeQL | Arbitrary file write during archive extraction ("Zip Slip") | CWE-22 |
| CodeQL | Hard-coded credential in API call | CWE-798 |
| CodeQL | Hard-coded credentials | CWE-798 |
| CodeQL | Uncontrolled data used in path expression | CWE-22 |
| CodeQL | Resource not released in destructor | CWE-416 |
| CodeQL | Missing Dispose call on local IDisposable | CWE-690 |
| CodeQL | Use of the return value of a procedure | CWE-252 |
| CodeQL | Dereferenced variable may be null | CWE-476 |
| ESLint | Generic Object Injection Sink | CWE-502 |
| ESLint | Function Call Object Injection Sink | CWE-20 |
| ESLint | Unsafe Regular Expression | CWE-20 |
| ESLint | Variable Assigned to Object Injection Sink | CWE-95 |
| FindBugs | Hardcoded constant database password | CWE-798 |
| FindBugs | Found reliance on default encoding | CWE-116 |
| FindBugs | Integral division result cast to double | CWE-682 |
| FindBugs | There is an apparent infinite loop | CWE-835 |
| FindBugs | Exceptional return value of java.io.File.mkdirs() ignored | CWE-252 |
| FindBugs | Possible null pointer dereference | CWE-476 |
| FindBugs | Database operate.readDatabase() may fail to close Connection | CWE-404 |

Table 7. The warning messages of the scan results by CodeQL, ESLint, and FindBugs manually matched with corresponding CWE IDs

| CWE-ID | # Related Code Snippets | Frequency of Specific CWE | Percentage |
|---|---|---|---|
| CWE-78 | 15 | 84 | 14.0% |
| CWE-330 | 34 | 81 | 13.5% |
| CWE-703 | 20 | 78 | 13.0% |
| CWE-398 | 11 | 60 | 10.0% |
| CWE-502 | 15 | 56 | 9.3% |
| CWE-400 | 22 | 50 | 8.3% |
| CWE-20 | 10 | 19 | 3.1% |
| CWE-252 | 2 | 13 | 2.1% |
| CWE-259 | 6 | 13 | 2.1% |
| CWE-404 | 3 | 13 | 2.1% |
| CWE-451 | 3 | 13 | 2.1% |
| CWE-462 | 2 | 13 | 2.1% |
| CWE-476 | 3 | 11 | 1.8% |
| CWE-690 | 3 | 9 | 1.5% |
| CWE-798 | 5 | 9 | 1.5% |
| CWE-561 | 4 | 8 | 1.3% |
| CWE-95 | 4 | 8 | 1.3% |
| CWE-22 | 4 | 6 | 1.0% |
| CWE-327 | 5 | 5 | <1% |
| CWE-532 | 3 | 5 | <1% |
| CWE-563 | 4 | 4 | <1% |
| CWE-605 | 4 | 4 | <1% |
| CWE-89 | 3 | 4 | <1% |
| CWE-295 | 1 | 3 | <1% |
| CWE-476 | 1 | 3 | <1% |
| CWE-775 | 2 | 3 | <1% |
| CWE-117 | 1 | 2 | <1% |
| CWE-209 | 1 | 2 | <1% |
| CWE-215 | 2 | 2 | <1% |
| CWE-416 | 1 | 2 | <1% |
| CWE-570 | — | 2 | <1% |
| CWE-664 | 1 | 2 | <1% |
| CWE-94 | 2 | 2 | <1% |
| CWE-79 | 1 | 2 | <1% |
| Others | — | 9 | 1.5% |
| **Total** | | **600** | |

Table 8. Distribution of CWEs in code snippets

| CWE-ID | Description | # Related Snippets | Frequency |
|---|---|---|---|
| CWE-78 | OS Command Injection | 15 | 84 |
| CWE-502 | Deserialization of Untrusted Data | 15 | 56 |
| CWE-400 | Uncontrolled Resource Consumption | 22 | 50 |
| CWE-20 | Improper Input Validation | 10 | 19 |
| CWE-798 | Use of Hard-coded Credentials | 5 | 9 |
| CWE-22 | Improper Limitation of a Pathname to a Restricted Directory | 4 | 6 |
| CWE-89 | SQL Injection | 3 | 4 |
| CWE-476 | NULL Pointer Dereference | 1 | 3 |
| CWE-416 | Use After Free | 1 | 2 |
| CWE-94 | Code Injection | 2 | 2 |
| CWE-79 | Cross-site Scripting | 1 | 2 |
| **Total** | | | **237** |

Table 9. The CWEs that belong to the 2022 CWE Top-25 list
## 5. Discussion
In this section, we explain the study results in Section 5.1 and then discuss their implications in Section 5.2.
### Interpretation of Results
**RQ1: How secure is the code generated by Copilot?**
Among the 435 code snippets generated by Copilot, we found that 35.8% of these code snippets contain security weaknesses. Those weaknesses appear in all six top-used programming languages supported by Copilot. Furthermore, when it comes to the occurrence of security issues in code snippets of different programming languages, it is important to analyze them in conjunction with the popularity of the languages (Levy et al., 2017). In code snippets written in languages like Python and JavaScript, which are frequently used with Copilot, there may be a slightly higher number of security issues. However, overall the proportion of security issues across these six languages ranges from 25% to 45%, showing no significant difference.
Besides, we also found that _CWE-502: Deserialization of Untrusted Data_ and _CWE-400: Uncontrolled Resource Consumption_ problems mainly appeared in code snippets written in Python and JavaScript. This could be attributed to certain features that made their code more flexible, such as dynamic typing and dynamic interpretation. Therefore, developers should pay special attention to the security of their JavaScript and Python-generated code, taking appropriate measures to validate input data and manage resources effectively to minimize security risks. The results of RQ1 suggest that in practical production, although Copilot can help developers write code faster and increase productivity, additional security assessments and fixes are also required to ensure that the generated code does not introduce potential security risks.
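To make the risk behind _CWE-502_ concrete, the following minimal Python sketch (our illustration, not taken from the studied Copilot suggestions) contrasts the unsafe deserialization pattern with a safer alternative:

```python
import json
import pickle

# Unsafe: pickle.loads can execute arbitrary code carried in attacker-
# controlled bytes, which is the core of CWE-502 (Deserialization of
# Untrusted Data).
def load_profile_unsafe(blob: bytes):
    return pickle.loads(blob)  # never use with untrusted input

# Safer: JSON parsing can only produce plain data structures, so a
# malicious payload cannot trigger code execution during loading.
def load_profile_safe(blob: bytes):
    return json.loads(blob.decode("utf-8"))
```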
**RQ2: What security weaknesses are present in the code snippets generated by GitHub Copilot?**
After conducting a security evaluation of 435 code snippets generated by Copilot, a total of 600 security weaknesses were identified, involving 42 CWE types, which is around 10% of the CWE types (439) catalogued for software development (Zhou et al., 2017). This may be because Copilot generates code in different programming languages and application scenarios, and a wide variety of application scenarios can lead to various types of security issues. In addition, since the Copilot base model (Codex) is trained on publicly available data that potentially contain various types of security weaknesses, multiple CWEs can surface in the code generated by Copilot. This set of 42 CWE types covers many types of security issues, and Table 10 shows the types of security issues that these 42 CWEs are relevant to.
The diversity of security weaknesses indicates that developers using Copilot face various security risks. These risks are diverse, covering different development environments and application scenarios. At the same time, it also reflects the inevitability of security weaknesses in Copilot generated code. Developers need to have corresponding security awareness and skills and take appropriate security measures to avoid these risks in a timely and targeted manner. In addition, we can see that developers often encounter _CWE-78: OS Command Injection_, _CWE-330: Use of Insufficiently Random Values_, _CWE-703: Improper Check or Handling of Exceptional Conditions_, _CWE-502: Deserialization of Untrusted Data_, and _CWE-400: Uncontrolled Resource Consumption_, which appear in multiple code snippets and have a high frequency of occurrence. This can remind developers to take timely and targeted security measures to mitigate these risks. For example, developers should perform adequate validation of user inputs. In addition, it is also necessary to restrict the program's permissions so that it only accesses essential resources. The results of RQ2 reveal the security weaknesses that developers may encounter in an actual production environment and their frequency of occurrence, which can help developers be aware of security aspects of code generated by Copilot and take appropriate measures to address the security weaknesses in an informed manner.
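As an illustration of the most frequent weakness, _CWE-78_, the minimal Python sketch below (our hypothetical example) shows how shell interpolation of user input enables command injection and how passing an argument list avoids it:

```python
import subprocess

# Vulnerable (CWE-78): interpolating user input into a shell string lets
# an attacker inject commands, e.g. filename = "a.txt; rm -rf ~".
def count_lines_unsafe(filename: str) -> str:
    return subprocess.check_output(f"wc -l {filename}", shell=True, text=True)

# Mitigated: arguments are passed as a list, so no shell parsing occurs
# and the filename is treated as a single literal argument.
def count_lines_safe(filename: str) -> str:
    return subprocess.check_output(["wc", "-l", filename], text=True)
```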
**RQ3: How many security weaknesses belong to the CWE Top-25?**
As shown in Table 9, eleven of the CWEs in Copilot generated code can be found in the 2022 CWE Top-25 list, covering 237 security issues (39.5% of the 600 identified CWE instances) in our dataset. This indicates that the commonly acknowledged top 25 weaknesses in software development, which are considered the most prevalent and dangerous, are also prevalent in the code generated by Copilot. Therefore, developers need to pay special attention to these frequently occurring weaknesses and take corresponding measures to avoid and fix them. We also observed that some weaknesses from the CWE Top-25 list were not detected in our scans, indicating that Copilot may sanitize and prevent specific weaknesses from being suggested to developers. GitHub is gradually enhancing the security of Copilot and its underlying model (Codex) (Han et al., 2017). We also identified security weaknesses of 31 CWE types in the code that do not belong to the CWE Top-25 list. Although these less common security weaknesses may not be as widespread as those in the CWE Top-25, attackers can still exploit them. For example, we only detected one instance of _CWE-732: Incorrect Permission Assignment for Critical Resource_ in our dataset. This security weakness is not commonly found in code and only occurs when specific users have certain permissions. However, it can lead to significant security risks when it does occur. Developers should also be aware of these less common security weaknesses to fully protect their code from attacks.
### Implications
**Code Snippets with Security Weaknesses:** In practical production, practitioners often use Copilot to generate code in six languages: Python, JavaScript, Java, C++, Go, and C#. These languages all inevitably produce security weaknesses.
\begin{table}
\begin{tabular}{l l} \hline
**Type of Security Issue** & **Relevant CWEs** \\ \hline Web security issue & CWE-79, CWE-94, CWE-690, CWE-732 \\ Access control issue & CWE-252, CWE-259, CWE-327, CWE-338 \\ Input validation and representation issue & CWE-20, CWE-93, CWE-116 \\ Command injection issue & CWE-78, CWE-563, CWE-855 \\ SQL injection issue & CWE-89, CWE-256 \\ File handling issue & CWE-22, CWE-570 \\ Insecure storage & CWE-302, CWE-775 \\ Improper error handling & CWE-398, CWE-400 \\ Encryption issue & CWE-327 \\ Memory management issue & CWE-416 \\ Buffer errors & CWE-476 \\ Insecure random number & CWE-330 \\ Incorrect type conversion & CWE-703 \\ \hline \end{tabular}
\end{table}
Table 10. The CWEs and types of security issues
We conjecture that practitioners using Copilot will likely encounter security weaknesses, regardless of the programming language used, and security checks are mandatory. When using Copilot, practitioners should conduct their own assessment of the generated code with the support of security analysis tools. They should exercise extreme caution when attempting to rely entirely on Copilot, especially for the most commonly used languages with Copilot: Python and JavaScript.
**Types of Security Weaknesses in Copilot Generated Code:** Practitioners using Copilot may encounter a variety of security weaknesses. Our results show that these weaknesses are related to over 40 CWEs. This finding indicates that there are diverse security scenarios in production, and practitioners must have the corresponding security awareness and skills and adopt multiple security prevention measures to address security risks so that they do not simply accept vulnerable code suggestions. At the same time, our study reveals the frequency of related CWEs. When using Copilot to generate code, practitioners should pay particular attention to specific weaknesses, such as _CWE-78: OS Command Injection, CWE-330: Use of Insufficiently Random Values, CWE-502: Deserialization of Untrusted Data, CWE-703: Improper Check or Handling of Exceptional Conditions_, and _CWE-400: Uncontrolled Resource Consumption_. Our findings can assist practitioners in proactively preventing and addressing security issues in a targeted manner.
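For instance, _CWE-330_ typically arises when a deterministic pseudo-random generator is used for security-sensitive values; a minimal Python sketch of the weakness and its standard fix (our example, not Copilot output):

```python
import random
import secrets

# CWE-330: `random` is a deterministic, predictable PRNG and must not be
# used to generate secrets such as session tokens.
def make_token_unsafe() -> str:
    return "".join(random.choice("0123456789abcdef") for _ in range(32))

# Fix: draw the token from a cryptographically secure randomness source.
def make_token_safe() -> str:
    return secrets.token_hex(16)  # 32 hex characters
```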
**The CWEs in Copilot Generated Code from the Top-25 CWE List:** Common security weaknesses in software development are also prevalent in code generated by Copilot. As a good practice, developers can use the CWE Top-25 list as a guide to understand which security weaknesses are most common and dangerous in the generated code and take appropriate measures to improve the code security. Additionally, the CWE Top-25 provides a standardized approach for security assessment, and developers can also use it to conduct security audits of the code generated by Copilot. Developers should also follow best practices and use code analysis tools (static, dynamic, or hybrid) to check the suggested code by Copilot (or any code generation tools) before integrating any code suggestions. Such tools can safeguard the code and help in discovering weaknesses early.
## 6. Threats to Validity
The validity threats are discussed according to the guidelines in (Stein et al., 2017). Note that we did not consider internal validity threats since we did not investigate any relationships between variables and results.
**Construct Validity** is the degree to which a measurement can explain the theoretical structure and characteristics of the measurement, reflecting the extent to which the studied operational measures truly represent the researcher's ideas and the content investigated based on the research questions. This study has three threats to the construct validity: _(1) Using the keyword-based search_ - We used a keyword-based search to collect relevant code snippets from GitHub. The results obtained through the keyword-based search may not cover all code snippets generated by Copilot on GitHub. We tried to mitigate this threat by constantly and iteratively refining the keywords and using synonyms. _(2) Manual data filtering_ - We manually screened the results obtained from the keyword-based search by analyzing the comments, tags, and other metadata of the code snippets to determine whether they were generated by Copilot. Since this process was manually done, it may have been influenced by personal bias. At the same time, we assumed that all code files contained in projects declared to be written by Copilot in markdown files were generated by Copilot, and we included them in the research data. This has an impact on the construct validity, as we could not have excluded the possibility of human-written code files in these projects. _(3) Manual association of CWEs_ - We manually associated some warning messages prompted by static analysis tools with CWEs. Some warning messages could be associated with multiple CWEs, while we only focused on assigning one most suitable CWE for each warning message. Personal bias may occur with this step, impacting the final association, and we mitigated the bias by two authors conducting the association with the assessment by a security expert.
**External Validity** refers to the extent to which research results can be generalized and the degree to which people outside the investigated cases are interested in the research results. It indicates whether the research results are representative and can be validated in similar contexts. Our dataset consists of Copilot generated code snippets collected from open-source projects on GitHub. During the filtering process, we excluded code that utilized Copilot to solve algorithmic problems, aiming to ensure that the collected data genuinely reflected real-world production environments. Because the data from GitHub is not diversified enough, there are many code snippets from game projects. The peculiarity of the data source may make the dataset incomplete, thereby threatening the external validity of the results. However, GitHub is one of the largest code hosting platforms in the world, with hundreds of millions of public code repositories, and is popular among developers and the technology community. The code snippets obtained from GitHub are diverse, which mitigates this threat. Furthermore, we acknowledge the need to collect more diverse code snippets from different platforms to increase the generalizability of the results, and we will consider adopting more diversified ways or platforms to collect code. Additionally, due to the limitations of static analysis tools themselves, these tools could not scan all CWEs, and there is a degree of false positives in the scanning results (as is the case with static analysis tools in general (Kraus et al., 2017; Wang et al., 2018)). Although we attempted to use two static analysis tools to increase the coverage of weaknesses, these tools may have limited abilities in analyzing some CWEs, with specific error rates, which may impact the completeness and correctness of the results.
**Reliability** refers to the extent to which a specific research method can produce consistent results. We used multiple automated static analysis tools to analyze the Copilot generated code snippets to improve security weaknesses detection. Developers have widely used these automated tools. The querying mechanism of these tools ensures that the scan results remain consistent when used multiple times. In addition, we performed two rounds of scanning with two tools for security checks on each code snippet, intending to complement the results of one tool with the other one. By implementing these measures, we believe that our research results are reliable and these threats to reliability are mitigated.
## 7. Conclusions
Automatic code generation and recommendation has been an active research area due to the advancement of AI and specifically LLMs. AI code generation tools such as Copilot can greatly improve the development efficiency of programmers, but they can also introduce vulnerabilities and security risks. In this paper, we present the results of an empirical study to analyze security weaknesses in Copilot generated code found in public GitHub projects. We identified 435 code snippets generated by Copilot from GitHub projects and analyzed those snippets for security weaknesses using static analysis tools. This study aims to help developers understand the security risks of weaknesses introduced in the code generated by Copilot (and potentially similar code generation tools). Our results show: (1) 35.8% of the 435 Copilot generated code snippets contain security weaknesses, spread across six programming languages. (2) The detected security weaknesses are diverse in nature and are associated with 42 different CWEs. The CWEs that occurred most frequently are _CWE-78: OS Command Injection_, _CWE-330: Use of Insufficiently Random Values_, and _CWE-703: Improper Check or Handling of Exceptional Conditions_. (3) Among these CWEs, 11 appear in the MITRE CWE Top-25 list, demonstrating their prevalence and severity. These are: _CWE-78: OS Command Injection_, _CWE-502: Deserialization of Untrusted Data_, _CWE-400: Uncontrolled Resource Consumption_, _CWE-89: SQL Injection_, _CWE-20: Improper Input Validation_, _CWE-22: Improper Limitation of a Pathname to a Restricted Directory_, _CWE-94: Code Injection_, _CWE-476: NULL Pointer Dereference_, _CWE-798: Use of Hard-coded Credentials_, _CWE-79: Cross-site Scripting_, and _CWE-416: Use After Free_.
In the future, we plan to: (1) collect additional code snippets from other open source repositories and industrial projects, and code snippets generated by newer releases of Copilot; (2) analyze and summarize the application scenarios of these code snippets, studying how practitioners use Copilot and fix the issues in development; and (3) compare the results with other emerging Generative AI code generation tools such as CodeWhisperer, aixcoder, and Code Llama.
|
2304.09093 | Improving Items and Contexts Understanding with Descriptive Graph for
Conversational Recommendation | State-of-the-art methods on conversational recommender systems (CRS) leverage
external knowledge to enhance both items' and contextual words' representations
to achieve high quality recommendations and responses generation. However, the
representations of the items and words are usually modeled in two separated
semantic spaces, which leads to misalignment issue between them. Consequently,
this will cause the CRS to only achieve a sub-optimal ranking performance,
especially when there is a lack of sufficient information from the user's
input. To address limitations of previous works, we propose a new CRS framework
KLEVER, which jointly models items and their associated contextual words in the
same semantic space. Particularly, we construct an item descriptive graph from
the rich items' textual features, such as item description and categories.
Based on the constructed descriptive graph, KLEVER jointly learns the
embeddings of the words and items, towards enhancing both recommender and
dialog generation modules. Extensive experiments on benchmarking CRS dataset
demonstrate that KLEVER achieves superior performance, especially when the
information from the users' responses is lacking. | Huy Dao, Dung D. Le, Cuong Chu | 2023-04-11T21:21:46Z | http://arxiv.org/abs/2304.09093v1 | # Improving Items and Contexts Understanding with Descriptive Graph for Conversational Recommendation
###### Abstract
State-of-the-art methods for conversational recommender systems (CRS) leverage external knowledge to enhance both items' and contextual words' representations to achieve high-quality recommendations and response generation. However, the representations of the items and words are usually modeled in two separated semantic spaces, which leads to a misalignment issue between them. Consequently, this causes the CRS to achieve only sub-optimal ranking performance, especially when there is a lack of sufficient information in the user's input. To address the limitations of previous works, we propose a new CRS framework KLEVER, which jointly models items and their associated contextual words in the same semantic space. Particularly, we construct an _item descriptive graph_ from the rich items' textual features, such as item description and categories. Based on the constructed descriptive graph, KLEVER jointly learns the embeddings of the words and items, towards enhancing both recommender and dialog generation modules. Extensive experiments on a benchmark CRS dataset demonstrate that KLEVER achieves superior performance, especially when the information from the users' responses is lacking.
## 1 Introduction
**Motivation and Problem.** Recently, recommender systems [1, 1, 2, 3, 4, 5] have been investigated extensively due to their practical benefits for industrial applications. Such information retrieval systems provide personalized recommendations to the users based on their historical interactions such as recorded ratings or clicking history. However, conventional recommender systems suffer from the cold-start problem wherein the systems need to recommend items to new users or the user's interactions are very scarce [1]. Besides, it is hard for static recommender systems to capture online user's preferences, since the user's interests are often dynamic and vary over time [1]. For those reasons, conversational recommender systems (CRSs) [1, 2, 3, 4] have gained considerable attention from both academic researchers and industry practitioners, thanks to their ability to offer interactive experience and recommend suitable items on the fly to the users. The goals of such systems are to produce relevant recommendations by interactively asking clarifying questions [2, 3, 4] and to generate human-like and informative responses [1, 2, 3, 5].
A desirable quality of CRS frameworks is that the system can produce appropriate recommendations by comprehending only a few indicative words from the conversation with the user. For example, in Figure 1, the words "girlfriend, valentine, romantic" likely correspond to the item Me Before You (2016), while "adventure, sci-fi" are more suitable for Interstellar (2014). Understanding such relationships between items and words is especially useful when users are not familiar with the available items and only express their preferences by mentioning some descriptive words. However, state-of-the-art approaches (Zhou et al., 2020; Lu et al., 2021) suffer from a misalignment between those two kinds of information, since word and item representations in their proposed frameworks are inherently modeled in two separated semantic spaces.
Figure 1: The three words _girlfriend_, _valentine_, _romantic_ and the item **Me Before You** express some common knowledge, as do the two words _adventure_, _sci-fi_ and the item **Interstellar**. An _item descriptive graph_ connects this information and reduces false alignments between items and words.
**Approach.** We argue that the misalignment between item and context representations can be addressed by jointly learning them in a common semantic space. Previous CRS frameworks suffer from the misalignment problem for two reasons: (1) they do not utilize contextual descriptive features to enrich the understanding of the items, and (2) they lack an effective mechanism to jointly learn word and item embeddings in the same semantic space. To this end, we propose a novel CRS framework called KLEVER (_Knowledge Enhanced conVErsational Recommender_), which jointly learns item and word representations in the same semantic space. To facilitate our proposed framework, we introduce (1) an item descriptive graph, constructed by extracting descriptive terms from entities' textual features (such as item categories, item reviews, and entity descriptions); this heterogeneous graph captures semantic relationships between items and their descriptive words; and (2) a self-supervised learning setting to jointly learn word and item representations. The sets of learned representations are then used to enhance the performance of both the recommendation and dialog generation modules.
**Contributions and Organization.** Our contributions can be summarized as follows. Firstly, we introduce a novel item descriptive graph that serves as a prior resource for enhancing the accuracy of the alignment between items' and words' representations. To the best of our knowledge, this is the first work in which an item descriptive graph is introduced to better learn item representations and improve the performance of conversational recommendation. Secondly, based on the constructed graph, we learn the embeddings for items and contextual words by optimizing a self-supervised link prediction task. These embeddings are used for two downstream tasks: item recommendation and response generation. Besides, we also introduce a bag-of-words loss based on the connectivity of our item descriptive graph to promote the model to generate words relevant to the entities mentioned in the conversation. Thirdly, we conduct extensive experiments on a public CRS dataset to demonstrate the superiority of KLEVER compared with state-of-the-art competitors in Sections 5 and 6. Our detailed analysis further shows its advantages in the cold-start setting, where the CRS needs to predict suitable recommended items using only word-based information. For completeness, we discuss related work in Section 2 and conclude in Section 7.
## 2 Related Work
Attribute-based CRS. In this setting, CRS models focus on asking clarifying questions about item attributes. Several works follow this line of research, including reinforcement learning based techniques Sun and Zhang (2018); Lei et al. (2020); Deng et al. (2021); Li et al. (2021), generalized binary search Zou et al. (2020), graph-based approaches Xu et al. (2020); Lei et al. (2020); Ren et al. (2021); Xu et al. (2021), memory networks Zhang et al. (2018), adversarial learning Ren et al. (2020), and multi-armed bandit algorithms Christakopoulou et al. (2016); Li et al. (2021). Another line of research focuses on balancing the exploration (_i.e._, asking questions) and exploitation (_i.e._, recommending items) trade-off for cold-start users Gao et al. (2021); Christakopoulou et al. (2016); Li et al. (2021). Though attribute-based CRS models are more controllable in industrial applications, such systems lack the ability to naturally interact with users, which may lead to an undesirable user experience.
Dialog-based CRS. Recently, dialog-based CRS models Li et al. (2018); Chen et al. (2019); Liao et al. (2019); Kang et al. (2019); Liu et al. (2020); Zhou et al. (2020); Chen et al. (2020); Hayati et al. (2020); Zhou et al. (2020); Liang et al. (2021); Lu et al. (2021) have been extensively investigated due to their flexibility and interactiveness. Li et al. (2018) propose a benchmark dataset called REDIAL, collected from Amazon Mechanical Turk (AMT), which consists of a large number of human conversations in movie recommendation scenarios. State-of-the-art dialog-based CRS models (Chen et al., 2019; Zhang et al., 2021; Zhou et al., 2020; Lu et al., 2021) propose to incorporate domain knowledge to enhance the semantic meaning of the conversation. Zhou et al. (2020) leverage two knowledge graphs, _i.e._, DBpedia (Bizer et al., 2009) and ConceptNet (Speer et al., 2017), to connect related entities and words respectively, and learn those two pieces of information in two separated vector spaces. Moreover, they leverage a Mutual Information Maximization (MIM) objective (Velickovic et al., 2019; Kong et al., 2019) to align word and entity representations that express the same knowledge.
Our work lies in the line of research on dialog-based CRS. In contrast to previous works (Zhou et al., 2020; Lu et al., 2021), we propose to jointly learn item and context representations in the same semantic space by using the proposed item descriptive graph and a self-supervised objective function.
## 3 Preliminaries
For convenience, we denote \(\mathcal{I},\mathcal{V}\) as set of items and the vocabulary respectively. We also denote the context dialog as \(\mathbf{s}=\{s_{1},s_{2},....,s_{t}\}\) where \(s_{i}\) is the sentence at \(i\)-th turn and \(s_{i}=\{t_{i,1},t_{i,2},...,t_{i,N}\}\) where \(t_{i,j}\) is the \(j\)-th token at the \(i\)-th sentence. Given the context \(\mathbf{s}\), CRS models try to produce proper items and generate natural responses based on extracted information from the context. Formally, at the \(t\)-th conversation turn, the recommendation engine retrieves a set of candidate items \(\mathcal{I}_{t+1}\) from the entire set \(\mathcal{I}\) while the dialog module generates the next utterance \(s_{t+1}\) to respond the user.
Following (Chen et al., 2019; Zhou et al., 2020), we adopt DBpedia (Bizer et al., 2009) and ConceptNet (Speer et al., 2017; Arabshahi et al., 2021) to build the item-oriented and word-oriented knowledge graphs respectively. Then we utilize Relational Graph Convolutional Network (RGCN) (Schlichtkrull et al., 2017) and Graph Convolutional Network (GCN) (Kipf and Welling, 2017) to learn entity and word representations respectively. Finally, we obtain an embedding \(\mathbf{e}_{u}\in\mathbb{R}^{d}\) for each entity \(u\) and \(\mathbf{e}_{w}\in\mathbb{R}^{d}\) for each word \(w\) where \(d\) is the dimensionality of those representations.
## 4 Methodology
In this section, we describe our proposed framework for the CRS task. We first describe how we construct the proposed item descriptive graph, which consists of item nodes and descriptive-word nodes; each item node has connections to its corresponding words and vice versa. Based on the constructed graph, we propose a self-supervised objective function to jointly learn item and word representations in the same semantic space. Finally, we introduce the recommendation engine and the dialog module based on the learned representations. Figure 2 depicts our proposed CRS framework.
Figure 2: The overall framework of our model.
### Item Descriptive Graph
For handling the misalignment between words and items, we propose an Item Descriptive Graph (IDG for short), wherein each item is represented by a set of descriptive words. These words are directly retrieved from the item's meta information, such as item categories and associated tags, as well as from other textual features such as user reviews and entity descriptions, using a simple linguistic approach (i.e., removing stopwords and keeping only words whose frequency exceeds a certain threshold \(m\)). In the end, only important tags (categories) and the top-\(k\) frequent words are used to form the representative set of each item. Our proposed IDG has several advantages: (1) it provides prior knowledge for handling the misalignment between items and their contextual words based on rich textual features, and (2) similar items may share a similar set of descriptive words, hence leveraging the constructed graph can help the model capture meaningful properties of the items. We depict the item descriptive graph in Figure 1.
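A minimal sketch of this construction is given below; the function and field names (`build_idg`, `text`, `tags`) are our own illustrative choices, not the paper's implementation:

```python
from collections import Counter

STOPWORDS = {"the", "a", "an", "and", "of", "is", "in", "it", "this", "to"}

def build_idg(items, m=5, k=20):
    """Build the item descriptive graph as a set of (item, word) edges.

    `items` maps an item id to its textual features (description, reviews)
    and tags; `m` is the word-frequency threshold and `k` the number of
    top frequent words kept per item, mirroring Section 4.1.
    """
    edges = set()
    for item_id, meta in items.items():
        tokens = [t for t in meta["text"].lower().split()
                  if t.isalpha() and t not in STOPWORDS]
        counts = Counter(tokens)
        frequent = [w for w, c in counts.most_common(k) if c >= m]
        for word in frequent + list(meta.get("tags", [])):
            edges.add((item_id, word))
    return edges

# Toy usage:
items = {"Interstellar": {"text": "space adventure epic space adventure space",
                          "tags": ["sci-fi"]}}
print(build_idg(items, m=2, k=5))
# -> edges linking "Interstellar" to "space", "adventure", and "sci-fi"
```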
### Jointly Learning Word and Item Representations
In contrast to previous works Zhou et al. (2020) (Lu et al., 2021), our method directly handle the aforementioned misalignment by modeling both items and words representations in the same semantic space. Specifically, with the constructed item descriptive graph, we utilize a GCN model Kipf and Welling (2017) to jointly learn the item and word embeddings as follows:
\[\begin{split}\hat{\mathbf{e}}_{u}^{(l)}&=\sigma( \sum_{w\in N(u)}\hat{\mathbf{W}}^{(l)}\hat{\mathbf{e}}_{w}^{(l-1)}+\hat{ \mathbf{W}}_{0}\hat{\mathbf{e}}_{u}^{(l-1)})\\ \hat{\mathbf{e}}_{w}^{(l)}&=\sigma(\sum_{u\in N(w)} \hat{\mathbf{W}}^{(l)}\hat{\mathbf{e}}_{u}^{(l-1)}+\hat{\mathbf{W}}_{0}\hat{ \mathbf{e}}_{w}^{(l-1)})\end{split} \tag{1}\]
where \(\hat{\mathbf{e}}_{u}^{(l)},\hat{\mathbf{e}}_{w}^{(l)}\in\mathbb{R}^{d}\) are enhanced representations and \(N(u),N(w)\) are set of neighbors for item \(u\) and word \(w\) respectively. \(\hat{\mathbf{W}}^{(l)},\hat{\mathbf{W}}_{0}^{l}\) are shared weight matrices for both entities and words at the \(l\)-th layer. Besides, we adopt Leaky ReLU Xu et al. (2015) as the non-linear activation function.
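A PyTorch sketch of this propagation rule, assuming a dense binary item-word adjacency matrix (our simplification; the paper does not specify the implementation):

```python
import torch
import torch.nn as nn

class IDGLayer(nn.Module):
    """One GCN layer over the bipartite item descriptive graph (Eq. 1)."""

    def __init__(self, d: int):
        super().__init__()
        self.W = nn.Linear(d, d, bias=False)   # shared neighbor transform
        self.W0 = nn.Linear(d, d, bias=False)  # shared self transform
        self.act = nn.LeakyReLU(0.2)

    def forward(self, e_items, e_words, adj):
        # adj: (n_items, n_words) binary IDG adjacency; embeddings: (*, d).
        new_items = self.act(adj @ self.W(e_words) + self.W0(e_items))
        new_words = self.act(adj.t() @ self.W(e_items) + self.W0(e_words))
        return new_items, new_words
```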
Link-prediction Loss: To effectively guide the representation learning process on the item descriptive graph, we propose a self-supervised learning approach based on the link prediction task. Specifically, we learn the embeddings \(\hat{\mathbf{e}}_{w}^{(l)}\) and \(\hat{\mathbf{e}}_{u}^{(l)}\) to predict whether a link exists between the word \(w\) and the entity \(u\), with the probability computed as follows:
\[\text{p}_{w,u}=\sigma\left((\hat{\mathbf{e}}_{w}^{(l)})^{T}\hat{\mathbf{e}}_{ u}^{(l)}\right) \tag{2}\]
where \(\sigma\) denotes for the sigmoid function. Following Mikolov et al. (2013); Schlichtkrull et al. (2017), we utilize negative sampling to train the link prediction loss:
\[\begin{split} L_{link}=-\frac{1}{N}\sum_{(w,u)\in\mathcal{E}^{+}\cup\,\mathcal{E}^{-}}&\mathbb{1}[(w,u)\in\mathcal{E}^{+}]\log(\text{p}_{w,u})\\ &+\mathbb{1}[(w,u)\in\mathcal{E}^{-}]\log(1-\text{p}_{w,u})\end{split}\]
where \(\mathcal{E}^{+},\mathcal{E}^{-}\) are sets of positive and negative examples respectively. Hence, \(N=|\mathcal{E}^{+}|+|\mathcal{E}^{-}|\) is the total number of training examples.
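The objective can be written compactly with a logits-based binary cross-entropy; a sketch (the tensor layout is our assumption):

```python
import torch
import torch.nn.functional as F

def link_prediction_loss(e_words, e_items, pos_pairs, neg_pairs):
    """Negative-sampling link prediction loss over the IDG.

    `pos_pairs` / `neg_pairs` hold (word index, item index) rows for
    observed edges and sampled non-edges, respectively.
    """
    def logits(pairs):
        w, u = pairs[:, 0], pairs[:, 1]
        return (e_words[w] * e_items[u]).sum(dim=-1)  # dot-product score

    scores = torch.cat([logits(pos_pairs), logits(neg_pairs)])
    labels = torch.cat([torch.ones(len(pos_pairs)),
                        torch.zeros(len(neg_pairs))])
    # Equivalent to -1/N * sum[y log p + (1 - y) log(1 - p)], p = sigmoid(score).
    return F.binary_cross_entropy_with_logits(scores, labels)
```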
Embedding Fusion: Finally, we obtain the final representations for words and items by fusing the embedding vectors learned from the item descriptive graph and from the item-oriented and word-oriented knowledge graphs, respectively. Given a word \(w\) and an item \(u\), we obtain their final embeddings using the following formulations.
\[\begin{split}\textbf{h}_{u}&=\mathbf{W}_{u}[\mathbf{ e}_{u},\hat{\mathbf{e}}_{u}]+\textbf{b}_{u}\\ \textbf{h}_{w}&=\mathbf{W}_{w}[\mathbf{e}_{w},\hat{ \mathbf{e}}_{w}]+\textbf{b}_{w}\end{split} \tag{3}\]
where \(\textbf{h}_{u},\textbf{h}_{w}\in\mathbb{R}^{d}\) are fused representations for entity \(u\) and word \(w\). \(\mathbf{W}_{u},\mathbf{W}_{w}\in\mathbb{R}^{2d\times d}\) and \(\textbf{b}_{u},\textbf{b}_{w}\in\mathbb{R}^{d}\) are learnable weight matrices and biases for items and words respectively.
Misalignment Handling: Intuitively, representations of items and their contextual words (e.g., \(\mathtt{Ironman}\) and _super hero_, or \(\mathtt{The\ Conjuring}\) and _horror_) should be close to each other in the embedding space. In the item descriptive graph, each item
has several connections to its contextual words that are extracted from rich item-side information such as item categories, tags, keywords. By adapting a GNN model Kipf and Welling (2017) which acts as a smoothing operator Li et al. (2018), the joint learner is able to produce similar representations for each node and its neighbor nodes in the graph. Besides, items that share a common set of contextual words may also have similar representations, which improves the quality of the learned embedding vectors.
### Knowledge-enhanced Recommendation Engine
Given a conversation context \(\mathbf{s}=\{s_{1},s_{2},....,s_{t}\}\), we first extract all words and entities, then look up their fused embeddings learned in the previous step. Via a gating network, the user preference **u** is then defined as a combination of word and entity representations as follows:
\[\begin{split}\textbf{u}&=\beta*\textbf{p}_{u}+(1- \beta)*\textbf{p}_{w}\\ \beta&=\sigma(\textbf{W}_{\text{gate}}[\textbf{p}_{ u},\textbf{p}_{w}]+\textbf{b}_{\text{gate}})\end{split} \tag{4}\]
where \(\textbf{p}_{u}\) and \(\textbf{p}_{w}\) are vector representations for entity and context information respectively. We compute those embedding vectors by using the self-attention mechanism Zhou et al. (2020).
Finally, given the user preference vector **u**, the probability that item \(i\) is recommended is computed by applying a softmax over the dot product between the user preference vector **u** and the fused item embedding \(\textbf{h}_{i}\).
\[\text{P}_{rec}(i)=\text{Softmax}(\textbf{u}^{T}\textbf{h}_{i}) \tag{5}\]
To train the recommendation engine, we optimize the cross entropy loss between our model prediction and the target item.
\[\begin{split} L_{rec}=-\sum_{s\in\mathcal{S}}\sum_{i\in \mathcal{I}}&(1-y_{i}^{s})*\log(1-\text{P}_{rec}^{s}(i))\\ &+y_{i}^{s}\log\text{P}_{rec}^{s}(i)+\lambda_{1}*L_{link}\end{split}\]
where \(\mathcal{S}\) is the set of all conversations and \(y_{i}^{s}\) is the label of item \(i\) in conversation \(s\). We optimize the recommendation loss and the link prediction loss jointly, where \(\lambda_{1}\) is a weighting parameter.
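A compact sketch of the gated fusion and item scoring (Eqs. 4-5); the module name and batching conventions are our assumptions:

```python
import torch
import torch.nn as nn

class GatedRecommender(nn.Module):
    """Gated fusion of entity/word context vectors and item scoring."""

    def __init__(self, d: int):
        super().__init__()
        self.gate = nn.Linear(2 * d, 1)

    def forward(self, p_u, p_w, item_emb):
        # p_u, p_w: (batch, d) self-attended entity / word summaries;
        # item_emb: (n_items, d) fused item embeddings h_i.
        beta = torch.sigmoid(self.gate(torch.cat([p_u, p_w], dim=-1)))  # Eq. 4
        u = beta * p_u + (1.0 - beta) * p_w       # user preference vector
        logits = u @ item_emb.t()                 # (batch, n_items)
        return torch.softmax(logits, dim=-1)      # P_rec, Eq. 5
```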
### Knowledge-enhanced Dialog Module
For the response generation module, we adopt the Transformer Vaswani et al. (2017) as the main architecture. Given a conversation context, we utilize the Transformer Encoder to encode the context sequence. At the \(j\)-th decoding step, we feed the context features \(\textbf{x}_{all}\) as well as embeddings of the ground-truth tokens before the \(j\)-th position \(\{y_{1},y_{2},...,y_{j-1}\}\) into the Transformer Decoder to obtain a hidden vector \(\textbf{s}_{j}\in\mathbb{R}^{d_{gen}}\) that represents the information for predicting the \(j\)-th token \(y_{j}\).
Bag-of-words Loss: To promote the model to generate words relevant to the items mentioned in the conversation, we design a novel sentence-level bag-of-words loss based on the connectivity of our constructed item descriptive graph. Firstly, we compute a vector \(\textbf{a}_{j}\in\mathbb{R}^{|\mathcal{V}|}\) representing the predicted scores at the \(j\)-th position in the response as follows:
\[\textbf{a}_{j}=\textbf{W}_{bow}[\textbf{s}_{j},\textbf{p}_{u},\textbf{p}_{w}] +\textbf{b}_{bow} \tag{6}\]
where \(\textbf{W}_{bow}\in\mathbb{R}^{(d_{gen}+2d)\times|\mathcal{V}|},\textbf{b}_{bow}\in\mathbb{R}^{|\mathcal{V}|}\) are learnable parameters. We define a probability distribution \(\text{P}_{bow}\) whose elements represent how likely each word \(w\) in the vocabulary \(\mathcal{V}\) is to appear in the generated sentence, regardless of its position in the sentence, as follows.
\[\text{P}_{bow}=\sigma\left(\sum_{j=1}^{N}\textbf{a}_{j}\right) \tag{7}\]
where \(N\) is the number of words in the response and \(\sigma\) is the sigmoid function.
Then we optimize the bag-of-words objective using the following loss function.
\[L_{bow}=-\sum_{u\in\textbf{s}}\sum_{w\in\mathcal{N}_{1-hop}(u)}\log(\text{P} _{bow}(w)) \tag{8}\]
where \(\mathcal{N}_{1-hop}(u)\) is the set of 1-hop words of entity \(u\) in the conversation context \(s\). Finally, we compute the probability distribution at the \(j\)-th token by the following formulation.
\[\begin{split}\text{Pr}(y_{j})=\text{Pr}_{1}(y_{j}|\textbf{s}_{j}) &+\text{Pr}_{2}(y_{j}|\textbf{s}_{j},\mathcal{G}_{1},\mathcal{G} _{2},\mathcal{G}_{3})\\ &+\text{Pr}_{3}(y_{j}|\textbf{s}_{j},\mathcal{G}_{1},\mathcal{G} _{2},\mathcal{G}_{3})\end{split} \tag{9}\]
where \(\text{Pr}_{1}(.)\) is the generative probability computed from the output of the Transformer Decoder, \(\text{Pr}_{2}(.)\) is the copy probability implemented by the standard copy mechanism Gu et al. (2016), and \(\text{Pr}_{3}(.)\) is the knowledge-guided bag-of-words probability, computed by applying the Softmax function over the predicted vector \(\textbf{a}_{j}\).
To train the dialog module, we optimize the cross-entropy loss of ground truth responses and
the bag-of-words loss jointly. The final generation loss function is defined as follows.
\[L_{gen}=-\frac{1}{N}\sum_{j=1}^{N}\log(\text{Pr}(y_{j}|\textbf{s},y_{1},...y_{j- 1}))+\lambda_{2}L_{bow}\]
where \(\textbf{y}=\{y_{1},y_{2},...,y_{N}\}\) is the corresponding ground-truth response for the conversation context **s** and \(\lambda_{2}\) is the weighting hyperparameter for the bag-of-words loss.
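A sketch of the bag-of-words head (Eqs. 6-8), with our own tensor conventions:

```python
import torch
import torch.nn as nn

class BagOfWordsLoss(nn.Module):
    """Sentence-level bag-of-words objective built on the IDG connectivity."""

    def __init__(self, d_gen: int, d: int, vocab_size: int):
        super().__init__()
        self.proj = nn.Linear(d_gen + 2 * d, vocab_size)  # W_bow, b_bow

    def forward(self, s, p_u, p_w, neighbor_word_ids):
        # s: (N, d_gen) decoder states; p_u, p_w: (d,) context vectors;
        # neighbor_word_ids: vocabulary ids of 1-hop IDG words of the
        # entities mentioned in the context.
        ctx = torch.cat([p_u, p_w]).expand(s.size(0), -1)
        a = self.proj(torch.cat([s, ctx], dim=-1))        # Eq. 6, (N, |V|)
        p_bow = torch.sigmoid(a.sum(dim=0))               # Eq. 7, (|V|,)
        return -torch.log(p_bow[neighbor_word_ids] + 1e-9).sum()  # Eq. 8
```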
## 5 Experimental Setup
### Dataset
We conduct all experiments on the REDIAL dataset, a recent benchmark for CRS models introduced in Li et al. (2018). For the entity textual features used to construct the item descriptive graph, following Lu et al. (2021), we crawl movie genres, movie plots, user reviews, and entity descriptions from the IMDb website 1. The detailed statistics of the REDIAL dataset and our constructed item descriptive graph, as well as the model's implementation details, can be found in Tables 1 and 2 in Appendix A.1
Footnote 1: [https://www.imdb.com/](https://www.imdb.com/)
### Baseline Methods
We compare our CRS framework, denoted as **KLEVER**, with several representative baseline approaches: **REDIAL** Li et al. (2018), which utilizes a sentiment-aware auto-encoder Vincent et al. (2008) as the recommendation model; **KBRD** Chen et al. (2019), a model that utilizes a knowledge graph based on DBpedia Bizer et al. (2009) to enhance entity representations; **KECRS** Zhang et al. (2021), which adopts a domain-specific knowledge graph based on The Movie Database (TMDB) 2; **KGSF** Zhou et al. (2020), which leverages two KGs, i.e., ConceptNet and DBpedia, and introduces the Mutual Information Maximization objective to bridge the gap between concept and item representations; and **RevCore** Lu et al. (2021), which incorporates users' reviews to enhance the semantics of the conversation. Besides, we also denote **KLEVER - L**, **KLEVER - Bow**, **KLEVER - IDG**, and **KLEVER - KG** as variants of our model without the link-prediction loss, the bag-of-words loss, the item descriptive graph, and the knowledge graphs (i.e., DBpedia, ConceptNet), respectively.
Footnote 2: [https://www.themoviedb.org/](https://www.themoviedb.org/)
### Evaluation Metrics
For the _recommendation task_, we use Recall@k (originally defined in Li et al. (2018) and also used in the CRS methods of Chen et al. (2019); Zhou et al. (2020)), denoted as **R@k** (\(k=1,10,50\)), which checks whether the top-k recommended items contain the ground-truth item. Besides, we also simulate the **cold-start scenario** in CRS, _i.e._, we only consider the test examples without any mentioned items in the conversation context.
For the _generation task_, we assess the generated responses in both automated and manual manners. For automated evaluation, we utilize Distinct N-grams (\(N=2,3,4\)) Li et al. (2016); Zhou et al. (2020) to measure the diversity of generated sentences. For manual evaluation, we randomly select 50 conversations and the responses generated by KLEVER and the baseline models. We invite three annotators to score the responses in two aspects, Fluency and Informativeness, with scores ranging from 1 to 3. The final performance is calculated using the average scores of all annotators. The inter-annotator agreement is measured by Fleiss' Kappa Fleiss and Cohen (1973). Detailed instructions for the manual evaluation can be found in Appendix A.2.
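For reference, both automated metrics admit short implementations; a sketch (whitespace tokenization is our simplification):

```python
def recall_at_k(ranked_items, target, k):
    """R@k: 1 if the ground-truth item appears among the top-k predictions."""
    return float(target in ranked_items[:k])

def distinct_n(sentences, n):
    """Distinct-N: ratio of unique n-grams to all n-grams in the outputs."""
    ngrams = [tuple(tokens[i:i + n])
              for tokens in (s.split() for s in sentences)
              for i in range(len(tokens) - n + 1)]
    return len(set(ngrams)) / max(len(ngrams), 1)
```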
## 6 Experimental Results
### Recommendation Performance
Evaluation on All Data Setting: As can be seen in Table 1, our model outperforms all baseline methods on all metrics and achieves state-of-the-art performance. Noticeably, our model performs significantly better than KGSF (+13.88% R@1, +11.22% R@10, +2.68% R@50). The reason is possibly that KGSF and RevCore inherently capture words and items in two separated semantic spaces, which leads to a mismatch between the two kinds of signals, while our proposed model is able to jointly model them in a common semantic space and alleviate the mismatch by aligning items with their descriptive words extracted from items' textual features.
\begin{table}
\begin{tabular}{l c c c|c c c} \hline \hline
\multirow{2}{*}{Model} & \multicolumn{3}{c|}{All Data} & \multicolumn{3}{c}{Cold Start} \\ \cline{2-7}
 & **R@1** & **R@10** & **R@50** & **R@1** & **R@10** & **R@50** \\ \hline
Redial & 0.024 & 0.140 & 0.320 & 0.021 & 0.075 & 0.201 \\
KBRD & 0.031 & 0.150 & 0.336 & 0.026 & 0.085 & 0.242 \\
KECRS & 0.025 & 0.153 & 0.349 & 0.032 & 0.148 & 0.327 \\
KGSF & 0.036 & 0.182 & 0.373 & 0.036 & 0.168 & 0.368 \\
RevCore & 0.037 & 0.187 & 0.380 & 0.033 & 0.168 & 0.365 \\
KLEVER & **0.041** & **0.203** & **0.383** & **0.049** & **0.184** & **0.369** \\ \hline
KLEVER - L & 0.037 & 0.199 & 0.374 & 0.047 & 0.177 & 0.361 \\
KLEVER - IDG & 0.033 & 0.173 & 0.346 & 0.035 & 0.168 & 0.359 \\
KLEVER - KG & 0.020 & 0.100 & 0.237 & 0.019 & 0.089 & 0.245 \\ \hline \hline
\end{tabular}
\end{table}
Table 1: **R@\(k\)** for the recommendation task.
Evaluation on Cold Start Setting: As one can see in Table 1, our model also outperforms all baseline methods on all metrics, superior to KGSF (+36.11% R@1, +9.52% R@10) and RevCore (+48.48% R@1, +9.52% R@10). Both KGSF and RevCore perform poorly in the cold-start setting since they suffer from the misalignment between words and items. On the other hand, our proposed model provides an effective alignment between those two kinds of information. Therefore, when the user mentions some indicative words in the conversation, the model can gather more evidence to recommend relevant items associated with these mentioned words.
### Generation Performance
Automatic Evaluation: Table 2 shows the generation performance of the CRS models. Compared with the baseline models, our proposed model is consistently better on all evaluation metrics. Noticeably, our model outperforms the RevCore model (+9.21% Dist-2, +9.50% Dist-3, +11.39% Dist-4) and achieves state-of-the-art performance. We hypothesize that our new bag-of-words objective provides additional guidance for the dialog module to generate sentences not only from ground-truth sequences but also from rich entity textual features; therefore, it may improve the diversity of the generated responses.
Human Evaluation: Table 3 summarizes the performance of the CRS models under human evaluation. For fluency, all considered models achieve similar and high scores, possibly because those models tend to generate short and safe responses. On the other hand, for informativeness, our model significantly outperforms all baseline methods. By leveraging the richness of the item descriptive graph with our proposed bag-of-words objective, rather than producing safe and short sentences, our model is able to generate relevant information, especially in cold-start cases, which improves the informativeness of the responses.
### Ablation Study
Recommendation Task: As can be seen in Table 1, compared to our best model, removing either the self-supervised objective function (i.e., the link prediction loss) or the item descriptive graph leads to a sharp decrease in recommendation accuracy in both the all-data and cold-start settings. We hypothesize that the link-prediction loss helps the model to infer potential edges between items and their descriptive words, which may be missing from our constructed graph. The result also shows that the item descriptive graph is crucial for handling cold-start cases, where the model needs to effectively align the contextual words to potential items to produce appropriate recommendations for the users.
Generation Task: Table 2 demonstrates that all the components in KLEVER are significantly useful for the response generation task. The bag-of-words loss is able to guide the model in generating relevant words that may not belong to the ground-truth sentences but are captured by rich and diverse textual features. On the other hand, the proposed joint learning process through the item descriptive graph is able to handle the false alignments between word and entity representations, which not only enhances the recommendation performance but also improves the quality of the generated responses.
### Discussion
Item Descriptive Graph improves MIM. To demonstrate the effectiveness of our contribution, we also incorporate the item descriptive graph into the KGSF model. Specifically, we only utilize the MIM objective to align entities with their corresponding descriptive words based on the graph. Table 4 shows that by incorporating the item descriptive graph, the KGSF model is able to achieve better performance on the recommendation task.
\begin{table}
\begin{tabular}{l c c c} \hline
**Models** & **Dist-2** & **Dist-3** & **Dist-4** \\ \hline
Redial & 0.225 & 0.236 & 0.228 \\
KBRD & 0.263 & 0.368 & 0.423 \\
KECRS & 0.286 & 0.392 & 0.451 \\
KGSF & 0.364 & 0.517 & 0.605 \\
RevCore & 0.391 & 0.568 & 0.667 \\
KLEVER & **0.427** & **0.622** & **0.743** \\ \hline
KLEVER - Bow & 0.391 & 0.578 & 0.701 \\
KLEVER - IDG & 0.380 & 0.524 & 0.594 \\ \hline
\end{tabular}
\end{table}
Table 2: Performance on the generation task.
\begin{table}
\begin{tabular}{l c c c} \hline
**Models** & **Fluency** & **Informativeness** & **Kappa** \\ \hline
KECRS & 2.86 & 1.25 & 0.82 \\
KGSF & 2.71 & 1.55 & 0.74 \\
RevCore & 2.79 & 1.40 & **0.86** \\
KLEVER & **2.90** & **1.85** & 0.76 \\ \hline
Human & 2.84 & 2.25 & 0.65 \\ \hline
\end{tabular}
\end{table}
Table 3: Human evaluation on the generation task.
The reason is that our proposed item graph alleviates noisy alignments between words and entities and therefore improves the quality of entity and word representations. Noticeably, our proposed KLEVER still performs significantly better than the KGSF + IDG model. We also conduct an analysis of the number of descriptive words per item in the item descriptive graph, which can be found in Appendix A.3.
Case Study I: Embedding Visualization. We visualize the item embedding vectors learned by KGSF Zhou et al. (2020) and by our proposed model to demonstrate that jointly learning words and items leads to more meaningful representations. Figure 3 depicts the learned item embeddings for some randomly chosen item categories. We can see that the item representations produced by KLEVER are more separable and meaningful than those learned by KGSF. This is reasonable since our model directly aligns items with categories by using prior knowledge from the item descriptive graph.
Case Study II: Interactive Dialog. Table 5 shows an anecdotal example of a cold-start conversation where no item is mentioned at the beginning of the conversation. As one can see, KLEVER is able to generate more informative and meaningful responses than the other baseline methods. Other examples can also be found in Appendix A.4.
## 7 Conclusion
In this paper, we introduce KLEVER, a novel CRS framework that directly handles the misalignment between words and entities by modeling them in the same latent space. We leverage rich textual features (such as item categories, user reviews, and entity descriptions) to construct an item descriptive graph that connects the two kinds of data signals, and devise a knowledge-enhanced fusion module to jointly learn both word and entity representations. Based on the constructed graph, we also introduce a bag-of-words loss to promote the model to generate more informative responses. Extensive experiments demonstrate the superiority of KLEVER compared with state-of-the-art CRS models, especially in cases that lack information from the user's input.
\begin{table}
\begin{tabular}{l c c c} \hline \hline
**Model** & **R@1** & **R@10** & **R@50** \\ \hline
KGSF & 0.036 & 0.182 & 0.373 \\
KGSF + IDG & 0.038 & 0.183 & 0.381 \\
KLEVER & **0.041** & **0.203** & **0.383** \\ \hline \hline
\end{tabular}
\end{table}
Table 4: KGSF with Item Descriptive Graph on the recommendation task.
Figure 3: Item embedding vectors learned by KGSF (left) and KLEVER (right). We use t-SNE van der Maaten and Hinton (2008) to produce the low-dimensional visualizations. Each row illustrates item representations from 3 distinct categories.
\begin{table}
\begin{tabular}{l l} \hline \hline
 & \multicolumn{1}{c}{**Conversation Example**} \\ \hline
**User:** & Hello. Do you know any good horror movies? \\ \hline
**Human:** & Rings (2017) \\
**KGSF:** & Hello, how are you? \\
**RevCore:** & Hello! What kind of movies do you like \\
**KLEVER:** & I would recommend The Conjuring (2013) \\ \hline
**User:** & What is Rings (2017) about? \\ \hline
**Human:** & It is horror movie where the little girl fears all as a ghost coming out of the well \\
**KGSF:** & I would recommend you to watch it \\
**RevCore:** & I also recommend Hellraiser (1987) \\
**KLEVER:** & I’m not sure if you like ghost movies. \\ \hline
**User:** & I think I’ve seen part of it \\ \hline
**Human:** & Oh nice why don’t you see the full movie \\
**KGSF:** & I would recommend you to watch it. It’s a bit of Stephen King. \\
**RevCore:** & I also recommend The Babadook (2014) \\
**KLEVER:** & I would recommend you 68 Kill (2017) Rogue (2007). Excellent movies for you? \\ \hline
**User:** & Maybe if it comes on TV \\ \hline
**Human:** & You should try them. you’re welcome \\
**KGSF:** & I hope you enjoy them. Have a great day! \\
**RevCore:** & I hope you enjoy them. \\
**KLEVER:** & You’re welcome. Bye \\ \hline \hline
\end{tabular}
\end{table}
Table 5: Case study. A cold-start example of movie recommendation. Human responses are the ground truth.
2310.15243 | ${\rm S{\scriptsize IM}BIG}$: The First Cosmological Constraints from
the Non-Linear Galaxy Bispectrum | We present the first cosmological constraints from analyzing higher-order
galaxy clustering on non-linear scales. We use ${\rm S{\scriptsize IM}BIG}$, a
forward modeling framework for galaxy clustering analyses that employs
simulation-based inference to perform highly efficient cosmological inference
using normalizing flows. It leverages the predictive power of high-fidelity
simulations and robustly extracts cosmological information from regimes
inaccessible with current standard analyses. In this work, we apply ${\rm
S{\scriptsize IM}BIG}$ to a subset of the BOSS galaxy sample and analyze the
redshift-space bispectrum monopole, $B_0(k_1, k_2, k_3)$, to $k_{\rm
max}=0.5\,h/{\rm Mpc}$. We achieve 1$\sigma$ constraints of
$\Omega_m=0.293^{+0.027}_{-0.027}$ and $\sigma_8= 0.783^{+0.040}_{-0.038}$,
which are more than 1.2 and 2.4$\times$ tighter than constraints from standard
power spectrum analyses of the same dataset. We also derive 1.4, 1.4,
1.7$\times$ tighter constraints on $\Omega_b$, $h$, $n_s$. This improvement
comes from additional cosmological information in higher-order clustering on
non-linear scales and, for $\sigma_8$, is equivalent to the gain expected from
a standard analysis on a $\sim$4$\times$ larger galaxy sample. Even with our
BOSS subsample, which only spans 10% of the full BOSS volume, we derive
competitive constraints on the growth of structure: $S_8 =
0.774^{+0.056}_{-0.053}$. Our constraint is consistent with results from both
cosmic microwave background and weak lensing. Combined with a $\omega_b$ prior
from Big Bang Nucleosynthesis, we also derive a constraint on
$H_0=67.6^{+2.2}_{-1.8}\,{\rm km\,s^{-1}\,Mpc^{-1}}$ that is consistent with
early universe constraints. | ChangHoon Hahn, Michael Eickenberg, Shirley Ho, Jiamin Hou, Pablo Lemos, Elena Massara, Chirag Modi, Azadeh Moradinezhad Dizgah, Liam Parker, Bruno Régaldo-Saint Blancard | 2023-10-23T18:01:04Z | http://arxiv.org/abs/2310.15243v1 | # SimBIG: The First Cosmological Constraints from the Non-Linear Galaxy Bispectrum
###### Abstract
We present the first cosmological constraints from analyzing higher-order galaxy clustering on non-linear scales. We use SimBIG, a forward modeling framework for galaxy clustering analyses that employs simulation-based inference to perform highly efficient cosmological inference using normalizing flows. It leverages the predictive power of high-fidelity simulations and robustly extracts cosmological information from regimes inaccessible with current standard analyses. In this work, we apply SimBIG to a subset of the BOSS galaxy sample and analyze the redshift-space bispectrum monopole, \(B_{0}(k_{1},k_{2},k_{3})\), to \(k_{\rm max}=0.5\,h/{\rm Mpc}\). We achieve \(1\sigma\) constraints of \(\Omega_{m}=0.293^{+0.027}_{-0.027}\) and \(\sigma_{8}=0.783^{+0.040}_{-0.038}\), which are more than \(1.2\) and \(2.4\times\) tighter than constraints from standard power spectrum analyses of the same dataset. We also derive \(1.4\), \(1.4\), \(1.7\times\) tighter constraints on \(\Omega_{b}\), \(h\), \(n_{s}\). This improvement comes from additional cosmological information in higher-order clustering on non-linear scales and, for \(\sigma_{8}\), is equivalent to the gain expected from a standard analysis on a \(\sim\)\(4\times\) larger galaxy sample. Even with our BOSS subsample, which only spans \(10\%\) of the full BOSS volume, we derive competitive constraints on the growth of structure: \(S_{8}=0.774^{+0.056}_{-0.053}\). Our constraint is consistent with results from both cosmic microwave background and weak lensing. Combined with a \(\omega_{b}\) prior from Big Bang Nucleosynthesis, we also derive a constraint on \(H_{0}=67.6^{+2.2}_{-1.8}\,{\rm km\,s^{-1}\,Mpc^{-1}}\) that is consistent with early universe constraints.
cosmological parameters from LSS -- Machine learning -- cosmological simulations -- galaxy surveys
## 1 Introduction
The three-dimensional spatial distribution of galaxies enables us to constrain the nature of dark matter and dark energy and measure the contents of the Universe. Along with other cosmological probes, it provides one of the most stringent tests of the standard \(\Lambda\)CDM cosmological model that can lead to discoveries of new physics. With this aim, spectroscopic galaxy surveys of the next decade, the Dark Energy Spectroscopic Instrument (DESI; Collaboration et al., 2016, 2016; Abareshi et al., 2022), Subaru Prime Focus Spectrograph (PFS; Takada et al., 2014; Tamura et al., 2016), the ESA _Euclid_ satellite mission (Laureijs et al., 2011), and the Nancy Grace Roman Space Telescope (Roman; Spergel et al., 2015; Wang et al., 2022), will probe galaxies over unprecedented cosmic volumes out to \(z\sim 3\).
Current analyses of galaxy clustering focus on the power spectrum, the Fourier counterpart to the two-point correlation function, as the primary measurement of galaxy clustering (_e.g._ Beutler et al., 2017; Ivanov et al., 2020; Chen et al., 2022; Kobayashi et al., 2022). These standard analyses model the power spectrum using the perturbation theory (PT) of large-scale structure (see Bernardeau et al., 2002; Desjacques et al., 2016, for a re
view). As a result, they focus on large, mostly linear, scales (\(k_{\rm max}\sim 0.2\,h/{\rm Mpc}\)) where deviations from linear theory are small and PT remains valid. Accurate modeling of higher-order clustering statistics (_e.g._ bispectrum) with PT is progressively more complex and challenging. Furthermore, there are currently no PT-based models that describe new promising summary statistics (_e.g._ Banerjee & Abel, 2021; Eickenberg et al., 2022; Valogiannis & Dvorkin, 2022; Naidoo et al., 2022).
Meanwhile, recent studies have now established that there is additional cosmological information in higher-order statistics (_e.g._ Gil-Marin et al., 2017; D'Amico et al., 2022; Philcox & Ivanov, 2022). Forecasts have also long suggested that there may be even more information on small scales (_e.g._ Sefusatti & Scoccimarro, 2005). Recently, Hahn et al. (2020) and Hahn & Villaescusa-Navarro (2021) showed that constraints on \(\Lambda\)CDM cosmological parameters, \(\Omega_{m},\Omega_{b},h,n_{s},\sigma_{8}\), improve by a factor of \(\sim\)2 by analyzing the bispectrum down to nonlinear scales (\(k_{\rm max}=0.5\,h/{\rm Mpc}\)). Massara et al. (2020); Gualdi et al. (2021); Massara et al. (2022); Wang et al. (2022); Hou et al. (2022); Eickenberg et al. (2022); Valogiannis & Dvorkin (2022); Porth et al. (2023) found consistent improvements from forecasts of other summary statistics that extract non-Gaussian cosmological information from non-linear scales. These improvements are further corroborated by recent small-scale clustering analyses using emulators (Storey-Fisher et al., 2022; Zhai et al., 2022).
Another major limitation of current analyses is robustly accounting for observational systematics in, _e.g._, targeting, imaging, and completeness that significantly impact clustering measurements (Ross et al., 2012, 2017). Fiber collisions, for example, prevent galaxy surveys that use fiber-fed spectrographs (_e.g._ DESI, PFS) from successfully measuring redshifts of galaxies within some angular scale of one another (Yoon et al., 2008). They significantly bias the power spectrum measurement on scales \(k>0.1\,h/{\rm Mpc}\) (Guo et al., 2012; Hahn et al., 2017; Bianchi et al., 2018). While improved correction schemes for fiber collisions may be sufficient for power spectrum analyses (Hahn et al., 2017; Pinol et al., 2017; Bianchi et al., 2018; Smith et al., 2019), no correction scheme has yet been designed or demonstrated for other summary statistics.
Recently, Hahn et al. (2022) and Hahn et al. (2023)1 presented the SIMulation-Based Inference of Galaxies (SimBIG), a forward modeling framework for analyzing galaxy clustering. SimBIG uses simulation-based inference2 (SBI; see Cranmer et al., 2020, for a review) to perform highly efficient cosmological parameter inference using neural density estimation (NDE) from machine learning (_e.g._ Germain et al., 2015; Papamakarios et al., 2017). This enables SimBIG to use high-fidelity simulations that model the details and realism of the observations. In particular, the SimBIG forward model is based on cosmological \(N\)-body simulations that can more accurately model non-linear structure formation to smaller scales than PT. It also includes observational systematics (_e.g._ survey geometry, masking, fiber collisions). With this approach, H22a analyzed the galaxy power spectrum from the Sloan Digital Sky Survey (SDSS)-III Baryon Oscillation Spectroscopic Survey (BOSS; Eisenstein et al., 2011; Dawson et al., 2013), demonstrating that the power spectrum can be rigorously analyzed down to smaller scales than ever before, \(k_{\rm max}=0.5\,h/{\rm Mpc}\).
Footnote 1: hereafter H22a and H23
In this work, we extend the SimBIG analysis to the first higher-order statistic: the bispectrum. For a near-Gaussian galaxy distribution, the bispectrum extracts nearly all of its cosmological information (_e.g._ Fry, 1994; Matarrese et al., 1997; Scoccimarro, 2000). We present the first robust cosmological constraints from an analysis that exploits clustering information on both nonlinear scales and in higher-order statistics. We begin in Section 2 by describing the observational galaxy sample that we analyze. We then briefly summarize the details of the SimBIG approach in Section 3. We present and discuss our cosmological results in Section 4 and compare them to constraints in the literature.
## 2 Observations: Boss CMASS Galaxies
We apply our SimBIG bispectrum analysis to the same observed galaxy sample as H22a, which is derived from the Sloan Digital Sky Survey (SDSS)-III Baryon Oscillation Spectroscopic Survey (BOSS) Data Release 12 (Eisenstein et al., 2011; Dawson et al., 2013). More specifically, the sample consists of galaxies in the Southern Galactic Cap (SGC) of the BOSS CMASS galaxy sample that are within the redshift range \(0.45<z<0.6\) and have Dec \(>-6\) deg. and \(-25<{\rm RA}<28\) deg. Overall, the galaxy sample covers \(\sim\)3,600 deg\({}^{2}\) and includes 109,636 galaxies. This corresponds to 70% of the SGC footprint and \(\sim\)10% of the full BOSS volume. We refer readers to H22a and H23 for further details on the observed galaxy sample.
## 3 SimBIG

The SimBIG approach uses SBI to infer posteriors of \(\Lambda\)CDM cosmological parameters with only a forward model that can generate mock observations, _i.e._ the 3D galaxy distribution. In this section, we briefly describe the forward model, the SBI methodology, the bispectrum, and our posterior validation.
### Forward Model
The SimBIG forward model constructs simulated galaxy catalogs from Quijote \(N\)-body simulations run at different cosmologies in a Latin-hypercube configuration (Villaescusa-Navarro et al., 2020). Each simulation has a volume of \(1\,(h^{-1}\text{Gpc})^{3}\) and is constructed using \(1024^{3}\) cold dark matter (CDM) particles gravitationally evolved from \(z=127\) to \(z=0.5\). From the \(N\)-body simulations, halos are identified using the phase-space information of dark matter particles with the Rockstar halo finder (Behroozi et al., 2013). Afterwards, the halos are populated using the halo occupation distribution (HOD; Berlind & Weinberg, 2002; Zheng et al., 2007) framework, which provides a flexible statistical prescription for determining the number of galaxies as well as their positions and velocities within halos. SimBIG uses a state-of-the-art HOD model with 9 parameters that supplements the standard Zheng et al. (2007) model with assembly, concentration, and velocity biases.
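To make the halo occupation step concrete, the sketch below illustrates the standard Zheng et al. (2007) mean occupation functions that form the base of the SimBIG HOD; the assembly, concentration, and velocity bias extensions are omitted, and all parameter values are illustrative rather than the values used in SimBIG.

```python
# Minimal sketch of the standard Zheng et al. (2007) halo occupation:
# centrals follow a smoothed step function in halo mass, satellites a
# power law above a cutoff. Parameter values below are illustrative only.
import numpy as np
from scipy.special import erf

def n_central(log_Mh, log_Mmin=13.0, sigma_logM=0.35):
    """Mean number of central galaxies in a halo of mass Mh."""
    return 0.5 * (1.0 + erf((log_Mh - log_Mmin) / sigma_logM))

def n_satellite(log_Mh, log_M0=13.0, log_M1=14.0, alpha=1.0):
    """Mean number of satellite galaxies; zero below the cutoff M0."""
    Mh, M0, M1 = 10.0**log_Mh, 10.0**log_M0, 10.0**log_M1
    return (np.clip(Mh - M0, 0.0, None) / M1)**alpha

# Populate mock halos: centrals are Bernoulli draws, satellites Poisson
# draws (here conditioned on hosting a central, a common convention).
rng = np.random.default_rng(0)
log_Mh = rng.uniform(12.5, 15.0, size=1000)   # mock halo masses
has_central = rng.random(1000) < n_central(log_Mh)
n_sat = rng.poisson(n_satellite(log_Mh) * has_central)
```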
From the HOD galaxy catalog, SimBIG adds full BOSS survey realism by applying the exact survey geometry and observational systematics. The forward modeled catalogs have the same redshift range and angular footprint as the CMASS sample, including masking for bright stars, centerpost, bad field, and collision priority. Furthermore, SimBIG also includes fiber collisions, which systematically remove galaxies in galaxy pairs within an angular scale of \(62^{\prime\prime}\). We forward model fiber collisions because the standard correction schemes do not accurately correct for them (Hahn et al., 2017). In summary, the SimBIG forward model aims to generate mock galaxy catalogs that are statistically indistinguishable from the observations. For more details on the forward model, we refer readers to H22a and H23.
### Simulation-Based Inference
From the forward modeled galaxy catalogs, we use the SimBIG SBI framework to infer posterior distributions of cosmological parameters, \(\mathbf{\theta}\), for a given summary statistic, \(\mathbf{x}\), of the observations: \(p(\mathbf{\theta}\,|\,\mathbf{x})\). The SimBIG SBI framework enables cosmological inference with a limited number of forward modeled simulations. This in turn enables us to exploit cosmological information on small, non-linear scales and in higher-order statistics that is inaccessible with standard cosmological analyses.
The SBI in SimBIG is based on NDE and uses "normalizing flow" models (Tabak & Vanden-Eijnden, 2010; Tabak & Turner, 2013; Jimenez Rezende & Mohamed, 2015). Normalizing flows use neural networks to learn an extremely flexible and bijective transformation, \(f:x\mapsto z\), that maps a complex target distribution to a simple base distribution, \(\pi(\mathbf{z})\), that is fast to evaluate. \(f\) is defined to be invertible and to have a tractable Jacobian, so that the target distribution can be evaluated from \(\pi(\mathbf{z})\) by change of variables. In our case, the target distribution is the posterior and the base distribution is a multivariate Gaussian. Among various normalizing flow architectures, we use Masked Autoregressive Flow (MAF; Papamakarios et al., 2017) models.3
Footnote 3: We use the MAF implementation in the sbi Python package (Greenberg et al., 2019; Tejero-Cantero et al., 2020), which is based on the nflows Python package (Durkan et al., 2019, 2020).
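Written out, the density estimate of such a conditional flow follows from the standard change-of-variables identity,

\[\log q_{\mathbf{\phi}}(\mathbf{\theta}\,|\,\mathbf{x})=\log\pi\big(f_{\mathbf{\phi}}(\mathbf{\theta};\mathbf{x})\big)+\log\left|\det\frac{\partial f_{\mathbf{\phi}}(\mathbf{\theta};\mathbf{x})}{\partial\mathbf{\theta}}\right|.\]

For MAF, the autoregressive structure of \(f_{\mathbf{\phi}}\) makes the Jacobian triangular, so the determinant reduces to a product of diagonal terms and both density evaluation and training remain fast.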
Our goal is to train a normalizing flow with hyperparameters, \(\mathbf{\phi}\), that best approximates the posterior, \(q_{\mathbf{\phi}}(\mathbf{\theta}\,|\,\mathbf{x})\approx p(\mathbf{\theta}\,|\,\mathbf{x})\). We do this by minimizing the forward KL divergence between \(p(\mathbf{\theta},\mathbf{x})=p(\mathbf{\theta}\,|\,\mathbf{x})p(\mathbf{x})\) and \(q_{\mathbf{\phi}}(\mathbf{\theta}\,|\,\mathbf{x})p(\mathbf{x})\). In practice, we first split the forward modeled catalogs into a training and validation set with a 90/10 split. Then we maximize the total log-likelihood \(\sum_{i}\log q_{\mathbf{\phi}}(\mathbf{\theta}_{i}\,|\,\mathbf{x}_{i})\) over the training set, \(\{(\mathbf{\theta}_{i},\mathbf{x}_{i})\}\). This is equivalent to minimizing the forward KL divergence. We use the Adam optimizer (Kingma & Ba, 2017) with a batch size of 50. To prevent overfitting, we evaluate the total log-likelihood on the validation data at every training epoch and stop the training when the validation log-likelihood fails to increase after 20 epochs.
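As an illustration of this training loop, the following is a minimal sketch using the sbi package referenced in the footnote above; the file names, tensor shapes, and prior bounds are placeholders, not the actual SimBIG pipeline.

```python
# Hedged sketch of the NDE training step with the sbi package.
import torch
from sbi.inference import SNPE
from sbi.utils import BoxUniform

theta = torch.load("params.pt")   # (N_sims, 14): 5 cosmology + 9 HOD params
x     = torch.load("summary.pt")  # (N_sims, D): binned B0 per catalog

# Placeholder uniform prior spanning the training range (Latin hypercube).
prior = BoxUniform(low=theta.min(0).values, high=theta.max(0).values)

inference = SNPE(prior=prior, density_estimator="maf")
inference.append_simulations(theta, x)
flow = inference.train(
    training_batch_size=50,    # batch size quoted in the text
    validation_fraction=0.1,   # 90/10 train/validation split
    stop_after_epochs=20,      # early-stopping patience from the text
)
posterior = inference.build_posterior(flow)
```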
We determine the architecture of our normalizing flow, _i.e._ number of blocks, transforms, hidden features, and dropout probability, through experimentation. We train a large number of flows with architectures and learning rates determined using the Optuna hyperparameter optimization framework (Akiba et al., 2019). Afterwards, we select five normalizing flows with the lowest validation losses. Our final flow is an equally weighted ensemble of the flows: \(q_{\mathbf{\phi}}(\mathbf{\theta}\,|\,\mathbf{x})=\sum_{j=1}^{5}q_{\mathbf{\phi}}^{j}(\mathbf{ \theta}\,|\,\mathbf{x})/5\). We find that ensembling flows with different initializations and architectures generally improves the robustness of our normalizing flow (Lakshminarayanan et al., 2016; Alsing et al., 2019). For the bispectrum, the posteriors predicted by each individual flow in the ensemble are in good agreement.
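A minimal sketch of the equally weighted ensemble, assuming `posteriors` is a list of the five trained sbi posterior objects:

```python
# Sketch of the equally weighted ensemble of five flows: average the
# densities (not the log densities) and pool samples across the flows.
import torch

def ensemble_log_prob(theta, x_obs, posteriors):
    logps = torch.stack([p.log_prob(theta, x=x_obs) for p in posteriors])
    return torch.logsumexp(logps, dim=0) - torch.log(
        torch.tensor(float(len(posteriors))))

def ensemble_sample(n, x_obs, posteriors):
    k = n // len(posteriors)  # equal share of samples per flow
    return torch.cat([p.sample((k,), x=x_obs) for p in posteriors])
```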
In \(q_{\mathbf{\phi}}(\mathbf{\theta}\,|\,\mathbf{x})\), \(\mathbf{\theta}\) represents the 5 cosmological and 9 HOD parameters. The prior of our posterior estimate is set by the parameter distribution of our training set.
Since the \(N\)-body simulations used for our forward modeled catalogs are evaluated over a Latin-Hypercube, we use uniform priors over the cosmological parameters, \(\{\Omega_{m},\Omega_{b},h,n_{s},\sigma_{8}\}\). The prior ranges fully encompass the _Planck_ priors. For the HOD parameters, we use the same conservative priors from H22a and H23. Next, we describe our summary statistic \(\mathbf{x}\).
### Summary Statistic: the Galaxy Bispectrum
With SimBIG we can derive robust cosmological constraints using any summary statistic of the galaxy distribution that we can accurately forward model. In this work, we apply SimBIG to the first higher-order statistic: the galaxy bispectrum. The bispectrum, \(B(k_{1},k_{2},k_{3})\), is the three-point correlation function in Fourier space and measures the excess probability of different triangle configurations (\(k_{1},k_{2},k_{3}\)) over a random distribution. In this work, we focus solely on the monopole of the redshift-space bispectrum, \(B_{0}(k_{1},k_{2},k_{3})\).
To measure \(B_{0}\), for both observed and forward modeled galaxy samples, we use the Scoccimarro (2015) redshift-space bispectrum estimator, implemented in the pySpectrum python package4. The estimator uses Fast Fourier Transforms with grid size \(N_{\rm grid}=360\) and box size (\(1800\,h^{-1}\)Mpc)\({}^{3}\). The estimator accounts for the survey geometry using a random catalog that has the same radial and angular selection functions as the observed catalog but with a much larger number of objects (>4,000,000). When measuring \(B_{0}\), we include the same Feldman et al. (1994) weights as in H22a. For the observed galaxy sample, we also include angular systematic weights to account for stellar density and seeing conditions as well as redshift failure weights. We do not include weights for fiber collisions, since this effect is included in the SimBIG forward model.
Footnote 4: [https://github.com/changhoonhahn/pySpectrum](https://github.com/changhoonhahn/pySpectrum)
Figure 1: The bispectrum monopole (\(B_{0}\); top panel) and reduced bispectrum monopole (\(Q_{0}\); bottom panel) of a subset of simulated galaxy catalogs in our training set. The catalogs are constructed using the SimBIG forward model from the Quijote \(N\)-body simulations and include BOSS survey realism. We randomly select 200 out of the 20,000 catalogs. We present a subset of 1,354 triangle configurations with \(k_{1},k_{2},k_{3}<k_{\rm max}=0.25\,h/\)Mpc, for clarity. The configurations are ordered by looping through \(k_{3}\) in the innermost loop and \(k_{1}\) in the outermost loop with \(k_{1}\geq k_{2}\geq k_{3}\). For reference, we include \(B_{0}\) measured from the observed BOSS CMASS sample (black) with errorbars estimated from the TEST0 simulations. The observed \(B_{0}\) is well within our training dataset.

We measure \(B_{0}\) in triangle configurations defined by \((k_{1},k_{2},k_{3})\) bins of width \(\Delta k=0.0105\,h/\)Mpc, three times the fundamental mode \(k_{f}=2\pi/(1800\,h^{-1}\)Mpc). For \(k_{\rm max}=0.5\,h/\)Mpc, \(B_{0}\) has 10,052 total triangle configurations. In practice, we use the reduced bispectrum instead of the bispectrum to reduce the dynamic range of the summary statistic5:
Footnote 5: For simplicity, we will use \(B_{0}\) to refer to both the bispectrum and the reduced bispectrum.
\[Q_{0}(k_{1},k_{2},k_{3})=\frac{B_{0}(k_{1},k_{2},k_{3})}{P_{0}(k_{1})P_{0}(k_{2})+P_{0}(k_{1})P_{0}(k_{3})+P_{0}(k_{2})P_{0}(k_{3})}, \tag{1}\]
where \(P_{0}(k)\) represents the monopole of the power spectrum.
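A direct transcription of Eq. (1), assuming binned measurements of \(B_{0}\) and \(P_{0}\) and integer k-bin indices \((i_{1},i_{2},i_{3})\) for each triangle configuration, could look like:

```python
# Sketch of Eq. (1): reduced bispectrum from binned B0 and P0.
import numpy as np

def reduced_bispectrum(B0, P0, i1, i2, i3):
    """B0: (N_tri,) bispectrum; P0: (N_k,) power spectrum monopole;
    i1, i2, i3: (N_tri,) k-bin indices of each triangle side."""
    denom = P0[i1] * P0[i2] + P0[i1] * P0[i3] + P0[i2] * P0[i3]
    return B0 / denom
```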
We present \(B_{0}(k_{1},k_{2},k_{3})\) and \(Q_{0}(k_{1},k_{2},k_{3})\) for a randomly selected subset of 200 of the 20,000 catalogs in the training set in Figure 1. We only show a subset of 1,354 triangle configurations with \(k_{1},k_{2},k_{3}\leq k_{\text{max}}=0.25\,h/\text{Mpc}\) for clarity. We order the triangles by looping through \(k_{3}\) in the innermost loop and \(k_{1}\) in the outermost loop satisfying \(k_{1}\geq k_{2}\geq k_{3}\). For reference, we include \(B_{0}\) of the observed CMASS sample (black) with uncertainties estimated using the TEST0 simulations, which we describe in the next section. The \(B_{0}\) of the training dataset has a broad range that fully encompasses the observed \(B_{0}\).
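To illustrate the triangle bookkeeping, the loop below enumerates closed configurations with \(k_{1}\geq k_{2}\geq k_{3}\) for the quoted bin width. The exact count depends on the estimator's binning convention (e.g., whether triangles that only close within the bin width are kept), so this sketch need not reproduce the 10,052 configurations exactly.

```python
# Sketch of the triangle enumeration: loop k3 innermost, k1 outermost,
# with k1 >= k2 >= k3 and the triangle inequality k1 <= k2 + k3.
import numpy as np

dk = 0.0105                      # bin width = 3 * k_f  [h/Mpc]
k = dk * (np.arange(48) + 0.5)   # bin centers up to k_max ~ 0.5 h/Mpc
triangles = [
    (k1, k2, k3)
    for i, k1 in enumerate(k)
    for j, k2 in enumerate(k[: i + 1])
    for k3 in k[: j + 1]
    if k1 <= k2 + k3
]
print(len(triangles))
```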
### Posterior Validation
Before applying our SimBIG \(B_{0}\) posterior estimator, \(q_{\mathbf{\phi}}(\mathbf{\theta}\,|\,\mathbf{x})\), to observations, we validate that it can robustly infer unbiased posteriors of the \(\Lambda\)CDM cosmological parameters. First, we assess whether \(q_{\mathbf{\phi}}\) accurately estimates the posterior across the parameter space of the prior. We call this the "NDE accuracy test". In principle, with a sufficiently large training set and successful minimization, \(q_{\phi}\) is guaranteed to accurately estimate the true posterior, since we train it by minimizing the KL divergence with the true posterior. In our case, however, we have a limited number of simulations.
We use the 2,000 validation simulations that were excluded from the training of our posterior estimate (Section 3.2). In Figure 2, we present the simulation-based calibration (SBC; Talts et al., 2020) for the \(\Lambda\)CDM cosmological parameters. For each validation simulation, we apply \(q_{\phi}\) to its \(Q_{0}(k_{123}<0.5\,h/\text{Mpc})\) measurement to infer the posterior. Then, for each cosmological parameter, we calculate the rank of the true parameter value within the marginalized 1D posterior estimate. A uniform rank distribution indicates that we accurately estimate the true posterior (black dashed). Overall, the rank distributions are close to uniform for all of the \(\Lambda\)CDM cosmological parameters. For \(\Omega_{m}\) and \(\sigma_{8}\), the distributions have a slight \(\cap\)-shape, which indicates that our \(\Omega_{m}\) and \(\sigma_{8}\) posterior estimates are slightly broader than the true posterior (_i.e._ underconfident). Since this means that our cosmological constraints will be conservative, we conclude that \(q_{\mathbf{\phi}}\) is sufficiently accurate.
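A sketch of the rank computation, assuming a trained sbi posterior object and validation pairs \((\mathbf{\theta}_{i},\mathbf{x}_{i})\) given as torch tensors (placeholders for the validation set above):

```python
# Sketch of the SBC rank statistic: the rank of each true parameter among
# posterior samples should be uniformly distributed if q_phi is calibrated.
import numpy as np

def sbc_ranks(posterior, theta_true, x_val, n_samples=1000):
    ranks = []
    for theta_i, x_i in zip(theta_true, x_val):
        samples = posterior.sample((n_samples,), x=x_i,
                                   show_progress_bars=False)
        ranks.append((samples.numpy() < theta_i.numpy()).sum(axis=0))
    return np.array(ranks)  # shape (N_val, N_params)
```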
Next, we verify the robustness of our \(B_{0}\) posterior with the SimBIG "mock challenge." The SimBIG forward model, or _any_ forward model, makes modeling choices and assumptions that, in detail, do not reflect the actual Universe. To account for this, SimBIG is designed to be highly flexible so that we can robustly marginalize over the complex physical processes that govern galaxy formation and the galaxy-halo connection. Nevertheless, a summary statistic may be sensitive to the specific choices made in the forward model. More importantly, this can bias the inferred cosmological parameters. We, therefore, assess whether this is the case for \(B_{0}\) and validate that we can derive unbiased cosmological parameter constraints.
We use 2,000 test simulations in the three test sets described in H23: TEST0, TEST1, and TEST2. TEST0 consists of 500 "in distribution" simulations built using the same forward model as the training set: Quijote \(N\)-body, Rockstar halo finder, and the full SimBIG HOD. TEST1 and TEST2 are "out of distribution" simulations. TEST1 is constructed using Quijote \(N\)-body, the Friends-of-Friends halo finder (FoF; Davis et al., 1985), and a simpler HOD model. Lastly, TEST2 consists of 1,000 "out of distribution" simulations built using AbacusSummit \(N\)-body simulations (Maksimova et al., 2021), the CompaSO halo finder (Hadzhiyska et al., 2022), and the full SimBIG HOD. Each test set is constructed using a different forward model. Hence, they serve as stringent tests of the robustness of the SimBIG \(B_{0}\) analysis.
We run \(q_{\mathbf{\phi}}\) on the \(B_{0}\) of all of the test sets and derive a posterior for each simulation. In Figure 3, we present the \((\Omega_{m},\sigma_{8})\) posteriors for a randomly selected subset of the test simulations. We present posteriors for TEST0, TEST1, and TEST2 simulations in the top, center, and bottom panels, respectively. The contours represent the 68 and 95 percentiles of the posteriors. In each panel, we mark the true \((\Omega_{m},\sigma_{8})\) value of the test simulation (black x). Each test simulation is a unique realization of a CMASS-like galaxy catalog subject to cosmic variance. We, therefore, do not expect the true \((\Omega_{m},\sigma_{8})\) value to lie at the center of each of the posteriors. Instead, we note that for the majority of the randomly selected test simulations, the true parameter values lie within the 68 and 95 percentile contours of the SimBIG posteriors.
Next, we assess the robustness more quantitatively. In H23, we used SBC, or coverage, to assess the robustness of the posterior estimates. This assessment, however, requires that the parameters of the test simulations sample the full prior distribution. Otherwise, the distribution of the rank statistic is not guaranteed to be uniform, even for the true posterior. The test simulations are
evaluated at fiducial values of the cosmological parameters. Consequently, we use a different approach and assess the robustness by comparing the \(B_{0}\) likelihoods of the different test sets. If \(B_{0}\) is sensitive to variations in the forward model, there will be significant discrepancies among the likelihoods of the test sets.
In practice, comparing the \(B_{0}\) likelihoods is challenging since \(B_{0}(k_{123}<0.5\,h/\text{Mpc})\) is 10,052-dimensional. We instead compare the likelihoods of the compressed \(B_{0}\), \(B_{0}^{(c)}\), as shown in Figure 4 for TEST0 (blue), TEST1 (orange), and TEST2 (green). For the compression, we use the mean of the marginalized 1D SimBIG \(B_{0}\) posterior for the \(\Lambda\)CDM cosmological parameters: \(B_{0}^{(c)}=\sum_{j=1}^{N}\mathbf{\theta}_{j}/N\) where \(\mathbf{\theta}_{j}\sim q_{\mathbf{\phi}}(\mathbf{\theta}\,|\,B_{0})\). We use \(N=10,000\) samples to estimate the mean. Each panel represents a dimension of \(B_{0}^{(c)}\) that corresponds to one of the \(\Lambda\)CDM parameters. This is a near-optimal compression of the cosmological information in \(B_{0}\), since \(q_{\mathbf{\phi}}\) accurately estimates the true posterior.
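A sketch of this compression, assuming the cosmological parameters occupy the first five columns of the posterior samples (a labeling convention, not necessarily that of the actual pipeline):

```python
# Sketch of the posterior-mean compression: map a B0 measurement to the
# five marginal posterior means of the LambdaCDM parameters.
def compress_B0(posterior, x_B0, n_samples=10_000, n_cosmo=5):
    samples = posterior.sample((n_samples,), x=x_B0,
                               show_progress_bars=False)
    return samples[:, :n_cosmo].mean(dim=0)  # B0^(c)
```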
We present the distribution of \(B_{0}^{(c)}-\overline{B_{0}^{(c)}}\), where \(\overline{B_{0}^{(c)}}\) is the average \(B_{0}^{(c)}\) of each test set, rather than \(B_{0}^{(c)}\) itself. This is because the TEST2 simulations are constructed using a different set of fiducial parameter values than the TEST0 and TEST1 simulations. Overall, we find excellent agreement among the \(B_{0}^{(c)}\) likelihoods with no significant discrepancies. We also find similar levels of agreement when we use other summaries of the marginalized SimBIG \(B_{0}\) posterior (_e.g._ standard deviation, 16\({}^{\text{th}}\) percentile) for the compression. Given the good agreement of the \(B_{0}^{(c)}\) likelihoods among the test sets, we conclude that our
Figure 3: Posteriors of \((\Omega_{m},\sigma_{8})\) inferred using the SimBIG bispectrum analysis for a random subset of the TEST0 (top), TEST1 (center), and TEST2 (bottom) simulations. We mark the 68 and 95 percentiles of the posteriors with the contours. We also include the true \((\Omega_{m},\sigma_{8})\) of the test simulations in each panel (black \(\times\)). The comparison between the posteriors and the true parameter values qualitatively shows good agreement for each test simulation.
Figure 2: The NDE accuracy test that shows the SBC validation of the SimBIG \(B_{0}(k_{123}<0.5\,h/\text{Mpc})\) posterior estimate. We present the distribution of the rank statistics, which are derived by comparing the true parameter values to the inferred marginalized 1D posteriors. The rank statistics are calculated using 2,000 validation simulations that were excluded from training the posterior estimate. For an accurate estimate of the true posterior, the rank statistic would be uniformly distributed (black dashed). Overall, we estimate unbiased posteriors of all of the \(\Lambda\)CDM cosmological parameters.
analysis is sufficiently robust to the modeling choices in our forward model.
## 4 Results
In Figure 5, we present the posterior distribution of all parameters inferred from the CMASS bispectrum monopole with \(k_{\rm max}<0.5\,h/\)Mpc using SimBIG. The top and bottom sets of panels present the posterior of the cosmological and halo occupation parameters, respectively. The diagonal panels present the 1D marginalized posteriors; the rest of the panels present marginalized 2D posteriors of different parameter pairs. The contours represent the 68 and 95 percentiles and the ranges of the panels match the prior. We also list the 50th, 16th, and 84th percentile constraints on the parameters above the diagonal panels.
Focusing on the \(\Lambda\)CDM cosmological parameters (Figure 6), we find that the SimBIG \(B_{0}\) analysis tightly constrains _all_ of them. This is without relying on any priors from Big Bang Nucleosynthesis (BBN) or cosmic microwave background (CMB) experiments that are typically used in galaxy clustering analyses (_e.g._ Ivanov et al., 2020; Philcox and Ivanov, 2021; Kobayashi et al., 2022). We derive \(\Omega_{b}=0.059^{+0.005}_{-0.005}\), \(h=0.756^{+0.040}_{-0.039}\), and \(n_{s}=0.954^{+0.033}_{-0.040}\). For the growth of structure parameters (right panels) we derive: \(\Omega_{m}=0.293^{+0.027}_{-0.027}\) and \(\sigma_{8}=0.783^{+0.040}_{-0.038}\).
Our \(B_{0}\) analysis places significantly tighter constraints than \(P_{\ell}(k)\) for the same BOSS SGC sample from previous works. Compared to the H22a SimBIG \(P_{\ell}(k<k_{\rm max}\)=0.5) analysis, our \(\Omega_{m}\) and \(\sigma_{8}\) constraints are both 1.7\(\times\) tighter. This \(P_{\ell}\) analysis, however, goes beyond standard analyses and includes cosmological information on non-linear scales. If we compare to a standard PT \(P_{\ell}(k<k_{\rm max}\)=0.25 \(h/\)Mpc) analysis (Ivanov et al., 2020, \(\Omega_{m}=0.317^{+0.031}_{-0.032}\) and \(\sigma_{8}=0.719^{+0.100}_{-0.085}\); orange), our \(\Omega_{m}\) and \(\sigma_{8}\) constraints are 1.2 and 2.4\(\times\) tighter. Our constraints are also 1.1 and 2.0\(\times\) tighter than the \(P_{\ell}(k<0.25\,h/\)Mpc) constraints from Kobayashi et al. (2022) (\(\Omega_{m}=0.314^{+0.031}_{-0.030}\) and \(\sigma_{8}=0.790^{+0.083}_{-0.072}\); green). They use a theoretical model based on a halo power spectrum emulator and a halo occupation framework. These comparisons clearly illustrate that the cosmological information in both higher-order statistics and non-linear scales is _substantial_.
Next, we analyze \(B_{0}\) to \(k_{\rm max}=0.3\,h/\)Mpc to examine how much of the improvement in our \(B_{0}\) constraints comes from the non-linear scales alone. In Figure 7, we present the SimBIG \(B_{0}(k_{123}\)\(<\)0.3 \(h/\)Mpc) posterior (red dashed) on \(\Omega_{m}\) and \(\sigma_{8}\). We include posteriors from Ivanov et al. (2020) (orange), Kobayashi et al. (2022) (green), and SimBIG \(B_{0}(k_{123}<0.5\,h/\)Mpc) (black). The contours represent the 68 and 95 percentiles of the posteriors. We find overall good agreement among the posteriors. Compared to the \(P_{\ell}\) constraints, the SimBIG \(B_{0}(k_{123}<0.3)\) analysis improves \(\sigma_{8}\) by \(\sim\)1.33\(\times\). The improvement is more modest than the improvement from SimBIG \(B_{0}(k_{123}<0.5)\) and is broadly consistent with the D'Amico et al. (2022) constraints from analyzing the \(B_{0}\) to \(k_{\rm max}\)=0.23 \(h/\)Mpc and bispectrum quadrupole, \(B_{2}\), to \(k_{\rm max}\)=0.08 \(h/\)Mpc. Philcox and Ivanov (2021) and Ivanov et al. (2023) recently found more modest improvements from the bispectrum (\(\sim\)1.1\(\times\)). They, however, only include the bispectrum monopole and multipoles, respectively, out to \(k_{\rm max}\)=0.08 \(h/\)Mpc. We refrain from a more detailed comparison since we analyze a subsample of BOSS galaxies. Nevertheless, the comparison illustrates that the \(B_{0}\) on non-linear scales contains significant additional cosmological information.
Figure 4: Comparison of the compressed bispectrum likelihood, \(p(B_{0}^{(c)}\,|\,\theta_{\rm fid})\), computed on the three sets of test simulations: TEST0 (blue), TEST1 (orange), and TEST2 (green). \(B_{0}^{(c)}\) is derived by taking the mean of the marginalized 1D SimBIG \(B_{0}(k_{123}<0.5\,h/\)Mpc) posterior for the \(\Lambda\)CDM parameters, a near-optimal compression of the cosmological information in \(B_{0}\). In each panel, we mark the corresponding \(\Lambda\)CDM parameters. The likelihoods are evaluated at the fixed fiducial cosmologies and parameter values of the test sets. We present the distribution of \(B_{0}^{(c)}-\overline{B_{0}^{(c)}}\) because TEST2 simulations are constructed using different fiducial parameter values than the TEST0 and TEST1 simulations. Overall, _we find excellent agreement among the likelihoods of the different test simulations and conclude that our \(B_{0}\) analysis is robust to modeling choices in our forward model._
The SimBIG \(B_{0}(k_{123}<0.5)\) analysis produces significantly tighter cosmological constraints than \(P_{\ell}\) analyses because we exploit both non-Gaussian and non-linear cosmological information. For \(\sigma_{8}\), the 2\(\times\) improvement in precision is roughly equivalent to analyzing a galaxy sample with \(>\)4\(\times\) the volume using the standard approach. This improvement is made possible by the SimBIG forward modeling approach that is not only able to accurately model galaxy clustering to \(k_{\rm max}=0.5\,h/{\rm Mpc}\) but also robustly account for observational systematics.
Interestingly, the improvements from the SimBIG \(B_{0}\) analysis enable us to inform recent "cosmic tensions", despite using only 10% of the full BOSS volume. These tensions refer to the discrepancies between the late time and early time measurements of \(S_{8}=\sigma_{8}\sqrt{\Omega_{m}/0.3}\) and the Hubble constant, \(H_{0}\), that have been growing in statistical significance with recent observations (for a recent review see Abdalla et al., 2022). They have increased the scrutiny on \(\Lambda\)CDM and have led to a slew of theoretical works to explore modifications or alternatives to \(\Lambda\)CDM (_e.g._ Meerburg, 2014; Chudaykin et al., 2018; Di Valentino et al., 2020).
For \(S_{8}\), our SimBIG \(B_{0}\) constraint \(S_{8}=0.774^{+0.056}_{-0.053}\) lies slightly above the constraints from weak lensing
Figure 5: Posterior distribution of all parameters inferred using the SimBIG \(B_{0}\) analysis to \(k_{\rm max}<0.5\,h/{\rm Mpc}\) from BOSS CMASS SGC. In the top set of panels, we present the cosmological parameters. In the bottom, we present the halo occupation parameters. The axis ranges of the panels represent the prior range. _We place significant constraints on all \(\Lambda\)CDM parameters and a number of the halo occupation parameters_ (_e.g._\(\log M_{\rm min}\), \(\log M_{0}\), and \(\eta_{\rm sat}\)).
(WL) experiments (_e.g._ Asgari et al., 2021; Amon et al., 2022; Secco et al., 2022; Dalal et al., 2023; Sugiyama et al., 2023; DES & KiDS et al., 2023). We do not find significant tension with either the CMB or WL experiments. Our SimBIG \(B_{0}\) analysis also places significant constraints on \(H_{0}\), especially when we combine our posterior with a prior on \(\omega_{b}=\Omega_{b}h^{2}=0.02268\pm 0.00038\) from BBN using importance sampling (Aver et al., 2015; Cooke et al., 2018; Schoneberg et al., 2019): \(H_{0}=67.6^{+2.2}_{-1.8}\,{\rm km\,s^{-1}\,Mpc^{-1}}\). We find a lower value of \(H_{0}\) that is in good agreement with CMB and other galaxy clustering constraints.
## 5 Discussion
The SimBIG SBI approach relies on accurate forward modeling of the observed galaxy distributions such that the simulated and observed data are statistically indistinguishable. To achieve this, the SimBIG forward model is designed to be highly flexible and mitigate the impact of model misspecification. It uses \(N\)-body simulations that can accurately model the non-linear matter distribution, a halo finder that robustly determines the position and velocities of dark matter halos, and a highly flexible state-of-the-art HOD.
Despite these modeling choices, the SimBIG forward model does not account for all possible effects that may impact galaxy clustering. For example, it does not include the effect of baryons on the matter clustering. Instead, since it has a subpercent effect on the matter bispectrum at \(k<0.5\,h/\)Mpc (_e.g._ Foreman et al., 2019), we rely on the HOD model to implicitly account for the impact. Furthermore, we do not include redshift evolution and additional observational systematics (_e.g._ imaging incompleteness). We refer readers to H23 for a more detailed discussion on the caveats of our forward model.
There are also caveats to our posterior validation for \(B_{0}\). For instance, the comparison of the \(B_{0}^{(c)}\) likelihoods only demonstrates the robustness near the fiducial cosmologies of the test simulations. Furthermore, some cosmological information may be lost in the \(q_{\phi}\)-based compression scheme. This would then potentially underestimate the discrepancies in the full \(B_{0}\) likelihood. Addressing either of these limitations, however, requires a substantially larger suite of simulations evaluated across
Figure 6: _Left_: Posterior of cosmological parameters inferred from \(B_{0}\) using SimBIG. In the diagonal panels we present the marginalized 1D posterior of each parameter. The other panels present the 2D posteriors that illustrate the degeneracies between two parameters. The contours mark the 68 and 95 percentiles. By robustly analyzing \(B_{0}\) down to non-linear regimes, \(k_{\rm max}=0.5\,h/\)Mpc, we place significant constraints on all \(\Lambda\)CDM parameters without any priors from BBN or CMB experiments. _Right_: We focus on the posteriors of \(\Omega_{m}\) and \(\sigma_{8}\), the parameters that can be most significantly constrained by galaxy clustering alone. We derive \(\Omega_{m}=0.293^{+0.027}_{-0.027}\) and \(\sigma_{8}=0.783^{+0.040}_{-0.038}\). Our \(\Omega_{m}\) and \(\sigma_{8}\) constraints are >10% and >50% tighter than the \(P_{\ell}(k<k_{\rm max}=0.25\,h/\)Mpc) constraints from a PT approach (Ivanov et al., 2020, orange) and an emulator approach (Kobayashi et al., 2022, green). This improvement comes from simultaneously exploiting higher-order and non-linear cosmological information.
the full prior space. We leave the development of more stringent and efficient validation of the posterior and summary statistic to future work.
Significant challenges still remain when applying forward modeling approaches to upcoming surveys. They will need to be accompanied by continual improvements to the forward model and validation. There are also challenges in extending SimBIG to the large volumes and the different galaxy samples of upcoming surveys. Nevertheless, in this work we demonstrate the clear advantages of forward modeling: by extracting cosmological information using higher-order statistics and on non-linear scales we can _double_ the precision of \(\sigma_{8}\) constraints and significantly improve the constraints of all \(\Lambda\)CDM parameters. In Hahn et al. (2023), we will present forecasts of SimBIG analyses applied to upcoming galaxy surveys: DESI, PFS, and _Euclid_.
## 6 Summary
We present the SimBIG cosmological constraints from analyzing the galaxy bispectrum monopole, \(B_{0}(k_{1},k_{2},k_{3})\), on non-linear scales to \(k_{\rm max}=0.5\,h/\)Mpc. SimBIG provides a forward modeling framework that uses SBI to perform highly efficient cosmological inference using NDE with normalizing flows (H22a and H23). It enables us to leverage the predictive power of \(N\)-body simulations to accurately model higher-order clustering on small scales, which is currently inaccessible with standard PT analyses. It also allows us to more robustly include observational systematics that significantly impact galaxy clustering measurements.
After validating the accuracy and robustness of our analysis using 2,000 test simulations constructed using three different forward models, we conduct the SimBIG \(B_{0}(k_{123}<0.5\,h/\)Mpc) analysis on a subset of CMASS galaxies in the SGC of SDSS-III BOSS. We derive significant constraints on all \(\Lambda\)CDM parameters (\(\Omega_{m},\Omega_{b},h,n_{s},\sigma_{8}\)) without any external priors. Compared to standard power spectrum analyses, we infer 1.2 and 2.4\(\times\) tighter constraints on \(\Omega_{m}=0.293^{+0.027}_{-0.027}\) and \(\sigma_{8}=0.783^{+0.040}_{-0.038}\). We verify that this improvement comes from higher-order cosmological information on non-linear scales and, when restricted to larger scales, our constraints are consistent with previous bispectrum analyses.
In this work, we apply SimBIG to \(\sim\)10% of the full BOSS volume due to the limited volume of our \(N\)-body simulations. Despite the smaller volume, we derive growth of structure, \(S_{8}=\sigma_{8}\sqrt{\Omega_{m}/0.3}\), constraints competitive with other cosmological probes and BOSS analyses of the full volume. Our \(S_{8}=0.774^{+0.056}_{-0.053}\) constraint is statistically consistent with both CMB and weak lensing experiments. We also derive a constraint on \(H_{0}=67.6^{+2.2}_{-1.8}\,\mathrm{km\,s^{-1}\,Mpc^{-1}}\) by combining our posterior with a \(\omega_{b}\) prior from BBN. Our \(H_{0}\) constraint is consistent with early universe constraints from CMB and other LSS analyses.
Even with the limited volume of our observations, we derive competitive constraints on \(S_{8}\) and \(H_{0}\) by exploiting additional cosmological information in higher-order clustering on non-linear scales. Extending SimBIG to the full BOSS volume would roughly improve the precision of our constraints by \(\sim\)3\(\times\). In an accompanying paper (Hahn et al., 2023), we will present forecasts of SimBIG clustering analyses of upcoming spectroscopic galaxy surveys (_e.g._ DESI, PFS, _Euclid_) and demonstrate that it has the potential to produce the leading cosmological constraints from LSS. Hahn et al. (2023) will also compare the \(B_{0}\) constraints from this work to SimBIG constraints derived from field-level inference using convolutional neural networks (Lemos et al., 2023) and the wavelet scatter transform (Regaldo-Saint Blancard et al., 2023).
Figure 7: \((\Omega_{m},\sigma_{8})\) posterior from the SimBIG \(B_{0}\) analysis to \(k_{\rm max}=0.3\,h/\)Mpc (red dashed). For comparison, we include posteriors from \(P_{\ell}\) analyses (Ivanov et al., 2020, orange; Kobayashi et al., 2022, green) and the SimBIG \(B_{0}(k_{123}<0.5\,h/\)Mpc) analysis (black). The contours represent the 68 and 95 percentiles. We find overall good agreement among the posteriors. Furthermore, the improvement we find from \(B_{0}(k_{123}<0.3\,h/\)Mpc) over \(P_{\ell}\) is consistent with the improvement from \(B_{0}\) found in the literature (_e.g._ D’Amico et al., 2022). Our \(B_{0}(k_{123}<0.3\,h/\)Mpc) posterior is significantly broader than our \(B_{0}(k_{123}<0.5\,h/\)Mpc) posterior. This demonstrates that there is additional higher-order cosmological information in the non-linear regime, \(0.3<k<0.5\,h/\)Mpc, that we can robustly analyze using SimBIG.
## Acknowledgements
It's a pleasure to thank Mikhail M. Ivanov and Yosuke Kobayashi for providing us with the posteriors used for comparison. We also thank Peter Melchior, Uros Seljak, and Benjamin D. Wandelt for valuable discussions. This work was supported by the AI Accelerator program of the Schmidt Futures Foundation. JH has received funding from the European Union's Horizon 2020 research and innovation program under the Marie Sklodowska-Curie grant agreement No 101025187. AMD acknowledges funding from Tomalla Foundation for Research in Gravity.
2301.03360 | Upward lightning at wind turbines: Risk assessment from larger-scale meteorology | Upward lightning (UL) has become an increasingly important threat to wind turbines as ever more of them are being installed for renewably producing electricity. The taller the wind turbine the higher the risk that the type of lightning striking the man-made structure is UL. UL can be much more destructive than downward lightning due to its long lasting initial continuous current leading to a large charge transfer within the lightning discharge process. Current standards for the risk assessment of lightning at wind turbines mainly take the summer lightning activity into account, which is inferred from LLS. Ground truth lightning current measurements reveal that less than 50% of UL might be detected by lightning location systems (LLS). This leads to a large underestimation of the proportion of LLS-non-detectable UL at wind turbines, which is the dominant lightning type in the cold season. This study aims to assess the risk of LLS-detectable and LLS-non-detectable UL at wind turbines using direct UL measurements at the Gaisberg Tower (Austria) and Säntis Tower (Switzerland). Direct UL observations are linked to meteorological reanalysis data and joined by random forests, a powerful machine learning technique. The meteorological drivers for the non-/occurrence of LLS-detectable and LLS-non-detectable UL, respectively, are found from the random forest models trained at the towers and have large predictive skill on independent data. In a second step the results from the tower-trained models are extended to a larger study domain (Central and Northern Germany). The tower-trained model for LLS-detectable lightning is independently verified at wind turbine locations in that domain and found to reliably diagnose that type of UL. Risk maps based on case study events show that high diagnosed probabilities in the study domain coincide with actual UL events. | Isabell Stucke, Deborah Morgenstern, Thorsten Simon, Georg J. Mayr, Achim Zeileis, Gerhard Diendorfer, Wolfgang Schulz, Hannes Pichler | 2023-01-09T14:12:35Z | http://arxiv.org/abs/2301.03360v1

# Upward lightning at wind turbines: Risk assessment from larger-scale meteorology
###### Abstract
Upward lightning has become an increasingly important threat to wind turbines as ever more of them are being installed for renewably producing electricity. The taller the wind turbine the higher the risk that the type of lightning striking the man-made structure is upward lightning. Upward lightning can be much more destructive than downward lightning due to its long lasting initial continuous current leading to a large charge transfer within the lightning discharge process. Current standards for the risk assessment of lightning at wind turbines mainly take the summer lightning activity into account, which is inferred from LLS. Ground truth lightning current measurements reveal that less than 50 % of upward lightning might be detected by lightning location systems (LLS). This leads to a large underestimation of the proportion of LLS-non-detectable upward lightning at wind turbines, which is the dominant lightning type in the cold season. This study aims to assess the risk of LLS-detectable and LLS-non-detectable upward lightning at wind turbines using direct upward lightning measurements at the Gaisberg Tower (Austria) and Santis Tower (Switzerland). Direct upward lightning observations are linked to meteorological reanalysis data and joined by random forests, a powerful machine learning technique. The meteorological drivers for the non-/occurrence of LLS-detectable and LLS-non-detectable upward lightning, respectively, are found from the random forest models trained at the towers and have large predictive skill on independent data. In a second step the results from the tower-trained models are extended to a larger study domain (Central and Northern Germany). The tower-trained model for LLS-detectable lightning is independently verified at wind turbine locations in that domain and found to reliably diagnose that type of upward lightning. Risk maps based on case study events show that high diagnosed probabilities in the study domain coincide with actual upward lightning events. This lends credence to the transfer of the model for all upward lightning types, which increases both the risk and the affected areas.
## 1 Introduction
The growing importance to produce renewable energy has recently led to a notable increase in the number of wind turbines (e.g., Pineda et al., 2018). Since those structures are commonly taller than \(100\) m, the initiation of upward lightning (UL) propagating from the tall structure towards the clouds is facilitated (Berger, 1967). A tall structure is more prone to experience
UL as it is exposed to a stronger electrical field in comparison to the ground. Structures shorter than \(100\) m mainly experience downward lightning (DL) with leaders propagating from the clouds towards the earth surface (e.g., Rakov and Uman, 2003).
As wind turbines are getting taller, UL is the major weather-related cause of severe damages to them (e.g., Rachidi et al., 2008; Montanya Puig et al., 2016; Pineda et al., 2018; Matsui et al., 2020; Zhang and Zhang, 2020). It can be much more destructive than DL due to its initial continuous current (ICC) lasting approximately ten times longer than the current of DL. Ground truth lightning current measurements at the specially instrumented tower on top of the Gaisberg mountain (Austria, Salzburg) reveal that more than \(50\) % of UL is not detected by conventional lightning location systems (LLS). The reason is that the LLS cannot detect a particular subtype of UL having only an ICC (Diendorfer et al., 2015; March et al., 2016). Even though towers exist providing ground truth lightning current data for LLS-detectable UL (UL-LLS) such as the Santis Tower in Switzerland, the Gaisberg Tower is the only instrumented tower in Europe providing the full information on the occurrence of both UL-LLS and LLS-non-detectable UL (UL-noLLS).
Standards for lightning protection of wind turbines (e.g., IEC61400-24, 2019) severely underestimate the occurrence of UL at wind turbines since they currently rely only on three factors: the height of the wind turbine, the local annual flash density derived from LLS, and an environmental term involving factors like terrain complexity or altitude (Rachidi et al., 2008; Pineda et al., 2018). Lightning activity in summer clearly dominates the annual local flash density due to large amounts of DL caused by deep convection. However, UL is expected to be the dominant lightning type at wind turbines with a tendency to be even more important in the colder season (Diendorfer, 2020; Rachidi et al., 2008). Furthermore, the risk assessment standards cannot account for UL-noLLS; they can only account for UL-LLS given that a tall structure is present.
The major objective of this study is to assess the risk of UL-LLS and UL-noLLS at wind turbines over a larger domain. Only at very few points the actual occurrence of UL can be analyzed based on direct measurements. Even though LLS networks exist which might allow to analyze UL-LLS at tall structures, the lightning current measurements show that a significant proportion is missed. Being aware that conventional LLS cannot assess the full risk of UL at wind turbines, this study uses a new approach.
It uses machine learning techniques linking the occurrence of UL to the larger-scale meteorological setting. The occurrence of UL can only be provided by ground truth lightning current measurements. These are the basis to build and train the statistical models used to eventually assess the risk of UL over a whole study domain. Specifically, this study employs conditional inference random forests (Hothorn and Zeileis, 2015), which account for highly nonlinear and complex interactions between the incidence of UL at the tall structures and the atmosphere. The achievement of the major objective requires several steps.
From lightning current measurement data at two instrumented towers in Austria (Gaisberg Tower) and Switzerland (Santis Tower) two models are constructed: One for UL-LLS and one for UL-LLS + UL-noLLS. These shall first find whether there is a relationship between larger-scale meteorological variables and the occurrence of UL and second demonstrate how well larger-scale meteorology can serve as a diagnostic tool to infer the occurrence of UL.
The availability of UL-LLS data makes it possible to verify whether the results from the instrumented towers are transferable. The idea is to extract wind turbine locations within the study domain and identify all lightning strikes to them from the colder season (ONDJFMA) using LLS data. Succeeding in reliably diagnosing UL-LLS from larger-scale meteorology in
combination with UL ground truth lightning current measurements provides a stronger reliability of the results when in a final step the risk of UL-noLLS, which cannot be verified using LLS data, is assessed.
The following sections are organized as follows. Section 2 introduces the two instrumented towers providing the necessary ground truth data for this study. The first one is the Gaisberg Tower providing both UL-LLS and UL-noLLS and the second one is the Santis Tower providing only UL-LLS. Further this section introduces the identification of lightning at wind turbines in the study domain as well as the meteorological data used.
Section 3 summarizes the procedures and major findings from the two instrumented towers. Section 3.1 describes the basic principle of the construction of a random forest model. Section 3.2 presents the performance of the models at the instrumented towers. Further, the most important larger-scale meteorological variables are introduced which lead to a higher risk of UL (Sect. 3.3).
Then, Sect. 4 presents the results extending the models from the instrumented towers to the larger study domain to find regions with a higher risk to experience UL. Section 4.1 diagnoses UL-LLS at wind turbines and presents case studies. Then, in Sect. 4.2 the risk of UL-LLS and UL-LLS + UL-noLLS at wind turbines is illustrated and discussed using the whole period of consideration.
Section 5 concludes and summarizes the most important findings.
## 2 Data
This study combines five different data sources: UL data measured directly at the Gaisberg Tower in Austria (Diendorfer et al., 2009) and at the Santis Tower in Switzerland (Romero et al., 2012); LLS data measured remotely by the European Cooperation for Lightning Detection (EUCLID, Schulz et al., 2016); larger-scale meteorological variables from the reanalysis database ERA5 (Hersbach et al., 2020); wind turbine locations identified using the OpenStreetMap database.
### Direct UL measurements at instrumented towers
Figure 1 shows two of the very few instrumented towers for the direct measurement of currents from UL. These are the Gaisberg Tower (1 288 m amsl, \(47^{\circ}48^{\prime}\) N, \(13^{\circ}06^{\prime}\) E) and the Santis Tower (2 502 m amsl, \(47^{\circ}14^{\prime}\) N, \(9^{\circ}20^{\prime}\) E). Lightning at the instrumented towers is almost exclusively UL. The Gaisberg Tower recorded in total \(819\) UL events between \(2000\) and \(2015\). The Santis Tower recorded \(692\) UL events between \(2010\) and \(2017\).
A sensitive shunt-type sensor at Gaisberg allows the measurement of all types of upward flashes regardless of the current waveform, i.e., UL-LLS and UL-noLLS. However, the inductive sensors employed at Santis cannot measure upward flashes with only an ICC (approximately \(50\) %, Diendorfer et al., 2015).
Direct UL current measurements are the crucial prerequisite for constructing the random forest models, which are extended to the larger study domain after being trained on the tower data. The combination of data from both towers allows the construction of the two types of models that diagnose UL-LLS and UL-LLS + UL-noLLS, respectively.
### UL-LLS at wind turbines and study domain
Remotely detected lightning data by the LLS EUCLID and wind turbine locations derived from OpenStreetMap serve as verification of the statistical models assessing the risk of UL-LLS for the selected study domain.
Within the study domain of 50\({}^{\circ}\)N-54\({}^{\circ}\)N and 6\({}^{\circ}\)E-16\({}^{\circ}\)E, 27 814 wind turbines had been installed by the end of \(2020\) (Fig. 1). Having extracted the exact locations of these wind turbines, lightning strikes within a circular area of 0.003\({}^{\circ}\) radius (approximately 300 m) detected by EUCLID are identified and assumed to be UL. EUCLID measures DL with a high flash detection efficiency of more than \(90\,\%\) (Schulz et al., 2016). As mentioned, UL might be detected less efficiently (< \(50\,\%\), Diendorfer et al., 2015).
Due to its destructive potential and its severe underestimation in current lightning protection standards, UL is explicitly accounted for when investigating the risk of lightning at wind turbines in the study domain. The tower-trained models are based on UL data from throughout the year. However, as UL dominates over DL in the colder season, only the months from October to April, from October \(2018\) until December \(2020\), are considered in the verification part of the study. Further, since DL is dominant in the warmer season, extracting lightning strikes at wind turbines over the whole year would introduce ambiguity in distinguishing DL from UL.
Figure 1: Geographic overview of the instrumented tower locations (Gaisberg and Säntis) as well as the study domain (box). Green dots are manually identified wind turbine locations based on OpenStreetMap \(2020\). Right: topographic map of study domain showing altitude above mean sea level. Data taken from Shuttle Radar Topography Mission (Farr and Kobrick, 2000).
### Meteorological data
ERA5 provides hourly reanalysis of the state of the atmosphere. It has a resolution of \(31\) km horizontally (grid cell size of 0.25\({}^{\circ}\) \(\times\) 0.25\({}^{\circ}\)) and \(137\) levels vertically. This study uses 35 directly available and derived variables at the surface, on model levels and integrated vertically. These reflect variables relevant for cloud electrification, lightning and thunderstorms (Morgenstern et al., 2022). A full list of variables can be found in the Appendix A. Data are spatially and temporally bilinearly interpolated to each Gaisberg and Santis Tower UL observation as well as to each grid cell within the study domain in the verification part of this study.
## 3 Methodological procedures and findings from the instrumented towers
This section provides the required background information on the basic methods as well as important outcomes from the analysis at the instrumented Gaisberg Tower and Santis Tower. Three different aspects are covered in the following: first, the principle of how the basic model, i.e., a random forest, is constructed; second, the performance of the models; and third, which variables are most important for identifying favorable conditions for the occurrence or non-occurrence of UL.
### Construction and verification of the tower-trained random forests
A machine learning technique that has recently been widely adopted in various scientific fields is used to link larger-scale meteorology and the occurrence of UL at the instrumented towers. Random forests are highly flexible and able to handle nonlinear effects, capturing complex interactions with respect to the stated modeling problem (Strobl et al., 2009).
The occurrence versus the non-occurrence of UL is a binary classification problem which is tackled using \(35\) larger-scale meteorological variables (predictors). Each meteorological predictor is linked to a situation with or without UL at the Gaisberg or Santis Tower using a random forest. A random forest combines predictions from several decision trees, learned on randomly chosen subsamples of the input data.
Specifically, the trees in the random forest are constructed by capturing the association between the binary response and each of the predictor variables using permutation tests (also known as conditional inference, see Strasser and Weber (1999)). The idea is that, in each step of the recursive tree construction, the one predictor variable is selected which has the highest (most significant) association with the response variable. Then, the dataset is split with respect to this predictor variable in order to separate the different response classes as well as possible. Splitting is repeated recursively in each of the subsets of the data until a certain stopping criterion (e.g., regarding significance or subsample size) is met. The forest combines \(500\) such trees, where each tree is learned on a randomly subsampled two-thirds of the full data set, considering only six randomly selected predictors in each split. Finally, the random forest averages the predictions from the ensemble of trees, which stabilizes and enhances the predictive performance. See Hothorn et al. (2006) and Hothorn and Zeileis (2015) for more details on the algorithm and implementation.
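For illustration, the ensemble settings quoted above can be sketched as follows. Note that the study uses conditional inference forests (as implemented, e.g., by partykit::cforest in R), which select split variables via permutation tests and subsample without replacement; the scikit-learn random forest below differs in the split criterion and uses bootstrap samples, so it is only an illustrative analogue.

```python
# Illustrative analogue of the forest settings: 500 trees, each fit on a
# 2/3-sized sample, with 6 candidate predictors considered at each split.
from sklearn.ensemble import RandomForestClassifier

forest = RandomForestClassifier(
    n_estimators=500,     # 500 trees in the ensemble
    max_features=6,       # 6 randomly selected predictors per split
    max_samples=2 / 3,    # each tree sees two-thirds of the data
    bootstrap=True,       # (the paper subsamples without replacement)
    random_state=0,
)
# X: (n_situations, 35) ERA5 predictors; y: 1 if UL occurred, else 0
# forest.fit(X, y); p_UL = forest.predict_proba(X_new)[:, 1]
```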
To validate the resulting models, the input data are split into training and testing data samples. On the training data, the models are trained and on the unseen testing data the diagnostic ability is assessed. Leave-one-out cross-validation is used for validating the models for UL-LLS and UL-LLS + UL-noLLS. The first model for UL-LLS uses both Santis data and Gaisberg data to increase the size of the training data. The particular flash type that cannot be detected at the Santis Tower is left out from the Gaisberg data during the training procedure to ensure consistency. The second model for UL-LLS + UL-noLLS uses only Gaisberg data, as only the Gaisberg Tower provides full information on all subtypes of UL.
Between 2000 and 2015, the Gaisberg Tower experienced \(247\) unique days with UL events. Between 2010 and 2017, the Santis Tower experienced \(186\) unique days. Combining UL days from both towers yields \(406\) unique days with UL. Each training data set leaves out one of the \(247\) (406) days with UL, which is then used as test data. This is repeated until each of the \(247\) (406) days has been left out once for training the random forest models. This results in \(247\) (406) different models trained on equal numbers of situations with and without UL.
The input model response (i.e., did UL occur or not) is sampled such that the two classes are balanced, i.e., situations with and without UL are present with equal proportions. To assess the models' performance, the models diagnose the conditional probability on data not considered during training, i.e., on the respective day left out. We call the probability conditional due to the balanced model response setup. To diagnose the conditional probability of UL on days without UL as well, days without UL from each season are randomly sampled between \(2000\) and \(2017\). High diagnostic ability relates to high probabilities whenever UL occurred at Gaisberg or Santis in the particular situation (i.e., a high true positive rate) and low probabilities whenever no UL occurred (i.e., a low false positive rate).
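A compact sketch of this leave-one-UL-day-out scheme with balanced class sampling, assuming numpy arrays X (the 35 ERA5 predictors per situation), y (UL occurrence), and day (calendar-day label per situation) as placeholders for the study's data:

```python
# Sketch of leave-one-UL-day-out cross-validation with balanced classes:
# each UL day is held out once; every training set pairs the remaining UL
# situations with an equal number of randomly drawn non-UL situations.
import numpy as np
from sklearn.base import clone

def loo_day_probs(model, X, y, day, rng=np.random.default_rng(1)):
    probs = {}
    for d in np.unique(day[y == 1]):          # loop over days with UL
        train = day != d
        pos = np.where(train & (y == 1))[0]
        neg = rng.choice(np.where(train & (y == 0))[0],
                         size=len(pos), replace=False)
        idx = np.concatenate([pos, neg])      # balanced training sample
        m = clone(model).fit(X[idx], y[idx])
        probs[d] = m.predict_proba(X[day == d])[:, 1]  # conditional UL prob.
    return probs

# Usage with the forest sketched in Sect. 3.1: loo_day_probs(forest, X, y, day)
```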
### Performance of the tower-trained random forests
The tower-trained random forest models can reliably diagnose both UL-LLS and UL-LLS + UL-noLLS when validated on unseen withheld data from the towers. Figure 2 summarizes the cross-validated diagnostic ability according to the random forests for UL-LLS + UL-noLLS (Gaisberg) and UL-LLS (Gaisberg + Santis). Both model ensembles show a similarly good diagnostic performance. The diagnosed median conditional probabilities are about \(0.8\) given that UL was observed in the respective situation (minute). This indicates a high true positive rate. Similarly, for situations without lightning (right), the conditional probabilities are low indicating a low false positive rate.
The fact that the random forest including UL-noLLS has the highest diagnostic ability demonstrates that the proportion of UL that cannot be detected by conventional LLS can indeed be reliably diagnosed from larger-scale meteorology alone. This supports the idea of also investigating the risk of unverifiable UL-noLLS and not only of UL-LLS.
### Meteorological drivers for UL-LLS at the instrumented towers
Random forests allow the influence of individual variables on the models' diagnostic performance to be assessed by computing the so-called permutation variable importance. The idea is to break up the relationship between the response variable and one predictor variable by neglecting its information: the values of that predictor are randomly shuffled (permuted), and the resulting decrease in diagnostic performance is measured. Figure 3 visualizes the median permutation variable importance across \(100\) different random forest models for UL-LLS. Each of the \(100\) models is based on a balanced proportion of situations with UL and randomly chosen situations without UL. Results for the UL-LLS and UL-LLS + UL-noLLS models are very similar.
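A bare-bones version of this computation could look as follows, assuming a fitted classifier and held-out test arrays; using the area under the ROC curve as the skill score is an illustrative choice, not necessarily the score used in the study.

```python
# Sketch of permutation variable importance: shuffle one predictor at a
# time and record the drop in diagnostic skill (here AUC, as an example).
import numpy as np
from sklearn.metrics import roc_auc_score

def permutation_importance(model, X_test, y_test, n_repeats=10, seed=0):
    rng = np.random.default_rng(seed)
    base = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    drops = np.zeros(X_test.shape[1])
    for j in range(X_test.shape[1]):
        scores = []
        for _ in range(n_repeats):
            X_perm = X_test.copy()
            rng.shuffle(X_perm[:, j])   # break the response/predictor link
            scores.append(roc_auc_score(
                y_test, model.predict_proba(X_perm)[:, 1]))
        drops[j] = base - np.mean(scores)  # larger drop = more important
    return drops
```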
Convective precipitation has the largest influence on the occurrence of UL according to the random forests based on direct observations from the Gaisberg and Säntis Towers (Fig. 3); neglecting the information of this driver variable reduces the diagnostic performance most. The second and third most important variables are the maximum updraft velocity and the convective available potential energy (CAPE), respectively. Statistically summarizing the three most important variables shows that when UL occurs, CAPE is rather low at both the Säntis Tower and the Gaisberg Tower (median value of \(68\) J kg\({}^{-1}\)), convective precipitation has a median value of \(3.8\) mm, and the maximum vertical updraft velocity has a median of \(-1.5\) m s\({}^{-1}\). All values are larger in magnitude than the average over every single hour in the considered time range. In comparison to situations with deep convection, however, the magnitudes are not exceptionally high; with deep convection, CAPE values in particular are commonly higher than \(500\) J kg\({}^{-1}\). An important reason for this might be that UL at the instrumented towers occurs approximately uniformly throughout the year, whereas intense thunderstorms with deep convection and high CAPE values occur mainly in the summer season. This might further suggest that the occurrence of UL requires a combination of many different interacting processes to form favorable conditions, which might be even more complex than providing favorable conditions for deep convection.
Other influential variables are the maximum precipitation rate, the vertical extent of the thundercloud, the amount of ice crystals and solid hydrometeors, and the \(2\) m dewpoint temperature.
Figure 2: Distributions of diagnosed conditional probabilities in situations with or without UL events. Left: conditional UL probability given that UL was observed in the particular minute (true positive) based on Gaisberg data including all subtypes of UL. Center: conditional UL probability given that UL was observed in the particular minute based on Gaisberg and Säntis data combined. Right: conditional UL probability on randomly sampled days without UL events (false positive).
## 4 UL at wind turbines
The extraction of wind turbine locations and the identification of lightning strikes within \(300\) m of them in the colder season (ONDJFMA) show that some regions within the study domain experience UL more frequently than others (see Fig. 4). Interestingly, regions that are more often affected by UL (panel (b), dark pink) coincide with regions with many wind turbines. In general, however, regions with a high number of wind turbines (panel (a), dark green) do not necessarily coincide with a high number of UL events, as can be seen, for instance, in the northeastern parts of the study domain.
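As an aside, the turbine/flash matching step can be sketched with a KD-tree query, assuming flash and turbine positions are already projected to metric coordinates; all names below are hypothetical.

```python
# Sketch of identifying flashes within 300 m of any wind turbine.
import numpy as np
from scipy.spatial import cKDTree

def flashes_near_turbines(turbine_xy, flash_xy, radius_m=300.0):
    """Boolean mask of flashes within radius_m of the nearest turbine."""
    tree = cKDTree(turbine_xy)            # turbine_xy: (n_turbines, 2), meters
    dist, _ = tree.query(flash_xy, k=1)   # distance to nearest turbine
    return dist <= radius_m

turbines = np.random.uniform(0, 50_000, size=(200, 2))   # placeholder data
flashes = np.random.uniform(0, 50_000, size=(5_000, 2))
mask = flashes_near_turbines(turbines, flashes)
print(f"{mask.sum()} of {len(flashes)} flashes within 300 m of a turbine")
```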
The following sections present and discuss the results of extending the findings from the instrumented towers to the study domain, in which wind turbine locations are extracted and the lightning activity at them is analyzed.
### Diagnosing UL-LLS at wind turbines from larger-scale meteorological conditions
The random forest models for UL-LLS and UL-LLS + UL-noLLS based on data from the two instrumented towers identified the larger-scale meteorological variables that are most important for distinguishing situations with and without UL.
The tower-trained random forest models are now applied to the larger study domain to assess the risk of UL at wind turbines. Lightning measurements from LLS data serve to verify the results at the identified wind turbine locations.
The following results are based on a procedure similar to that described in Sect. 3.2, except that each grid cell (\(31\) km \(\times\) \(31\) km) of the study domain is used as test data instead of the cross-validated data from the instrumented towers.
Figure 3: Median permutation variable importance according to \(100\) different random forests based on balanced proportions of situations with and without UL at the Gaisberg and Säntis Tower.
The tower-trained random forest models are applied to each grid cell of the study domain. To increase the robustness of the results, \(100\) different random forest models based on observations from the Gaisberg and Säntis Towers are again used to diagnose the conditional probability of UL for the selected case studies over the study domain. The results in this section visualize the median conditional probabilities diagnosed by the random forest models.
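Conceptually, applying the tower-trained ensemble to the gridded domain reduces to the following sketch, assuming hypothetical hourly ERA5 predictors on a (lat, lon, feature) grid and a list of 100 fitted forests.

```python
# Sketch of diagnosing the median conditional UL probability per grid cell.
import numpy as np

def diagnose_grid(forests, era5_grid):
    n_lat, n_lon, n_feat = era5_grid.shape
    flat = era5_grid.reshape(-1, n_feat)           # one row per grid cell
    probs = np.stack([f.predict_proba(flat)[:, 1] for f in forests])
    return np.median(probs, axis=0).reshape(n_lat, n_lon)  # median map
```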
### Case studies: UL-LLS at wind turbines
To illustrate the diagnostic ability of the tower-trained random forests for UL-LLS on days with UL events, three case study days are selected from the colder seasons between \(2018\) and \(2020\) in the study domain.
Figure 4: Panel (a): number of wind turbines per grid cell derived from © OpenStreetMap \(2020\) data. Panel (b): number of hours per grid cell with lightning at wind turbines derived from EUCLID data.
Figure 5: Larger-scale meteorological setting on 4 March 2019 over the study domain. The left column illustrates the setting at 13 UTC, the right column at 14 UTC. From top to bottom: spatial distributions of the isolines of the \(850\) hPa temperature (in intervals of 1 K), convective precipitation, the maximum large-scale updraft velocity (negative values indicate upward motion), and CAPE. Darker colors indicate higher magnitudes. Dark gray dots in all panels are flashes within the considered hour and ERA5 grid cell derived from LLS EUCLID data.
Figure 6: Median diagnosed conditional probability of UL according to \(100\) random forest models based on Gaisberg and Säntis Tower data (red areas). Yellow symbols are flashes within the considered hour derived from EUCLID data. Gray shaded areas are grid cells without wind turbines.
The selected case study days are characterized by typical weather situations for the colder seasons in the mid-latitudes. The atmosphere in the transition seasons and in winter tends to be highly variable and influenced by the succession of cyclones and anticyclones determining the meteorological setting (Perry, 1987).
In particular, the development and progression of mid-latitude cyclones provide favorable conditions for so-called wind-field thunderstorms (Morgenstern et al., 2022). Among other characteristics, this thunderstorm type is associated with strong updrafts, high amounts of precipitation, and low but non-zero CAPE.
The first case study is considered in more detail with regard to the drivers identified at the instrumented towers (Fig. 3). Figure 5 illustrates the larger-scale isotherm locations and the spatial distributions of convective precipitation, maximum updraft velocity, and CAPE on 4 March 2019 at 13 UTC and 14 UTC. LLS-detected lightning events at the identified wind turbines within the particular hour are indicated by dark gray dots.
The meteorological setting is determined by the passage of a cold front ahead of a trough around noon. Densely packed isotherms at \(850\) hPa crossing northern and central Germany from west to east indicate the approximate location of the cold front in panels (a) and (b). The cold front brings locally enhanced amounts of convective precipitation in (c) and (d), strong updrafts indicated by large negative values in (e) and (f), and slightly increased but generally low CAPE in (g) and (h) compared to deep convection in summer. The three variables reach their maxima in slightly different areas within the study domain along the cold front: convective precipitation shows increased values all along the front, whereas the other two variables have more locally concentrated maxima (e.g., the maximum updraft velocity in northern/central Germany).
Figure 6 visualizes, in red colors, the conditional probability diagnosed by the random forest models for all three case study days. Panels (a) and (b) show the results for the case study discussed in Fig. 5. The diagnosed pattern results from combining the influence of the three driver variables, suggesting that no single variable is responsible for the resulting probability map; rather, it is the interaction of different influential variables that yields areas with an increased risk of experiencing UL.
The yellow symbols again show lightning strikes during the considered hour. Identified lightning events in yellow require a wind turbine within a maximum distance of \(300\) m, as described in Sect. 2; all other tall structures that might have experienced UL are not considered in this figure. The diagnosed probabilities therefore do not depend on wind turbine locations, meaning that high probabilities may be diagnosed even where no wind turbine is installed. Grid cells without any wind turbines are shaded in gray.
All three case study days in Fig. 6 show that areas with an increased diagnosed probability of UL coincide well with the identified lightning events in the respective hour over the study domain. In all three case studies there is a clear separation between areas with a very low and areas with a very high diagnosed risk of experiencing UL.
On 11 February 2020, shown in panels (c) and (d) of Fig. 6, the study domain is in a strong westerly flow, again associated with locally increased convective precipitation, CAPE, and strong updrafts (not shown here). On 17 February 2020, the study domain is crossed by a cold front at higher altitudes (above \(500\) hPa). Regardless of the different meteorological situation, the conditions are again similar to the other case studies, showing increased values in the three driver variables that strongly influence the diagnosed conditional probability.
### Risk assessment of UL at wind turbines
Identifying areas with an increased risk of UL due to larger-scale meteorological conditions is a valuable step towards the risk assessment of lightning at wind turbines. The case studies clearly demonstrate that observed lightning at wind turbines coincides with areas of increased probability diagnosed by the random forest models. The following analysis considers all events within the considered period in which lightning at wind turbines was identified. In addition to the models for UL-LLS, the random forests for UL-LLS + UL-noLLS are now also applied to the study domain and the considered time period.
The considered study period, comprising the transition seasons and winter from \(2018\) to \(2020\), counts in total \(185\) event days with \(1{,}027\) single flash hours and \(18{,}602\) single flash events. These numbers serve as a measure to verify the probabilities diagnosed by the random forest models. Note that they are a lower limit of the number of flashes that actually occurred: considering the uncertainty of manually identifying flashes at wind turbines, as well as the uncertainty of detecting UL by the LLS, suggests a significantly larger number of actual lightning events at wind turbines. Furthermore, this verification approach exclusively considers lightning at wind turbines and neglects all other tall structures, such as radio towers, in the study domain that might be affected by UL. In the following, all days within the considered study period are taken as new data for the random forest models to diagnose the conditional probabilities on an hourly basis.
The objective is to identify regions that are more frequently affected by a higher risk of UL than other regions according to the random forest models. For this purpose, the number of hours in each ERA5 grid cell (\(0.25^{\circ}\times 0.25^{\circ}\)) in which the conditional probability exceeds the threshold of \(0.5\) is counted.
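In code, the exceedance count amounts to a one-line reduction, assuming a hypothetical array `probs` of median diagnosed probabilities with layout (hours, lat, lon).

```python
# Sketch of counting hours per grid cell above the probability threshold.
import numpy as np

probs = np.random.rand(12480, 16, 40)            # placeholder diagnosed values
exceed_hours = (probs > 0.5).sum(axis=0)         # hours above 0.5 per cell
exceed_fraction = exceed_hours / probs.shape[0]  # relative proportion
```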
#### Risk assessment of UL-LLS at wind turbines
Figure 7 (a) illustrates that some regions in the study domain experience a high risk of UL-LLS more often than others. The western and southwestern parts of the study domain have a considerably higher probability of UL-LLS. This is also in agreement with panel (b) of Fig. 4, which shows the observed hours in which at least one lightning event at a wind turbine occurred within the respective grid cell.
Interestingly, areas with higher UL-LLS probabilities in Fig. 7 roughly coincide with regions of elevated topography in the southern third of the domain (cf. Fig. 1). Possible explanations are an increased lightning-effective height (e.g., Shindo, 2018) of the turbines and increased chances for thunderstorm formation through orographic lifting and thermally-induced breezes (Kirshbaum et al., 2018). Sea breezes might also be an explanation for the higher probabilities in the northwestern, sea-covered part of the domain.
The successful transfer of the UL-LLS model, trained with meteorological data on direct tower measurements, to a larger region and its independent verification at wind turbines show the potential of our approach to produce regionally varying risk maps, which might in turn lead to regionally varying (voluntary or enforced) lightning protection standards for wind turbines.
#### Risk assessment of UL-LLS + UL-noLLS at wind turbines
The successful transfer of the tower-trained and verified UL-LLS model to a larger domain lends credence to taking the same step with the tower-trained model for all upward lightning (UL-LLS and UL-noLLS), although no data exist for an independent verification.
Panel (b) in Fig. 7 indicates that more flashes are expected when the UL flash type not detectable by LLS is additionally accounted for. The pattern of areas with an increased risk of experiencing UL is similar, even though some of the more frequently affected areas are enlarged. This suggests that similar larger-scale meteorological mechanisms lead to both the UL-LLS and UL-noLLS flash types.
The increase in risk is most pronounced in the regions with elevated topography in the southern part of the domain and in the coastal northwestern region.
## 5 Conclusions
Upward lightning (UL) initiating at tall structures such as wind turbines is much more destructive than downward lightning (DL). Each UL flash starts with an initial continuous current (ICC) lasting about ten times longer than in DL, transferring much more charge to the tall structure. Furthermore, direct upward lightning measurements suggest that less than \(50\) % of UL events can be detected by most lightning location systems (LLS), since these systems are unable to spot UL with only an ICC.
UL directly measured at the instrumented tower at Gaisberg shows little seasonal variation. However, current lightning protection standards are based on the annual flash density derived from LLS data, which is clearly dominated by DL in the warm season: UL-noLLS is completely neglected and UL in the cold season is highly underestimated. Basic knowledge about the occurrence of UL is still incomplete, impeding a proper risk assessment of UL at wind turbines.
Neglecting UL-noLLS and the importance of the cold season for UL will therefore lead to a considerable underestimation of the risk of UL to wind turbines. This study combines rare direct UL measurements with larger-scale meteorological data in a machine learning model in order to estimate the risk of all UL (UL-LLS and UL-noLLS) at wind turbines.
The first step is to train and validate two different random forest models based on long-term observations from two specially instrumented towers: one model accounts only for UL-LLS and one accounts for UL-LLS + UL-noLLS. The model input data are direct UL measurements from the Gaisberg Tower (Austria, \(2000\)–\(2015\)) and from the Säntis Tower (Switzerland, \(2010\)–\(2017\)). While the sensor at the Gaisberg Tower also measures UL-noLLS, the sensor at the Säntis Tower misses most of these events.
In a second step, the random forest models are extended to a larger study domain (\(50^{\circ}\)N–\(54^{\circ}\)N, \(6^{\circ}\)E–\(16^{\circ}\)E) to identify areas with an increased risk of UL in the colder season (ONDJFMA). For verification, all lightning strikes at wind turbines in this domain are extracted from LLS and OpenStreetMap data and compared to the probabilities diagnosed by the random forests.
Results show that UL can be reliably diagnosed by the tower-trained random forest models at the Gaisberg and Säntis Towers. The larger-scale meteorological drivers are large amounts of (convective) precipitation, strong vertical updraft velocities, and slightly increased CAPE. Furthermore, the vertical extent of the cloud as well as the amount of ice crystals and solid hydrometeors are important variables.
The extension of the random forests to a larger domain shows that the probability maps coincide with observed lightning strikes at wind turbines. Extending the models trained at the Gaisberg Tower, which include UL-noLLS flashes, reveals that areas with an increased risk of UL are expected to experience UL even more often.
The western and southern parts of the domain with elevated topography, as well as the coastal region in the northwest, are most at risk of UL at wind turbines. This study demonstrates that direct UL measurements at an instrumented tower can be reliably modeled from larger-scale meteorological conditions using a machine learning model (random forest). The study also proposes a novel way to justify the transfer of such a model to a larger region by using UL-LLS data at wind turbine locations. Consequently, regionally detailed risk maps of UL at wind turbines can be produced.
Figure 7: Panels (a) and (b): potential maps for UL in the colder season (ONDJFMA) from \(2018\) to \(2020\). Orange colors show the median number of hours per grid cell exceeding a conditional probability of \(0.5\) according to \(100\) random forest models. Panel (a) shows the results of the models based on the combined Gaisberg and Säntis data; panel (b) shows the results of the models based on Gaisberg data that also include UL-noLLS. The relative proportions of the total of \(12{,}480\) hours are given as reference.
## Appendix A Additional material
### Data availability
ERA5 data are freely available for download at [https://cds.climate.copernicus.eu](https://cds.climate.copernicus.eu) (Hersbach et al., 2020). EUCLID data and direct observations from the Gaisberg Tower are available only on request; for more details contact Wolfgang Schulz.
### Software
All calculations, including the preparation of the final data sets, modeling, and prediction, were performed in R (R Core Team, 2021) using the packages netCDF4 (Pierce, 2019), partykit (Hothorn and Zeileis, 2015), and ggplot2 (Wickham, 2016). Retrieving the raw data and deriving further variables from ERA5 required Python3 (Van Rossum and Drake, 2009) and cdo (Schulzweida, 2019).
### Risk assessment of UL at wind turbines using a higher probability threshold
In Sect. 4.2 the model results for the risk assessment of UL-LLS and UL-LLS + UL-noLLS are presented by counting the hours exceeding a conditional probability of \(0.5\). Figure A1 illustrates the risk assessment using a higher probability threshold of \(0.8\). The number of hours exceeding this threshold is lower by about a factor of two compared to the \(0.5\) threshold; however, the regional pattern is still similar, with maxima in the west/southwest of the study domain.
**Figure A1.** Panels (a) and (b): potential maps for UL in the colder season (ONDJFMA) from \(2018\) to \(2020\). Orange colors show the median number of hours per grid cell exceeding a conditional probability of \(0.8\) according to \(100\) random forest models. Panel (a) shows the results of the models based on the combined Gaisberg and Säntis data; panel (b) shows the results of the models based on Gaisberg data that also include UL-noLLS. The relative proportions of the total of \(12{,}480\) hours are given as reference.
| Large-scale variables | Unit |
| --- | --- |
| cloud base height above ground | m agl |
| convective precipitation (rain + snow) | m |
| large scale precipitation | m |
| cloud size | m |
| maximum precipitation rate | kg m\({}^{-2}\) s\({}^{-1}\) |
| ice crystals (total column, tciw) | kg m\({}^{-2}\) |
| solid hydrometeors (total column, tcsw) | kg m\({}^{-2}\) |
| supercooled liquid water (total column, tcslw) | kg m\({}^{-2}\) |
| water vapor (total column) | kg m\({}^{-2}\) |
| vertical integral of divergence of cloud frozen water flux | kg m\({}^{-2}\) s\({}^{-1}\) |
| _vertical transport of liquids around \(-10\) °C_ | kg Pa s\({}^{-1}\) |
| _ice crystals (\(-10\) °C to \(-20\) °C)_ | kg m\({}^{-2}\) |
| _ice crystals (\(-20\) °C to \(-40\) °C)_ | kg m\({}^{-2}\) |
| _cloud water droplets (\(-10\) °C to \(-20\) °C)_ | kg m\({}^{-2}\) |
| _solid hydrometeors (\(-10\) °C to \(-20\) °C)_ | kg m\({}^{-2}\) |
| _solid hydrometeors (\(-20\) °C to \(-40\) °C)_ | kg m\({}^{-2}\) |
| _solids (cswc + ciwc) around \(-10\) °C_ | kg m\({}^{-2}\) |
| _liquids (clwc + crwc) around \(-10\) °C_ | kg m\({}^{-2}\) |
| mean vertically integrated moisture convergence | kg m\({}^{-2}\) s\({}^{-1}\) |
| _water vapor (\(-10\) °C to \(-20\) °C)_ | kg m\({}^{-2}\) |
| boundary layer height | m |
| surface latent heat flux | J m\({}^{-2}\) |
| surface sensible heat flux | J m\({}^{-2}\) |
| downward surface solar radiation | J m\({}^{-2}\) |
| convective available potential energy | J kg\({}^{-1}\) |
| convective inhibition present | binary |
| mean sea level pressure | Pa |
| _height of \(-10\) °C isotherm_ | m agl |
| boundary layer dissipation | J m\({}^{-2}\) |
| _maximum vertical updraft velocity_ | Pa s\({}^{-1}\) |
| _total cloud shear_ | m s\({}^{-1}\) |
| _wind speed at 10 m_ | m s\({}^{-1}\) |
| _wind direction at 10 m_ | |
| _shear between 10 m and cloud base_ | m s\({}^{-1}\) |

Table 1: Large-scale variables taken from ERA5 and variables derived from ERA5. The derived variables (indicated in italics) are suggested to be potentially important in the charging process of a thundercloud or for the development of convection.
_Acknowledgements._ We acknowledge the funding of this work by the Austrian Research Promotion Agency (FFG), project no. 8/2656, and by the Austrian Science Fund (FWF), grant no. P 31836. We thank the EMC Group of the Swiss Federal Institute of Technology in Lausanne (EPFL) for providing the data of the Säntis Tower strikes. Finally, we thank Siemens, the operator of BLIDS, for providing EUCLID data.
_Author contributions._ Isabell Stucke performed the investigation, wrote the software, visualized the results, and wrote the paper. Deborah Morgenstern, Thorsten Simon, and Isabell Stucke performed the data curation, built the data set, and derived variables based on ERA5 data. Thorsten Simon contributed coding concepts. Georg J. Mayr provided support on the meteorological analysis, data organization, and funding acquisition. Achim Zeileis supervised the formal analysis and the interpretation of the statistical methods. Achim Zeileis, Georg J. Mayr, and Thorsten Simon are the project administrators and supervisors. All authors contributed to the conceptualization of this paper, discussed the methodology, evaluated the results, and commented on the paper.
_Competing interests._ The authors declare that they have no conflict of interest. |
2307.07288 | Implicit Neural Feature Fusion Function for Multispectral and
Hyperspectral Image Fusion | Multispectral and Hyperspectral Image Fusion (MHIF) is a practical task that
aims to fuse a high-resolution multispectral image (HR-MSI) and a
low-resolution hyperspectral image (LR-HSI) of the same scene to obtain a
high-resolution hyperspectral image (HR-HSI). Benefiting from powerful
inductive bias capability, CNN-based methods have achieved great success in the
MHIF task. However, they lack certain interpretability and require convolution
structures be stacked to enhance performance. Recently, Implicit Neural
Representation (INR) has achieved good performance and interpretability in 2D
tasks due to its ability to locally interpolate samples and utilize multimodal
content such as pixels and coordinates. Although INR-based approaches show
promise, they require extra construction of high-frequency information
(\emph{e.g.,} positional encoding). In this paper, inspired by previous work of
MHIF task, we realize that HR-MSI could serve as a high-frequency detail
auxiliary input, leading us to propose a novel INR-based hyperspectral fusion
function named Implicit Neural Feature Fusion Function (INF). As an elaborate
structure, it solves the MHIF task and addresses deficiencies in the INR-based
approaches. Specifically, our INF designs a Dual High-Frequency Fusion (DHFF)
structure that obtains high-frequency information twice from HR-MSI and LR-HSI,
then subtly fuses them with coordinate information. Moreover, the proposed INF
incorporates a parameter-free method named INR with cosine similarity (INR-CS)
that uses cosine similarity to generate local weights through feature vectors.
Based on INF, we construct an Implicit Neural Fusion Network (INFN) that
achieves state-of-the-art performance for MHIF tasks of two public datasets,
\emph{i.e.,} CAVE and Harvard. The code will soon be made available on GitHub. | ShangQi Deng, RuoCheng Wu, Liang-Jian Deng, Ran Ran, Gemine Vivone | 2023-07-14T11:59:47Z | http://arxiv.org/abs/2307.07288v2 | # INF3: Implicit Neural Feature Fusion Function for Multispectral and Hyperspectral Image Fusion
###### Abstract.
Multispectral and Hyperspectral Image Fusion (MHIF) is a practical task that aims to fuse a high-resolution multispectral image (HR-MSI) and a low-resolution hyperspectral image (LR-HSI) of the same scene to obtain a high-resolution hyperspectral image (HR-HSI). Benefiting from a powerful inductive bias capability, CNN-based methods have achieved great success in the MHIF task. However, they lack interpretability to a certain extent and require convolution structures to be stacked to enhance performance. Recently, Implicit Neural Representation (INR) has achieved good performance and interpretability in 2D tasks due to its ability to locally interpolate samples and utilize multimodal content such as pixels and coordinates. Although INR-based approaches show promise, they require extra construction of high-frequency information (_e.g._, positional encoding). In this paper, inspired by previous work on the MHIF task, we realize that HR-MSI can serve as a high-frequency detail auxiliary input, leading us to propose a novel INR-based hyperspectral fusion function named Implicit Neural Feature Fusion Function (INF\({}^{3}\)). As an elaborate structure, it solves the MHIF task and addresses deficiencies in INR-based approaches. Specifically, our INF\({}^{3}\) designs a Dual High-Frequency Fusion (DHFF) structure that obtains high-frequency information twice, from HR-MSI and LR-HSI, and then subtly fuses it with coordinate information. Moreover, the proposed INF\({}^{3}\) incorporates a parameter-free method named INR with cosine similarity (INR-CS) that uses cosine similarity to generate local weights from feature vectors. Based on INF\({}^{3}\), we construct an Implicit Neural Fusion Network (INFN) that achieves state-of-the-art performance on two public MHIF datasets, _i.e._, CAVE and Harvard. The code will soon be made available on GitHub.
Implicit Neural Representation (INR), Multispectral and Hyperspectral Image Fusion (MHIF) +
Footnote †: Both authors contributed equally to this research.
+
Footnote †: Corresponding author
optical sensor systems face limitations in incident energy, necessitating tradeoffs between spatial resolution and spectral refinement. In particular, hyperspectral (HS) images with more than 100 bands often have a relatively low spatial resolution, while multispectral (MS) images with a limited number of bands have a relatively high spatial resolution. Therefore, exploring the fusion of a high spatial resolution multispectral image (HR-MSI) and a low spatial resolution hyperspectral image (LR-HSI) of the same scenario into a high spatial resolution hyperspectral image (HR-HSI) has attracted increasing attention. The aim is to obtain as rich and precise HR and HS data as possible.
In recent times, CNN-based methods have achieved considerable success in multispectral and hyperspectral image fusion due to their remarkable ability to extract high-level features from input data. Researchers commonly design two-stream fusion networks that process the HR-MSI and LR-HSI separately before fusing them. To preserve both spatial and spectral information, existing work attempts to design attention modules that produce high-quality spatial details. However, most existing networks are based on a generic CNN framework, which lacks interpretability for MHIF tasks.
Motivated by recent advancements in Implicit Neural Representation (INR) for 3D object/scene representation (Garfinkel et al., 2017; He et al., 2017; Wang et al., 2018) and image super-resolution (Chen et al., 2018; Wang et al., 2018; Wang et al., 2018), we propose to re-examine the fusion process from a different perspective. INR maps continuous spatial coordinates to signals in a target domain through an implicit function. To obtain prior information from different scenes and integrate it with the implicit function, an encoder is typically employed to extract a latent code from the scene/imagery. For 2D tasks, the implicit function usually takes a weighted average of a fixed number of neighboring latent codes to ensure continuity of the output values. However, due to the lack of sufficient prior information across neighboring coordinates, the weights of such implicit interpolation commonly depend on area (Chen et al., 2018) or network parameters (Wang et al., 2018), which limits performance or interpretability. We therefore generate fusion weights using a parameter-free cosine similarity computed between latent codes. Additionally, the MLP-ReLU structure used by INR has an inherent bias against high-frequency information (Garfinkel et al., 2017) that is not easily eliminated during training. We therefore propose aligning HR-MSI and LR-HSI images to extract high-frequency information in a multiscale and multimodal manner. Finally, we integrate the weight-generation and image-fusion learning framework into a unified implicit function, called the Implicit Neural Feature Fusion Function (INF\({}^{3}\)).
The contributions of this paper are as follows:
* We propose an Implicit Neural Feature Fusion Function (INF\({}^{3}\)), which is the first attempt to apply Implicit Neural Representation (INR) to the Multispectral and Hyperspectral Image Fusion (MHIF) task. In the fusion stage, we utilize only an MLP layer, which reduces the burden brought by the massive use of convolutions.
* To enrich the network's input, our INF\({}^{3}\) adopts a Dual High-Frequency Fusion (DHFF) structure across three modalities, which combines high-frequency spatial information at different resolutions. Concretely, this design allows the MLP layer in INF\({}^{3}\) to access more high-frequency information for detail recovery.
* The proposed INR with cosine similarity (INR-CS) utilizes cosine similarity to generate weights, making better use of the information inside the pixels rather than relying on distance or area. The method does not depend on any extra parameters or network structures; instead, it generates weights from the cosine similarity between feature vectors and fuses local information accordingly.
* Based upon INF\({}^{3}\), we construct an Implicit Neural Fusion Network (INFN) using an encoder-decoder architecture. The proposed INFN achieves state-of-the-art performance on two public datasets, _i.e._, CAVE and Harvard. In particular, the proposed decoder has a lightweight structure yet prevents the overfitting of INR structures on MHIF tasks.
## 2. Related Work
### CNNs in MHIF
Recently, CNN-based techniques have shown significant success in multispectral and hyperspectral image fusion (MHIF) due to their capacity to learn high-level features from input data through end-to-end training. Among these methods, SSRNet (Wang et al., 2018) uses three convolution modules (fusion, spatial edge, and spectral edge) to reconstruct the image, with a loss function connected to the spatial and spectral edges ensuring training reliability. Similarly, ResTFNet (He et al., 2017) utilizes residual structures and a two-stream fusion network to learn input data from different modalities, inspired by the widespread application of ResNet (He et al., 2017) in image super-resolution. MHF-net (Wang et al., 2018) was specifically designed for the MS/HS fusion task, incorporating a well-researched linear mapping that links the HR-HSI image to the HR-MSI and LR-HSI images, as well as clear interpretability. Meanwhile, MoG-DCN (Chen et al., 2018) builds a dedicated sub-network to approximate the degradation matrix and leverages DCN-based image regularization (Chen et al., 2018) for HISR, fully exploiting prior HSI knowledge. For the simultaneous extraction of spatial and spectral information and the production of high-quality details, HSRnet (He et al., 2017) employs channel and spatial attention modules. To ensure bidirectional data consistency and improve accuracy in both the spatial and spectral domains, DBIN (Wang et al., 2018) optimizes the observation model and fusion procedures repeatedly and alternately during reconstruction. Although CNN-based methods are well established, INR-based approaches have shown tremendous potential for both 3D and 2D tasks.
### Implicit Neural Representation
Recently, implicit representations of 3D objects, scenes and shapes have gained significant momentum in research. Traditional discrete explicit representations have been partly replaced by implicit neural representations (INR), which use parameterized MLPs to map coordinate information into signals (coor-MLP) in the target domain. For example, NeRF (He et al., 2017) expanded the input 3D coordinate to a continuous 5D scene representation with a 2D viewing direction, resulting in better renderings of high-frequency scene content than explicit 3D representations such as voxel methods, point cloud, and mesh. DeepSDF (Wang et al., 2018) takes a 3D coordinate and a categorical latent code as input and outputs the signed distance (SDF) at this
coordinate to determine whether it is inside the target shape. Related works have enhanced INR's ability to model 3D surfaces and shapes (Beng et al., 2017; Chen et al., 2018; Wang et al., 2019; Wang et al., 2019). This approach has also been extended to the 2D domain: for example, the Local Implicit Image Function (LIIF) (Chen et al., 2018) extracts a set of latent codes distributed in the LR domain to interpolate the HR target image. Based on LIIF, UltraSR (Wang et al., 2019) applies a residual structure to the 2D INR process and injects the coordinate information multiple times. Furthermore, LTE (Wang et al., 2019) proposes a local texture estimator that characterizes the image information in the Fourier domain and incorporates it with the coordinate information as input to the MLP. SIREN (Wang et al., 2019) proposes an overall implicit neural representation framework that adapts to complex natural signals and their derivatives using a periodic activation function. CRM (Wang et al., 2019) performs image segmentation refinement using implicit neural representations. For multimodal data, JIIF (Wang et al., 2019) proposes using INR to reconstruct depth images in the HR domain, using RGB images as guidance for noisy low-resolution depth inputs. This work strongly inspired us to use INR for multispectral and hyperspectral image fusion. However, previous work has demonstrated the limitations and biases of the MLP-ReLU structure in learning high-frequency information (Wang et al., 2019). Therefore, we focus on designing strategies for the fusion of different modalities to improve the representation of high frequencies, and we add a decoder after the MLP layer to correct this bias.
### Motivation
LR-HSI and HR-MSI provide abundant spectral and spatial information, respectively, making them a valuable resource for image analysis. However, fully utilizing the local content of these images and fusing information from different modalities, such as spatial, spectral, and coordinate information, is challenging. To address this issue, we propose an implicit neural fusion network (INFN) that relies on an implicit neural representation (INR) of the image. INR-based approaches have demonstrated exceptional performance in arbitrary-scale image super-resolution tasks (Chen et al., 2018), frequently employing a multilayer perceptron (MLP) as the fusion component. However, MLPs tend to learn low-frequency information, necessitating additional input of high-frequency data, such as positional or frequency encodings (Wang et al., 2019; Wang et al., 2019). To overcome this limitation, we introduce the implicit neural feature fusion function (INF\({}^{3}\)). Inspired by the multiscale injection branch of SSconv (Shen et al., 2017), INF\({}^{3}\) injects detailed high-frequency information at two scales, specifically using MLPs to learn high-frequency data for the MHIF task. Additionally, to address the challenge of distinguishing feature vectors that are close in distance but different in angle, our INF\({}^{3}\) utilizes the cosine similarity between feature vectors to compute the interpolation coefficients; a sketch of this idea follows below. In detail, we utilize full-size and reduced-size HR-MSI to generate the interpolation weights, eliminating the need for network learning or additional parameters. As a result, our fusion framework demonstrates state-of-the-art performance on two publicly available datasets.
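The following PyTorch fragment sketches the parameter-free cosine-similarity weighting in the spirit of INR-CS, for one query feature and its four neighboring feature vectors; the non-negativity clamp and the normalization are illustrative assumptions, not the paper's exact formulation.

```python
# Sketch of cosine-similarity interpolation weights (INR-CS flavored).
import torch
import torch.nn.functional as F

def cosine_weights(query, neighbors):
    # query: (C,), neighbors: (4, C) -> weights: (4,), summing to one
    sims = F.cosine_similarity(neighbors, query.unsqueeze(0), dim=-1)
    sims = sims.clamp(min=0)                   # keep the weights non-negative
    return sims / sims.sum().clamp(min=1e-8)   # normalize over 4 neighbors

w = cosine_weights(torch.randn(64), torch.randn(4, 64))
print(w, w.sum())   # four weights summing to 1
```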
## 3. Methodology
In this section, we present our INF\({}^{3}\) representation designed for the MHIF task. We first introduce the overall architecture of our implicit neural fusion network (INFN) in Sec. 3.1. Subsequently, we review recent implicit neural representations (INR) for 2D tasks in Sec. 3.2. Finally, we describe the design of INF\({}^{3}\) for the fusion process in Sec. 3.3.
### The Overall Architecture
As shown in Fig. 2, the INFN is divided into two segments: an encoder and a decoder. In practice, directly applying an INR-based approach to MHIF tasks often leads to overfitting. To overcome this challenge and ensure network stability during training, we opt for an encoder-decoder architecture; the supplementary materials include the relevant ablation experiments. Specifically, the encoder stage can be formulated as follows:
\[\mathcal{E}=\text{Encoder}\left(\mathcal{X},\mathcal{Y},\mathcal{C} \right), \tag{1}\]
where \(\mathcal{E}\in\mathbb{R}^{H\times W\times C}\) represents the fusion result of the encoder, \(\mathcal{X}\in\mathbb{R}^{h\times w\times S}\) denotes the LR-HSI, \(\mathcal{Y}\in\mathbb{R}^{H\times W\times s}\) denotes the HR-MSI, and \(\mathcal{C}\in\mathbb{R}^{H\times W\times 2}\) is the normalized 2D coordinate map in the high-resolution (HR) domain. In detail, we represent a pixel by its center position and scale the coordinate map of size \(H\times W\) into the square grid \([-1,1]\times[-1,1]\), which makes it convenient to share the coordinates between the HR and LR domains. The normalization process in the HR domain can be formulated as:
\[\mathcal{C}\left(i,j\right)=\left[-1+\frac{2i+1}{H},-1+\frac{2j+1}{W}\right], \tag{2}\]
where \(i\in[0,H-1]\) and \(j\in[0,W-1]\). To deal with the information of the different modalities, _i.e._, LR-HSI and HR-MSI, we utilize the functions \(\text{F}_{\psi}\) and \(\text{F}_{\phi}\) to extract the spatial and spectral information, respectively. The spectral function can be formulated as follows:
\[\mathcal{S}_{pe}=\text{F}_{\phi}\left(\mathcal{X}\right), \tag{3}\]
where \(\mathcal{S}_{pe}\in\mathbb{R}^{h\times w\times D_{1}}\) is the feature map of the spectral modality, \(\phi\) denotes the learnable parameters of the spectral function, and \(D_{1}\) is the number of output channels of the spectral function. To extract information from the spatial modality, we concatenate the bicubically interpolated LR-HSI \(\mathcal{X}^{U}\in\mathbb{R}^{H\times W\times S}\) with the HR-MSI \(\mathcal{Y}\in\mathbb{R}^{H\times W\times s}\) and input the result into the spatial function \(\text{F}_{\psi}\). Specifically, this process can be expressed as:
\[\mathcal{S}_{pa}=\text{F}_{\psi}(\text{Cat}(\mathcal{X}^{U},\mathcal{Y})), \tag{4}\]
where \(\mathcal{S}_{pa}\in\mathbb{R}^{H\times W\times D_{2}}\) is the feature map of the spatial modality, \(\psi\) denotes the learnable parameters of the spatial function, and \(D_{2}\) is the number of output channels of the spatial function. In addition, \(\text{Cat}(\cdot)\) denotes concatenation along the channel dimension. We view the INF\({}^{3}\) framework as the core of the encoder, which can be formulated as:
\[\mathcal{E}=\text{INF}^{\text{s}}(\mathcal{S}_{pe},\mathcal{S}_{pa},\mathcal{C }). \tag{5}\]
For the decoding process, we apply a two-layer convolution structure to the encoding output \(\mathcal{E}\in\mathbb{R}^{H\times W\times C}\) to generate the decoding result \(\mathcal{D}\in\mathbb{R}^{H\times W\times S}\). The parameters of the decoder are shared by all training patches. In general, a neural network tends to predict frequencies located in the low-frequency region; however, past work has shown that a long skip connection in local implicit representations enriches the high-frequency components in the residuals and stabilizes convergence (Chen et al., 2018). We therefore add the bicubically interpolated LR-HSI \(\mathcal{X}^{U}\) as a long skip connection to ameliorate this problem, so that the final signal takes the form:
\[\tilde{\mathcal{X}}=\text{Decoder}(\mathcal{E})+\mathcal{X}^{U}. \tag{6}\]
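Before moving on, the pixel-center coordinate grid of Eq. (2) can be written compactly as follows; `make_coord` is a hypothetical helper name in the LIIF tradition, not taken from the authors' code.

```python
# Sketch of the normalized coordinate map C(i, j) in Eq. (2).
import torch

def make_coord(H, W):
    """C(i, j) = [-1 + (2i+1)/H, -1 + (2j+1)/W], returned as (H, W, 2)."""
    i = -1 + (2 * torch.arange(H, dtype=torch.float32) + 1) / H
    j = -1 + (2 * torch.arange(W, dtype=torch.float32) + 1) / W
    return torch.stack(torch.meshgrid(i, j, indexing="ij"), dim=-1)

coords = make_coord(4, 4)
print(coords[0, 0], coords[-1, -1])   # [-0.75, -0.75] and [0.75, 0.75]
```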
### Implicit Neural Representation
In this section, we introduce implicit neural representation (INR) from the perspective of interpolation. Given a low-resolution image \(\mathbf{x}\in\mathbb{R}^{h\times w\times 3}\) and the corresponding high-resolution (HR) interpolated image \(\hat{\mathbf{x}}\in\mathbb{R}^{H\times W\times 3}\) as an example, the INR up-sampling process at position \(C_{q}\) can be expressed as:
\[\hat{\mathbf{x}}(C_{q})=\sum_{i\in N_{q}}w_{q,i}\,v_{q,i}, \tag{7}\]
where \(C_{q}\in\mathbb{R}^{2}\) is the normalized coordinate of the query pixel in the HR domain, \(N_{q}\) is the set of the four neighboring pixels of \(C_{q}\) in the LR domain, \(w_{q,i}\in\mathbb{R}\) is the interpolation weight of \(v_{q,i}\in\mathbb{R}^{1\times 1\times 3}\), and \(v_{q,i}\) is the feature vector of \(\mathbf{x}\) at \(C_{i}\). The interpolation weights are usually normalized so that \(\sum_{i\in N_{q}}w_{q,i}=1\). Previous work usually sets \(N_{q}\) to the four pixel centers nearest to \(C_{q}\) in the LR domain. The calculation of the interpolation weights varies from article to article; the simplest formulation, the area-weight interpolation used by LIIF (Beng et al., 2015), is as follows:
\[w_{q,i}=\frac{A_{i}}{A}, \tag{8}\]
where \(A_{i}\) is the partial area diagonally opposite to the \(i\)-th corner pixel and \(A=\sum_{i\in N_{q}}A_{i}\) is the total area serving as the denominator. In detail, LIIF fuses the LR pixel information with the HR relative coordinate information through an MLP to generate the interpolation value \(v_{q,i}\), which takes the following form:
\[v_{q,i}=\text{MLP}_{\Theta}(x(C_{i}),C_{q}-C_{i}), \tag{9}\]
where \(\text{MLP}_{\Theta}(\cdot)\) is an MLP layer with learnable parameters \(\Theta\) that takes a local feature vector \(x(C_{i})\) in the LR domain and a relative coordinate \(C_{q}-C_{i}\) as inputs. From the above equations, the interpolated vector can be represented by a set of local feature vectors in the LR domain, which store the low-resolution information of the local region. In general, INR-based methods implement up-sampling by querying \(x(C_{i})\) with the relative query coordinate \(C_{q}-C_{i}\) in the arbitrary-scale super-resolution task. A small sketch of this nearest-four interpolation follows below.
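A concrete, simplified version of this nearest-four area-weight lookup (Eqs. (7)-(8)) is sketched below; boundary handling and the exact coordinate convention are assumptions for illustration.

```python
# Sketch of area-weight interpolation over the four nearest LR neighbors.
import torch

def area_interp(feat, cq):
    """feat: LR feature map (C, h, w); cq: normalized query coord (2,)."""
    C, h, w = feat.shape
    y = (cq[0] + 1) * h / 2 - 0.5        # continuous LR pixel index
    x = (cq[1] + 1) * w / 2 - 0.5
    y0, x0 = int(torch.floor(y)), int(torch.floor(x))
    out = torch.zeros(C)
    for dy in (0, 1):
        for dx in (0, 1):
            yi = min(max(y0 + dy, 0), h - 1)   # clamp at the borders
            xi = min(max(x0 + dx, 0), w - 1)
            # area diagonally opposite the corner = bilinear weight
            wgt = abs((1 - dy) - (y - y0)) * abs((1 - dx) - (x - x0))
            out += wgt * feat[:, yi, xi]
    return out

v = area_interp(torch.randn(3, 8, 8), torch.tensor([0.1, -0.2]))
```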
### Implicit Neural Feature Fusion Function
The objective of the MHIF task is to fuse the different modal inputs, LR-HSI and HR-MSI, to generate an HR-HSI with high spectral and spatial resolution. Previous fusion techniques usually construct two separate CNN branches for the LR-HSI and HR-MSI (Wang et al., 2017) and then extract sets of CNN features (Beng et al., 2015; Wang et al., 2017; Wang et al., 2018). However, CNN-based fusion methods depend heavily on stacking convolution structures and lack interpretability. Inspired by recent developments in INR (Beng et al., 2015; Wang et al., 2018), we propose a multimodal and multiscale fusion function based on the INR framework and use a parameter-free weight-generation method to facilitate the mining of high-frequency information during fusion. In summary, we create the Implicit Neural Feature Fusion Function (INF\({}^{3}\)) to guide the fusion process. Unlike the LIIF (Beng et al., 2015) representation, which directly generates the predicted signal, our INF\({}^{3}\) is designed to generate the fused feature map \(\mathcal{E}\in\mathbb{R}^{H\times W\times C}\), which is then passed to a decoder structure to produce our final output \(\tilde{\mathcal{X}}\). Specifically, the fused feature map \(\mathcal{E}\) at position \(C_{q}\) can be represented as follows:
\[\mathcal{E}_{q}=\sum_{i\in N_{q}}w_{q,i}\mathcal{F}_{q,i}, \tag{10}\]
where \(N_{q}\) indicates the set of the four nearest query coordinates around \(C_{q}\) in the normalized HR domain, and \(w_{q,i}\) and \(\mathcal{F}_{q,i}\) are the weights and the multimodal fusion information of query coordinate \(C_{q}\) at position \(C_{i}\), respectively.
Figure 2. The overall architecture of the proposed INFN, which consists of two segments: an encoder and a decoder. Specifically, we input the three modalities, LR-HSI \(\mathcal{X}\), HR-MSI \(\mathcal{Y}\), and coordinates \(\mathcal{C}\), into the encoder, then feed the encoder result to the decoder and add the up-sampled LR-HSI \(\mathcal{X}^{U}\) to obtain the final output \(\tilde{\mathcal{X}}\). The INR-CS is described in detail in Sec. 3.2 and Sec. 3.3.
weights and the multimodal fusion information of query coordinate \(C_{q}\) at position \(C_{i}\), respectively. Typically, we regard \(\mathcal{F}_{q,i}\in\mathbb{R}^{1\times 1\times C}\) as the fused feature vector at position \(C_{i}\) when querying coordinate \(C_{q}\). In the following, we introduce how to generate \(\mathcal{F}_{q,i}\) and \(w_{q,i}\).
Figure 4. The testing images from the CAVE dataset: (a) _balloons_, (b) _cd_, (c) _chart and stuffed toy_, (d) _clay_, (e) _fake and real beers_, (f) _fake and real lemon slices_, (g) _fake and real tomatoes_, (h) _features_, (i) _flowers_, (j) _hairs_, and (k) _jelly beans_. An RGB color representation is used to depict the images.
**Dual high-frequency fusion:** We observe that information at different resolutions plays an important role in MHIF tasks. To this end, we design a structure, _i.e._, dual high-frequency fusion (DHFF), which combines high-frequency spatial information at different resolutions. Firstly, we concatenate the LR-domain spatial information \(\mathcal{S}^{D}_{pa}\in\mathbb{R}^{h\times w\times D_{2}}\) and spectral information \(\mathcal{S}_{pe}\in\mathbb{R}^{h\times w\times D_{1}}\) vectors at position \(C_{i}\), which can be formulated as follows:
\[\mathcal{F}^{1}_{q,i}=\text{Concat}(\mathcal{S}_{pe}(C_{i}),\mathcal{S}^{D}_{ pa}(C_{i})), \tag{11}\]
where \(\mathcal{F}^{1}_{q,i}\in\mathbb{R}^{1\times 1\times(D_{1}+D_{2})}\) is the fusion of spectral information and spatial information at the same resolution. Specifically, we generate the LR-domain high-frequency spatial information \(\mathcal{S}^{D}_{pa}\) by the following formula:
\[\mathcal{S}^{D}_{pa}=\text{Mean}(\mathcal{S}_{pa}). \tag{12}\]
Given an up-sampling ratio \(r\), we apply a mean operation over each \(r\times r\) region of \(\mathcal{S}_{pa}\in\mathbb{R}^{H\times W\times D_{2}}\) to obtain \(\mathcal{S}^{D}_{pa}\in\mathbb{R}^{h\times w\times D_{2}}\). This design incorporates the LR-domain high-frequency information to better serve the MLP layer. In Sec. 4.1, we design an ablation study to verify the effectiveness of this design. Secondly, we combine the HR-domain information \(\mathcal{S}_{pa}\in\mathbb{R}^{H\times W\times D_{2}}\) at position \(C_{q}\) with the LR-domain fusion information \(\mathcal{F}^{1}_{q,i}\). The process above can be expressed as:
\[\mathcal{F}^{2}_{q,i}=\text{Concat}(\mathcal{F}^{1}_{q,i},\mathcal{S}_{pa}(C_{ q})), \tag{13}\]
where \(\mathcal{F}^{2}_{q,i}\in\mathbb{R}^{1\times 1\times(D_{1}+D_{2}+D_{2})}\) serves as the result of DHFF. In general, our DHFF naturally combines feature vectors of different modalities and different scales with the aim of making the MLP acquire both spatial and spectral information. Similar to the previous INR-based work, we obtain the coordinate modal information by adding the relative positions of \(C_{q}\) and \(C_{i}\) to the fusion process, which can be represented as:
\[\mathcal{F}^{3}_{q,i}=\text{Concat}(\mathcal{F}^{2}_{q,i},C_{q}-C_{i}), \tag{14}\]
where \(\mathcal{F}^{3}_{q,i}\in\mathbb{R}^{1\times 1\times(D_{1}+D_{2}+D_{2}+2)}\) is the result of appending the relative interpolation distance to \(\mathcal{F}^{2}_{q,i}\). Finally, we utilize an MLP layer to learn the information in \(\mathcal{F}^{3}_{q,i}\) and get the following expression:
\[\mathcal{F}_{q,i}=\text{MLP}_{\Theta}(\mathcal{F}^{3}_{q,i}), \tag{15}\]
Figure 3. The first and third rows show the results using the pseudo-color representation on “_chart and stuffed toy_” and “_feature_”, respectively, from the CAVE dataset. Some close-ups are depicted in the blue rectangles. The second and fourth rows show the residuals between the GT and the fused products. (a) GT, (b) Ours, (c) DHIF [11], (d) Fusformer [9], (e) MoG-DCN [6], (f) HSRNet [10], (g) SSRNet [38], (h) DBIN [32] and (i) ResTFNet [16].
where \(\mathcal{F}_{q,i}\in\mathbb{R}^{1\times 1\times C}\) is the multimodal fusion information at position \(C_{i}\) when querying coordinate \(C_{q}\), \(\text{MLP}_{\Theta}(\cdot)\) is a fully connected layer, and \(\Theta\) serves as its learnable parameters.
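As a sanity check of Eqs. (11)-(15), here is a minimal sketch of the DHFF pipeline for a single (query, neighbor) pair; the channel sizes \(D_{1}=31\), \(D_{2}=32\), \(C=64\), the ratio \(r=4\), and the function name `dhff_fuse` are assumptions for illustration. Eq. (12) is realized with average pooling over \(r\times r\) regions.

```python
# A hedged sketch of DHFF, Eqs. (11)-(15); channel sizes are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

D1, D2, C_OUT, r = 31, 32, 64, 4
mlp = nn.Linear(D1 + 2 * D2 + 2, C_OUT)                    # Eq. (15)

def dhff_fuse(S_pe, S_pa, ci, cq, rel):
    """S_pe: (D1, h, w) spectral features; S_pa: (D2, H, W) HR spatial features.
    ci: LR index of C_i; cq: HR index of C_q; rel: (2,) coordinate C_q - C_i."""
    S_pa_D = F.avg_pool2d(S_pa.unsqueeze(0), r).squeeze(0)  # Eq. (12)
    f1 = torch.cat([S_pe[:, ci[0], ci[1]],
                    S_pa_D[:, ci[0], ci[1]]])               # Eq. (11)
    f2 = torch.cat([f1, S_pa[:, cq[0], cq[1]]])             # Eq. (13)
    f3 = torch.cat([f2, rel])                               # Eq. (14)
    return mlp(f3)                                          # F_{q,i}

f_qi = dhff_fuse(torch.randn(D1, 16, 16), torch.randn(D2, 64, 64),
                 (5, 7), (21, 30), torch.tensor([0.01, -0.02]))
```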
**Cosine similarity:** The proposed INR with cosine similarity (INR-CS) method generates weights based on cosine similarity. In Eq. (10), \(w_{q,i}\in\mathbb{R}\) is the weight at position \(C_{i}\) when querying the coordinate \(C_{q}\). Some previous work treated the generation of this weight simply as a solution to the interpolation problem, using an area-based method (He et al., 2017), which ignores local texture and information about the data itself. Other work proposes to learn the weights through network parameters, _i.e._, learning similarity weights via graph attention mechanisms (Kipf and Welling, 2017), which lacks interpretability. In order to utilize the information in \(\mathcal{F}_{q,i}\) while preserving interpretability, we propose a parameter-free approach named INR-CS as follows:
\[w_{q,i}=\frac{\exp\big(\|\mathcal{F}_{q,\widehat{q}}^{1}\|\cdot\|\mathcal{F}_{q,i}^{1}\|\cdot\langle\mathcal{F}_{q,\widehat{q}}^{1},\mathcal{F}_{q,i}^{1}\rangle\big)}{\sum_{j\in N_{q}}\exp\big(\|\mathcal{F}_{q,\widehat{q}}^{1}\|\cdot\|\mathcal{F}_{q,j}^{1}\|\cdot\langle\mathcal{F}_{q,\widehat{q}}^{1},\mathcal{F}_{q,j}^{1}\rangle\big)}, \tag{16}\]
where \(\mathcal{F}_{q,i}^{1}\) is given by Eq. (11) and \(\widehat{q}\) is the closest point to \(C_{q}\) in the LR domain. In detail, \(\mathcal{F}_{q,\widehat{q}}^{1}=\text{Concat}(\mathcal{S}_{pe}(\widehat{q}),\mathcal{S}_{pa}^{D}(\widehat{q}))\), and \(\langle\cdot,\cdot\rangle\) represents the cosine similarity between vectors. The similarities between feature vectors are thus normalized by the softmax function.
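Since \(\|a\|\cdot\|b\|\cdot\langle a,b\rangle\) reduces to the plain inner product of \(a\) and \(b\), the weight generation of Eq. (16) can be sketched in a few lines; the shapes and the name `inr_cs_weights` are illustrative assumptions.

```python
# A parameter-free weight sketch for Eq. (16); shapes are assumptions.
import torch

def inr_cs_weights(f_anchor, f_neigh):
    """f_anchor: (D,) feature at the LR point closest to C_q;
    f_neigh: (4, D) features at the four neighbors in N_q."""
    cos = torch.cosine_similarity(f_anchor.unsqueeze(0), f_neigh, dim=-1)
    # norm * norm * cosine == dot product of the two feature vectors
    logits = f_anchor.norm() * f_neigh.norm(dim=-1) * cos
    return torch.softmax(logits, dim=0)  # sums to one over the four neighbors

w = inr_cs_weights(torch.randn(63), torch.randn(4, 63))
```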
## 4. Experiment
**Datasets:** Following previous studies, we conduct experiments to evaluate our model on the CAVE\({}^{1}\) and Harvard\({}^{2}\) datasets. In detail, the CAVE dataset contains 32 HSIs with 31 spectral bands ranging in wavelength from 400 nm to 700 nm in increments of 10 nm. We randomly select 20 images for training, and the remaining 11 images make up the testing dataset. In addition, the Harvard dataset includes 77 HSIs of both indoor and outdoor scenes, with each HSI having a size of \(1392\times 1040\times 31\) and spanning the 420 nm to 720 nm spectral range. We choose 20 of them and crop the upper-left portion (\(1000\times 1000\)), with 10 images utilized for testing and the remaining 10 used for training.
Footnote 1: [https://www.cs.columbia.edu/CAVE/databases/multispectral/](https://www.cs.columbia.edu/CAVE/databases/multispectral/)
**Data Simulation:** We input LR-HSI and HR-MSI pairs \((\mathcal{X},\mathcal{Y})\) into the end-to-end network and use the HR-HSI \(\widehat{\mathcal{X}}\) for training. Since the ground-truth (GT) \(\widehat{\mathcal{X}}\) is not available in real life, a simulation process is required. For the CAVE dataset, we crop the 20 selected training images to generate 3920 overlapping patches of dimension \(64\times 64\times 31\), which serve as the GT \(\widehat{\mathcal{X}}\). To generate the corresponding LR-HSIs, we use a \(3\times 3\) Gaussian kernel with a standard deviation of 0.5 to blur the original HR-HSIs and downsample the blurred patches with a scaling factor of 4. Additionally, we utilize the common spectral response function of the Nikon D700\({}^{3}\) camera and the HR-HSIs to create the HR-MSI patches. Thus, we generate 3920 LR-HSIs of size \(16\times 16\times 31\) and HR-MSIs of size \(64\times 64\times 3\) to form the input pairs \((\mathcal{X},\mathcal{Y})\). The input pairs and associated GTs are then divided at random into training data (80%) and testing data (20%). The same procedure is applied to the Harvard dataset to create the input LR-HSI and HR-MSI products as well as the GTs.
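For reference, a compact sketch of this simulation pipeline on a single GT patch could read as follows; the spectral response matrix `srf` is a random placeholder rather than the actual Nikon D700 curve, and `simulate_pair` is an illustrative helper name.

```python
# A hedged sketch of the degradation pipeline used to build training pairs.
import torch
import torch.nn.functional as F

def simulate_pair(gt, srf, sigma=0.5, ratio=4):
    """gt: (31, 64, 64) GT patch; srf: (3, 31) spectral response matrix."""
    bands = gt.shape[0]
    g = torch.arange(-1.0, 2.0)                    # 3x3 Gaussian kernel, std 0.5
    k = torch.exp(-g ** 2 / (2 * sigma ** 2))
    k2d = k[:, None] * k[None, :]
    k2d = k2d / k2d.sum()
    kernel = k2d[None, None].repeat(bands, 1, 1, 1)
    blurred = F.conv2d(gt.unsqueeze(0), kernel, padding=1, groups=bands)
    lr_hsi = blurred[0, :, ::ratio, ::ratio]       # (31, 16, 16) LR-HSI
    hr_msi = torch.einsum('mc,chw->mhw', srf, gt)  # (3, 64, 64) HR-MSI
    return lr_hsi, hr_msi

lr, msi = simulate_pair(torch.rand(31, 64, 64), torch.rand(3, 31))
```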
**Benchmark:** To verify the superiority of the proposed INF\({}^{3}\), we compare it with various state-of-the-art methods, including MTF-GLP-HS (He et al., 2017), CSTF-FUS (Han et al., 2017), LTTR (Chen et al., 2017), LTMR (Chen et al., 2017), IR-TenSR (Chen et al., 2017), DBIN (Chen et al., 2017), SSRNet (Kipf and Welling, 2017), ResTFNet (Kipf and Welling, 2017), HSRNet (Kipf and Welling, 2017), MoG-DCN (Chen et al., 2017), Fusformer (Chen et al., 2017), and the DHIF (Chen et al., 2017) network. Specifically, the up-sampled LR-HSI in Fig. 2 is the bicubic-interpolated result, which is added to the experiments as a baseline. All deep learning approaches are trained with the same input pairs for a fair comparison, and the related hyperparameters are selected to be consistent with the original papers.
Footnote 3: [http://vision.ac.uk/~cn/wiki_d700_study.htm](http://vision.ac.uk/~cn/wiki_d700_study.htm)
**Implementation Details:** The proposed network is implemented in PyTorch 1.11.0 and Python 3.8.0, using the Adam optimizer (Kingma et al., 2014) with a learning rate of 0.0001 to minimize the sum-of-absolute-differences loss \(\mathcal{L}_{1}\) over 1000 epochs, on a Linux operating system with an NVIDIA RTX 3080 GPU (12 GB).
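A skeletal training loop matching these settings might look as follows; the network is a stand-in (the real encoder-decoder follows Fig. 2), and the dummy loader replaces the simulated CAVE patches.

```python
# A minimal training-loop sketch with the stated optimizer/loss settings.
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(nn.Conv2d(31 + 3, 64, 3, padding=1), nn.ReLU(),
                      nn.Conv2d(64, 31, 3, padding=1))     # stand-in network
opt = torch.optim.Adam(model.parameters(), lr=1e-4)        # lr as in the paper
loader = [(torch.rand(2, 31, 16, 16), torch.rand(2, 3, 64, 64),
           torch.rand(2, 31, 64, 64))]                     # dummy (X, Y, GT)

for epoch in range(2):                                     # paper: 1000 epochs
    for x, y, gt in loader:
        x_up = F.interpolate(x, scale_factor=4, mode='bicubic',
                             align_corners=False)          # up-sampled LR-HSI
        pred = x_up + model(torch.cat([x_up, y], dim=1))   # residual, cf. Fig. 2
        loss = F.l1_loss(pred, gt, reduction='sum')        # L1 objective
        opt.zero_grad(); loss.backward(); opt.step()
```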
**Results on CAVE Dataset:** In this section, we evaluate the effectiveness of our proposed INF\({}^{3}\) method on the CAVE dataset (scaling factor of 4) and compare it with existing MHIF methods. As shown in the left part of Tab. 1, our INF\({}^{3}\) outperforms other state-of-the-art deep learning models by a large margin. For instance, our INF\({}^{3}\) improves PSNR by 1.31 dB, 2.40 dB, 0.75 dB, and 2.00 dB compared with DHIF (Chen et al., 2017), Fusformer (Chen et al., 2017), MoG-DCN (Chen et al., 2017), and HSRNet (Kipf and Welling, 2017), respectively. The proposed INF\({}^{3}\) also achieves significant improvements in two QIs, i.e., SAM and ERGAS. In particular, our INF\({}^{3}\) improves ERGAS by 11.71% and 18.33% compared with the second- and third-best models. In addition, our INF\({}^{3}\) outperforms MoG-DCN (Chen et al., 2017) and DHIF (Chen et al., 2017) on SAM while having only two-fifths and one-seventh of their parameters, respectively. Moreover, to aid visual verification, we provide pseudo-color depictions of the fused products and some error maps in Fig. 3. It can be observed that the results generated by our INF\({}^{3}\) are very close to the ground truth and maintain better reconstruction quality with more accurate textures. Regarding the absolute error maps in Fig. 3, the closer the reconstruction is to the original image, the bluer the error map. It is evident that INF\({}^{3}\) restores texture details better than the other compared techniques, which is consistent with the analysis in Tab. 1.
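For completeness, the quality indices used throughout (PSNR, SAM, ERGAS) can be computed as below; exact conventions (per-band vs. global PSNR, the ERGAS constant) vary slightly across papers, so this is an illustrative NumPy sketch rather than the paper's exact evaluation code.

```python
# Illustrative sketches of the quality indices; conventions are assumptions.
import numpy as np

def psnr(gt, pred, peak=1.0):
    return 10 * np.log10(peak ** 2 / ((gt - pred) ** 2).mean())

def sam(gt, pred):
    """Mean spectral angle (degrees); inputs are (H, W, bands)."""
    num = (gt * pred).sum(-1)
    den = np.linalg.norm(gt, axis=-1) * np.linalg.norm(pred, axis=-1) + 1e-12
    return np.degrees(np.arccos(np.clip(num / den, -1.0, 1.0))).mean()

def ergas(gt, pred, ratio=4):
    rmse = np.sqrt(((gt - pred) ** 2).mean(axis=(0, 1)))   # per-band RMSE
    mean = gt.mean(axis=(0, 1)) + 1e-12
    return 100.0 / ratio * np.sqrt(((rmse / mean) ** 2).mean())

gt, pred = np.random.rand(64, 64, 31), np.random.rand(64, 64, 31)
print(psnr(gt, pred), sam(gt, pred), ergas(gt, pred))
```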
**Results on Harvard Dataset:** Fig. 6 displays the 10 test images from the Harvard dataset. Moreover, the right-hand portion of Tab. 1 presents the comparison results of five indices obtained by all compared methods on another hyperspectral image dataset, namely Harvard, for a scaling factor of 4. It is evident that the average PSNR value of our proposed INF\({}^{3}\) is higher by 0.17 dB and 0.51 dB compared to the second-best and third-best methods, respectively. Although our model is slightly inferior to the second-best MoG-DCN (Chen et al., 2017) in terms of SAM, our model's parameters are only two-fifths of MoG-DCN's. Moreover, our model achieves the best results on ERGAS and SSIM, indicating the best structural recovery. Furthermore, Fig. 5 illustrates that our proposed INF\({}^{3}\) is capable of reconstructing the detailed structure of the original image. Notably, our method restores the finest details of the bike, the metallic sheen, and the texture of the backpack. The error maps also demonstrate that our proposed INF\({}^{3}\) achieves the best fidelity in terms of texture details. Additionally, the fact that our residuals are closer to blue indicates that our recovery is better than that of the other methods.
### Ablation Study
In this section, we discuss in depth the effectiveness of the dual high-frequency fusion (DHFF), which combines the LR and HR domains in INF\({}^{3}\). Our primary concern is whether injecting relative location information can aid the network in image recovery, so we conduct an ablation study to assess this. Furthermore, we include the proposed weight generation method in the ablation study. For brevity and without loss of generality, the analysis is conducted on the CAVE dataset.
#### 4.1.1. Dual high-frequency fusion
To evaluate the effectiveness of dual high-frequency information injection, we conducted several experiments. As shown in Tab. 2, removing the HR-domain high-frequency information injection results in a significant decline in the performance of INF\({}^{3}\). This indicates that high-resolution, high-frequency information provides more
\begin{table}
\begin{tabular}{c c c c c c c c c c} \hline \hline \multirow{2}{*}{Methods} & \multicolumn{5}{c}{CAVE} & \multicolumn{5}{c}{Harvard} \\ \cline{2-10} & PSNR & SAM & ERGAS & SSIM & \#params & PSNR & SAM & ERGAS & SSIM & \#params \\ \hline Bicubic & 34.33\(\pm\)3.88 & 4.45\(\pm\)1.62 & 7.21\(\pm\)4.90 & 0.944\(\pm\)0.0291 & \(-\) & 38.71\(\pm\)4.33 & 2.53\(\pm\)0.67 & 4.45\(\pm\)41.81 & 0.948\(\pm\)0.0268 & \(-\) \\ MTF-GLP-HS (Sang et al., 2017) & 37.69\(\pm\)3.85 & 5.33\(\pm\)1.91 & 4.57\(\pm\)2.66 & 0.973\(\pm\)0.0158 & \(-\) & 33.81\(\pm\)3.50 & 6.25\(\pm\)2.42 & 3.47\(\pm\)1.82 & 0.952\(\pm\)0.0321 & \(-\) \\ CSTF-FUS (Sang et al., 2017) & 34.46\(\pm\)4.28 & 14.37\(\pm\)5.30 & 8.29\(\pm\)5.29 & 0.866\(\pm\)0.0747 & \(-\) & 39.13\(\pm\)3.50 & 6.91\(\pm\)2.66 & 4.64\(\pm\)1.80 & 0.913\(\pm\)0.0487 & \(-\) \\ LTTR (Beng et al., 2017) & 35.85\(\pm\)3.49 & 6.99\(\pm\)2.55 & 5.99\(\pm\)2.92 & 0.956\(\pm\)0.0288 & \(-\) & 37.91\(\pm\)3.58 & 5.35\(\pm\)1.94 & 2.44\(\pm\)1.06 & 0.972\(\pm\)0.0183 & \(-\) \\ LTMR (Beng et al., 2017) & 36.54\(\pm\)3.30 & 6.71\(\pm\)2.19 & 5.39\(\pm\)2.53 & 0.963\(\pm\)0.0208 & \(-\) & 38.41\(\pm\)3.58 & 5.05\(\pm\)1.70 & 2.24\(\pm\)0.97 & 0.970\(\pm\)0.0166 & \(-\) \\ IR-TenSR (Sang et al., 2017) & 35.61\(\pm\)3.45 & 12.30\(\pm\)4.68 & 5.90\(\pm\)3.05 & 0.945\(\pm\)0.0267 & \(-\) & 40.47\(\pm\)3.04 & 4.36\(\pm\)1.52 & 5.57\(\pm\)1.57 & 0.962\(\pm\)0.0140 & \(-\) \\ \hline DBIN (Sang et al., 2017) & 50.83\(\pm\)4.29 & 2.21\(\pm\)0.63 & 1.24\(\pm\)1.06 & 0.996\(\pm\)0.0026 & 0.469M & 47.88\(\pm\)3.87 & 2.31\(\pm\)0.46 & 1.95\(\pm\)0.81 & 0.988\(\pm\)0.0066 & 0.469M \\ RestFNet (Sang et al., 2017) & 45.58\(\pm\)5.47 & 2.82\(\pm\)0.70 & 2.36\(\pm\)2.59 & 0.993\(\pm\)0.0056 & 2.387M & 45.93\(\pm\)4.35 & 2.61\(\pm\)0.69 & 2.56\(\pm\)1.32 & 0.985\(\pm\)0.0082 & 2.387M \\ SSRNet (Sang et al., 2017) & 48.62\(\pm\)3.92 & 2.54\(\pm\)0.84 & 1.63\(\pm\)1.21 & 0.995\(\pm\)0.0023 & 0.027M & 47.95\(\pm\)3.37 & 2.31\(\pm\)0.60 & 2.30\(\pm\)1.42 & 0.987\(\pm\)0.0070 & **0.027M** \\ HSRNet (Sang et al., 2017) & 50.38\(\pm\)3.38 & 2.23\(\pm\)0.66 & 1.20\(\pm\)0.75 & 0.996\(\pm\)0.0014 & 0.633M & **48.29\(\pm\)3.03** & 2.26\(\pm\)0.56 & **1.87\(\pm\)0.81** & **0.988\(\pm\)0.0064** & 0.633M \\ MoG-DCN (Beng et al., 2017) & **51.63\(\pm\)4.10** & 2.03\(\pm\)0.62 & **1.11\(\pm\)0.82** & 0.997\(\pm\)0.0018 & 6.840M & 47.89\(\pm\)4.09 & 2.11\(\pm\)0.52 & 1.89\(\pm\)0.82 & 0.988\(\pm\)0.0073 & 6.840M \\ Fusformer (Fus et al., 2017) & 49.98\(\pm\)8.10 & 2.20\(\pm\)0.85 & 2.50\(\pm\)5.21 & 0.994\(\pm\)0.0111 & 0.504M & 47.87\(\pm\)5.13 & 2.84\(\pm\)2.07 & 2.04\(\pm\)0.99 & 0.986\(\pm\)0.0101 & **0.467M** \\ DHIF (Sang et al., 2017) & 51.07\(\pm\)4.17 & **2.01\(\pm\)0.63** & 1.22\(\pm\)0.97 & **0.997\(\pm\)0.0016** & 22.462M & 47.68\(\pm\)3.85 & 2.32\(\pm\)0.53 & 1.95\(\pm\)0.92 & 0.988\(\pm\)0.0074 & 22.462M \\ INF3 (ours) & **52.36\(\pm\)3.93** & **1.99\(\pm\)0.60** & **0.99\(\pm\)0.73** & **0.997\(\pm\)0.0013** & 2.902 M & **48.46\(\pm\)3.43** & **2.14\(\pm\)0.52** & **1.83\(\pm\)0.76** & **0.989\(\pm\)0.0064** & 2.902 M \\ \hline
**Ideal value** & \(\infty\) & **0** & **0** & **1** & - & \(\infty\) & **0** & **0** & **1** & - \\ \hline \hline \end{tabular}
\end{table}
Table 1. Average quantitative comparisons on 11 CAVE examples and 10 Harvard examples simulating a scaling factor of 4. The best values are highlighted in red, and the second best values are signed in blue. M refers to millions.
Figure 5. The first and third rows show the results using the pseudo-color representation from the Harvard dataset. Note that we select three bands (31-20-10) from the HSIs as the red, green and blue channels. We zoomed in on the blue rectangles to show more detail. The second and fourth rows show the residuals between the GT and the fused products. (a) GT, (b) Ours, (c) DHIF (Sang et al., 2017), (d) Fusformer (Fus et al., 2017), (e) MoG-DCN (Beng et al., 2017), (f) HSRNet (Sang et al., 2017), (g) SSRNet (Sang et al., 2017), (h) DBIN (Sang et al., 2017) and (i) ResTFNet (Sang et al., 2017).
detailed information during the fusion process of INF\({}^{3}\). Moreover, the performance of INF\({}^{3}\) decreases slightly when the LR-domain high-frequency information injection is removed, suggesting that LR-domain high-frequency information plays a supportive role in the fusion process. Utilizing information at different resolutions yields the best performance for our INF\({}^{3}\). The importance of information at various resolutions for MHIF tasks inspired this structure, and the experiments support the rationality of the design.
#### 4.1.2. Relative coordinate
In this section, we analyze the effectiveness of the relative coordinate \(C_{q}-C_{i}\) in INF\({}^{3}\). The relative coordinate and the pixels belong to different modalities: the former represents the interpolation distance, while the latter represents the interpolated values. We are curious whether information from these different modalities can aid the MLP in understanding the fusion and interpolation processes in INF\({}^{3}\). To address this, we conduct an ablation experiment: we remove the relative coordinate from INF\({}^{3}\) while keeping the rest unchanged. Tab. 3 presents the results, showing that the inclusion of the relative coordinate improves the network's understanding of the MHIF task and has a positive impact on its realization.
#### 4.1.3. Weight generation method
To assess the superiority of our cosine-similarity method, we compare it with the area-based and network-based weight generation methods on the CAVE dataset, with INF\({}^{3}\) serving as the backbone. As shown in Tab. 4, our approach significantly outperforms the other methods on certain images, such as '_chart and stuffed toy_'. To further illustrate possible spectral distortions in the fused products, we visualize the spectral vectors. Fig. 7 shows the spectral vectors of the 31 bands at position (276, 260) in the '_chart and stuffed toy_' image. For clarity, we zoom in on the spectral vectors of the 18th–22nd bands, as indicated by the rectangular boxes in Fig. 7. In both figures, it is evident that the spectral vectors of the proposed method (the red lines) are the closest to the ground truth (GT).
#### 4.1.4. Upsampling methods
In this section, we present experiments that compare INF\({}^{3}\) with other upsampling methods. Intuitively, INF\({}^{3}\) can be regarded as an interpolation algorithm. Unlike traditional interpolation algorithms, it provides each interpolated point with
\begin{table}
\begin{tabular}{l|c c c c} \hline \hline Methods & PSNR & SAM & ERGAS & SSIM \\ \hline Area-based & 54.741 & 1.294 & 0.335 & 0.9978 \\ Net-based & 54.392 & 1.281 & 0.350 & 0.9977 \\ Ours & **54.813** & **1.283** & **0.331** & **0.9978** \\ \hline \hline \end{tabular}
\end{table}
Table 4. The four QIs and the corresponding parameters on the image ‘balloons’ simulating a scaling factor of 4. Net-based and Area-based represent the method of generating weights based on network and area in JIIF [28] and LIIF [2], respectively.
\begin{table}
\begin{tabular}{l|c c c c} \hline \hline LR & HR & PSNR & SAM & ERGAS & SSIM \\ \hline ✓ & ✗ & 42.55\(\pm\)2.58 & 2.91\(\pm\)0.93 & 2.82\(\pm\)1.74 & 0.990\(\pm\)0.0020 \\ ✗ & ✓ & 52.17\(\pm\)4.02 & 2.01\(\pm\)0.61 & 1.02\(\pm\)0.77 & 0.997\(\pm\)0.0014 \\ ✓ & ✓ & **52.36\(\pm\)3.93** & **1.99\(\pm\)0.60** & **0.99\(\pm\)0.73** & **0.997\(\pm\)0.0013** \\ \hline \hline \end{tabular}
\end{table}
Table 2. The average four QIs and the corresponding parameters on the CAVE dataset simulating a scaling factor of 4. LR and HR mean low-resolution and high-resolution domain high-frequency information injection, respectively.
\begin{table}
\begin{tabular}{l|c c c c} \hline \hline \(\delta_{c}\) & PSNR & SAM & ERGAS & SSIM \\ \hline ✗ & 52.22\(\pm\)3.92 & **1.98\(\pm\)0.58** & 1.00\(\pm\)0.74 & 0.997\(\pm\)0.0013 \\ ✓ & **52.36\(\pm\)3.93** & 1.99\(\pm\)0.60 & **0.99\(\pm\)0.73** & **0.997\(\pm\)0.0013** \\ \hline \hline \end{tabular}
\end{table}
Table 3. The average four QIs and the corresponding parameters on the CAVE dataset simulating a scaling factor of 4. \(\delta_{\text{c}}\) means the relative coordinate \(C_{q}-C_{i}\).
Figure 6. The 10 images tested on the Harvard dataset are (a) _bikes_, (b) _sofa1_, (c) _window_, (d) _fence_, (e) _tree_, (f) _sofa2_, (g) _backpack_, (h) _wall_, (i) _door_ and (j) _parcels_.
Figure 7. The blue cross on the top left image ‘chart and stuffed toy’ shows where the spectral vector is located, while the top right picture and the bottom picture represent the spectral vector at position (276, 260). The bottom picture displays the output of the network-based and area-based weight generation methods, denoted by ‘Net’ and ‘Area’, respectively.
additional relative position information via the MLP layer, which incorporates multi-modal information. Specifically, we compare INF\({}^{3}\) with pixel-shuffle (Wang et al., 2019) and the traditional interpolation methods commonly used in convolutional neural networks. As shown in Tab. 5, our INF\({}^{3}\) outperforms the other methods on MHIF tasks with fewer parameters.
## 5. Conclusion
In this paper, we propose the Implicit Neural Feature Fusion Function (INF\({}^{3}\)) and design an Implicit Neural Fusion Network (INFN) based on it for the multispectral and hyperspectral image fusion task. Unlike previous CNN-based approaches, we fuse multi-modal information, including coordinate, spatial, and spectral data, multiple times, and accordingly modify the previous implicit neural representation of upsampling interpolation to make better use of high-frequency information. By training two different encoder branches, the input information is fused in two stages and fed into the INR framework, whose effectiveness in utilizing high-frequency information has also been verified. The INF\({}^{3}\)-based process also provides a generalized paradigm for other multimodal fusion tasks. Experimental results demonstrate that our method achieves state-of-the-art performance on two different datasets. Moving forward, we will continue exploring reliable network-based interpolation fusion methods and stable weight generation techniques.
|
2308.00362 | Near-Field Communications: A Degree-of-Freedom Perspective | Multiple-antenna technologies are advancing towards large-scale aperture
sizes and extremely high frequencies, leading to the emergence of near-field
communications (NFC) in future wireless systems. To this context, we
investigate the degree of freedom (DoF) in near-field multiple-input
multiple-output (MIMO) systems. We consider both spatially discrete (SPD)
antennas and continuous aperture (CAP) antennas. Additionally, we explore three
important DoF-related performance metrics and examine their relationships with
the classic DoF. Numerical results demonstrate the benefits of NFC over
far-field communications (FFC) in terms of providing increased spatial DoFs. We
also identify promising research directions for NFC from a DoF perspective. | Chongjun Ouyang, Yuanwei Liu, Xingqi Zhang, Lajos Hanzo | 2023-08-01T08:08:47Z | http://arxiv.org/abs/2308.00362v2 | # Near-Field Communications: A Degree-of-Freedom Perspective
###### Abstract
Multiple-antenna technologies are advancing towards large-scale aperture sizes and extremely high frequencies, leading to the emergence of near-field communications (NFC) in future wireless systems. In this context, we investigate the degree of freedom (DoF) in near-field multiple-input multiple-output (MIMO) systems. We consider both spatially discrete (SPD) antennas and continuous aperture (CAP) antennas. Additionally, we explore three important DoF-related performance metrics and examine their relationships with the classic DoF. Numerical results demonstrate the benefits of NFC over far-field communications (FFC) in terms of providing increased spatial DoFs. We also identify promising research directions for NFC from a DoF perspective.
## I Introduction
The electromagnetic (EM) radiation field emitted by antennas is divided into two regions: the far-field and the radiation near-field. The Rayleigh distance, determined by the product of the array aperture's square and the carrier frequency, serves as the boundary between these regions [1]. In the far-field region, beyond the Rayleigh distance, EM waves exhibit different propagation characteristics compared to the near-field region within it. Planar waves effectively approximate the far-field EM field, while the near-field EM field requires precise modeling using spherical waves [1].
Limited by the size of antenna arrays and the operating frequency bands, the Rayleigh distance in current cellular systems typically spans only a few meters, making the near-field effects negligible. Thus, existing cellular communications predominantly rely on theories and techniques from far-field communications (FFC). However, with the rapid advances of wireless technology, next-generation wireless communications rely on extremely large-scale antenna arrays and higher frequencies to cater for the ever-increasing thrust for communication services [2]. In these advanced scenarios, near-field communications (NFC) can extend over longer distances, surpassing the conventional proximity range. The deployment of massive antenna arrays and the utilization of high-frequency bands allow NFC to be effective at distances of hundreds of meters, thereby opening up novel opportunities for the development of NFC theories and techniques [1, 2].
In the realm of wireless communications, the degree of freedom (DoF) concept has emerged as a crucial framework for understanding the capabilities and potential of different communication systems [3]. Briefly, the DoF provides insights into the number of independent signal dimensions that can be exploited for conveying information in a wireless channel. While traditional FFC have been extensively studied within this context, the unique physical properties of NFC exhibit distinct characteristics that necessitate a fresh exploration of DoF.
The adoption of a DoF perspective in NFC is motivated by several factors. Firstly, NFC offers increased DoFs, which represents a significant advantage over FFC. By understanding the DoF characteristics of NFC systems, we can unveil the superior data capacity and transmission capabilities of NFC compared to FFC. Secondly, characterizing the DoF in NFC assists in optimizing the system parameters, such as the antenna configurations and transmission strategies, leading to improved overall performance. Thirdly, adopting a DoF perspective facilitates the development of communication protocols and algorithms specifically tailored for NFC environments, resulting in enhanced reliability, coverage, and throughput. Although there are some studies analyzing NFC's DoF [1], this field is still in its infancy.
Hence, we aim for the critical appraisal of NFC and its DoF. Our focus is on point-to-point multiple-input multiple-output (MIMO) channels under line-of-sight (LoS) propagation, as illustrated in Fig. 1. This emphasis arises from the anticipation that future NFC will operate at high frequencies, leading to a prevalence of LoS communication associated with limited multi-path effects. We commence by exploring the DoFs achieved in near-field MIMO by spatially discrete antennas (SPD-MIMO). Subsequently, we extend our analysis to the near-field MIMO supported by continuous aperture antennas (CAP-MIMO). Utilizing numerical simulations, we demonstrate the superiority of NFC over FFC concerning its DoF and establish connections between the DoF and effective DoF (EDoF). Finally, future research ideas are discussed.
## II DoFs Achieved in SPD-MIMO
In practical implementations of NFC, a viable approach is to equip the transceiver with an extensive antenna array comprising a large number of SPD patch antennas. In this section, we will delve into a comprehensive analysis of the achievable DoFs in near-field SPD-MIMO.
### _Calculation of the DoF_
#### II-A1 DoF
In the context of SPD-MIMO, the overall channel response can be represented as a matrix \(\mathbf{H}\) having dimensions of \(N_{\text{r}}\times N_{\text{t}}\), where \(N_{\text{r}}\) denotes the number of receive antennas and \(N_{\text{t}}\) represents the number of transmit antennas. By applying the singular value decomposition (SVD) to this channel matrix, the SPD-MIMO channel can be effectively decomposed into multiple independent single-input single-output (SISO) sub-channels that operate in parallel without mutual interference. Mathematically, the number of positive singular values or the rank of the correlation matrix \(\mathbf{H}\mathbf{H}^{\text{H}}\) corresponds to the number of sub-channels having a non-zero signal-to-noise ratio (SNR). Each of these sub-channels accommodates an independent communication mode within the MIMO channel. The total number of communication modes is referred to as the _spatial DoF_ of the channel, denoted as DoF. On the other hand, for a MIMO Gaussian channel, the capacity growth rate can be shown to be \(\mathsf{DoF}\cdot\log_{2}(\mathsf{SNR})\) at high SNR. Therefore, the DoF is also termed as the _high-SNR slope_ or maximum _multiplexing gain_ (relative to a SISO channel) [1].
Given a channel matrix \(\mathbf{H}\), the spatial DoFs are inherently limited and cannot exceed the minimum value between \(N_{\text{r}}\) and \(N_{\text{t}}\). In a far-field MIMO LoS channel, only a single incident angle is available due to the almost parallel planar-wave propagation. Consequently, the channel matrix is of rank-\(1\), resulting in a very limited DoF, namely \(1\). By contrast, within the near-field region, the spherical waves exhibit different phase-shifts and power levels for each link. This diversity leads to a higher rank for the MIMO channel matrix and subsequently a higher DoF compared to the far-field scenario. Notably, if the SPD antennas are well separated, the achievable DoFs for the near-field MIMO LoS channel can approach the minimum value between \(N_{\text{r}}\) and \(N_{\text{t}}\). _This signifies that spatial multiplexing can be supported even in the absence of a rich scattering environment, which is a significant advantage of NFC._
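This rank advantage is easy to reproduce numerically: the sketch below builds a spherical-wave LoS channel between two parallel ULAs (parameters loosely matching Fig. 3's setup) and counts singular values; the dominance threshold used for \(\mathsf{EDoF}_{1}\) is a heuristic assumption.

```python
# A hedged sketch of a near-field LoS MIMO channel and its DoF/EDoF_1 count.
import numpy as np

wavelength, d = 0.01, 5.0                 # 28 GHz carrier, 5 m link
N, aperture = 64, 1.37                    # ULA size and aperture (meters)
z = np.linspace(-aperture / 2, aperture / 2, N)

# Exact per-element distances give the near-field (spherical-wave) phases.
dist = np.sqrt(d ** 2 + (z[:, None] - z[None, :]) ** 2)   # (N_r, N_t)
H = np.exp(-2j * np.pi * dist / wavelength) / dist

s = np.linalg.svd(H, compute_uv=False)
dof = int(np.sum(s > 1e-9 * s[0]))        # rank: modes with non-zero SNR
edof1 = int(np.sum(s > 0.1 * s[0]))       # dominant modes (heuristic cutoff)
print(dof, edof1)
```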
#### II-A2 \(\mathsf{EDoF}_{1}\)
The aforementioned arguments suggest that employing a high number of antennas constitutes an effective technique of increasing the DoFs in NFC. By reducing the antenna spacing within a fixed aperture size, the number of spatial DoFs can be expanded. It is worth noting that when two antennas are in each other's close proximity, the waves they generate at the receiver antenna array become nearly identical. Consequently, these two antennas become indistinguishable at the receiver. This limitation should be considered as it could restrict the potential increase in channel capacity, when a large number of transceiving antennas are incorporated into a fixed aperture. This limitation has been theoretically demonstrated in [4, 5].
To augment our exposition, we represent the ordered positive singular values of matrix \(\mathbf{H}\) as \(\sigma_{1}\geq\ldots\geq\sigma_{\mathsf{DoF}}\). Miller [4] demonstrated by employing prolate spheroidal wave functions that for small values of \(n\), the \(\sigma_{n}\) values fall off slowly until they reach a critical threshold, beyond which they decay rapidly. This critical threshold is termed as the "_effective degree of freedom (EDoF)_", denoted as \(\mathsf{EDoF}_{1}\). Moreover, this phenomenon becomes more prominent as the number of transceiving antennas increases. These findings indicate that although harnessing more antennas can lead to an increased number of independent sub-channels, only the dominant \(\mathsf{EDoF}_{1}\) ones can be effectively utilized for supporting reliable communications.
Furthermore, for a large number of antennas, Miller [4] concludes that the upper limit of \(\mathsf{EDoF}_{1}\) is proportional to the product of transmitter and receiver areas and it is inversely proportional to the link distance. These findings are derived using the uniform spherical wave (USW) model described in [1, Eqn. (35)]. The USW model is applicable in the near-field region, where the communication distance exceeds the uniform-power distance, exhibiting uniform channel gains and non-linear phase-shifts. However, it is important to note that as the link distance becomes comparable to the transceiver sizes (i.e., NFC within the uniform-power distance), the accuracy of the USW model and the EDoF derived in [4] diminishes. To address this, Dardari introduced a more general formula for \(\mathsf{EDoF}_{1}\) based on 2D sampling theory arguments for the non-uniform spherical wave (NUSW) model of [5]. Although this formula may present tractability challenges, again, it reveals that the upper limit of \(\mathsf{EDoF}_{1}\) is proportional to the product of the transmitter and receiver areas, while it is inversely proportional to the link distance. These improvements enhance our understanding of EDoF in NFC systems.
In summary, the conclusions drawn from [4] and [5] suggest that the number of dominant communication modes and channel capacity can be enhanced in two primary means: increasing the aperture size and reducing the communication distance. Remarkably, these strategies align with the commonly employed techniques for supporting NFC, emphasizing the superior spatial EDoF capabilities of NFC systems.
### _Exploitation of the DoF_
To fully utilize the increased DoFs or EDoFs (i.e., \(\mathsf{EDoF}_{1}\)) offered by near-field SPD-MIMO, it is crucial to apply SVD to the channel matrix \(\mathbf{H}\). This allows for the identification of the right and left singular vectors corresponding to the dominant \(\mathsf{EDoF}_{1}\) singular values. To further optimize the achievable
Fig. 1: Illustration of near-field communications.
channel capacity, the water-filling algorithm can be utilized for judiciously sharing the power among the \(\mathsf{EDoF}_{1}\) parallel sub-channels. Fig. 2(a) illustrates the detailed architecture that outlines the exploitation of DoF in NFC relying on SPD antennas.
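A minimal sketch of this SVD-plus-water-filling procedure is given below; the \(8\times 8\) random channel and unit noise power are illustrative placeholders rather than a near-field channel model.

```python
# A water-filling sketch over the parallel sub-channels obtained from SVD.
import numpy as np

def waterfill(gains, p_total, noise=1.0):
    """Power allocation over parallel sub-channels with gains `gains`."""
    inv = noise / gains ** 2              # inverse effective SNR per mode
    order = np.argsort(inv)
    inv_sorted = inv[order]
    for k in range(len(inv_sorted), 0, -1):
        mu = (p_total + inv_sorted[:k].sum()) / k   # water level, best k modes
        p = mu - inv_sorted[:k]
        if p[-1] >= 0:                    # all k allocations non-negative
            break
    powers = np.zeros_like(inv_sorted)
    powers[:k] = p
    out = np.zeros_like(powers)
    out[order] = powers                   # undo the sorting
    return out

H = np.random.randn(8, 8) + 1j * np.random.randn(8, 8)
s = np.linalg.svd(H, compute_uv=False)
p = waterfill(s, p_total=10.0)
print(np.sum(np.log2(1 + p * s ** 2)))   # achievable rate over the modes
```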
### _Discussion and Outlook_
#### II-C1 MIMO NLoS Channel
The DoF of MIMO NLoS channels is influenced by the geometrical distribution of scatterers. In a rich scattering environment, the MIMO channel can achieve full rank in both the near-field and far-field regions due to the random phase shifts introduced by scatterers. As a result, the achievable DoFs in MIMO NLoS channels may approach the minimum value between the numbers of receive and transmit antennas. When a large number of transceiving antennas are employed, the authors of [6] and [7] have demonstrated by leveraging sampling theory that the upper limit of \(\mathsf{EDoF}_{1}\) is directly proportional to the effective aperture of the transceivers.
For SPD-MIMO, the exact values of \(\mathsf{DoF}\) and \(\mathsf{EDoF}_{1}\) can be obtained from the SVD of the channel matrix \(\mathbf{H}\) for both LoS and NLoS channels. However, obtaining tractable closed-form expressions for these two performance metrics remains challenging. To address this, previous studies have investigated the upper limit of \(\mathsf{EDoF}_{1}\) under various channel conditions by considering the asymptotic scenario of a large number of transceiving antennas [4, 5, 6, 7]. These elegant expressions are derived using the Green's function model, which may appear opaque to newcomers who are experts in other fields. This leads to an important question: _Can there be DoF-related metrics that evaluate NFC performance in a non-asymptotic manner in closed form?_ The answer is affirmative, and the following parts provide the details of these metrics.
#### II-C2 \(\mathsf{EDoF}_{2}\)
Recently, some researchers have introduced an alternative metric to assess NFC performance, also termed as the "_effective degree of freedom (EDoF)_", which is given by \((\mathrm{tr}(\mathbf{H}\mathbf{H}^{\mathsf{H}})/\|\mathbf{H}\mathbf{H}^{ \mathsf{H}}\|_{F})^{2}\) and denoted as \(\mathsf{EDoF}_{2}\). \(\mathsf{EDoF}_{2}\) can be readily calculated for any arbitrary channel matrix, regardless of whether the system operates in near- or far-field regions, and under LoS or NLoS propagations. As an example, let us consider the LoS channel. In far-field LoS MIMO, the channel matrix has a rank of \(1\), and hence \(\mathsf{EDoF}_{2}\) becomes \(1\). Conversely, for near-field LoS MIMO, \(\mathsf{EDoF}_{2}\) falls between \(1\) and DoF, and it is also proportional to the number of transceiving antennas [8]. The upper limit of \(\mathsf{EDoF}_{2}\) is obtained for near-field LoS MIMO by letting the number of antennas approach infinity, demonstrating its inverse proportionality to the link distance. The results in [8] indicate that the near-field effect can enhance \(\mathsf{EDoF}_{2}\). Several studies have claimed, without any justifications, that \(\mathsf{EDoF}_{2}\) represents the equivalent number of
Fig. 2: Communication architecture based on orthogonal parallel sub-channels for MIMO NFC. In this figure, \(\{x_{n}\}_{n=1}^{\mathsf{EDoF}_{1}}\) are the transmitted symbols, \(\{\hat{x}_{n}\}_{n=1}^{\mathsf{EDoF}_{1}}\) are the received symbols, \(\{\sigma_{n}\}_{n=1}^{\mathsf{EDoF}_{1}}\) are the dominant singular values of the MIMO channel (channel matrix or Green’s function), and AWGN stands for additive white Gaussian noise.
sub-channels, as depicted in Fig. 2(a), and can be employed for evaluating the NFC performance [8]. However, it is crucial to note that these statements lack mathematical rigor and may lead to misinterpretations of the actual meaning and implications of EDoF\({}_{2}\).
The concept of EDoF\({}_{2}\) was originally introduced by Muharemovic _et al._[9], who built upon Verdu's previous work [10] to approximate the MIMO channel capacity as \(\mathsf{EDoF}_{2}\cdot[\log_{2}(\frac{E_{0}}{N_{0}})-\log_{2}(\frac{E_{0}}{N_{0 }\min})]\) in the low-SNR regime. Here, \(\frac{E_{0}}{N_{0}}\) represents the bit energy over noise power spectral density, and \(\frac{E_{0}}{N_{0}\min}\) is the minimum value required for reliable communications. Additionally, \(\frac{E_{0}}{N_{0}}\) is determined by the product of the channel capacity and the SNR [10, Eqn. (14)]. By considering the insights gleaned from [9] and [10], it becomes evident that EDoF\({}_{2}\) possesses a distinct physical interpretation when compared to EDoF\({}_{1}\) and DoF. Generally, the value of EDoF\({}_{2}\) is not directly associated with the number of dominant sub-channels depicted in Fig. 2(a). However, an exception occurs when the dominant sub-channels have nearly identical channel gains, i.e., \(\sigma_{1}\approx\ldots\approx\sigma_{\mathsf{EDoF}_{1}}\gg\sigma_{\mathsf{EDoF }_{1}+1}>\ldots>\sigma_{\mathsf{DoF}}\). In such cases, EDoF\({}_{1}\) can be approximately represented by the value of EDoF\({}_{2}\). Our numerical results in Section IV suggest that this scenario can happen in certain LoS channels. Nonetheless, this approximation remains heuristic, and its generality lacks mathematical rigor.
To summarize, EDoF\({}_{2}\) serves as a significant performance metric for NFC in the low-SNR region, yet it cannot be simply interpreted as the equivalent number of sub-channels. Its significance and interpretation are different from those of EDoF\({}_{1}\) and DoF. Hence, it is important to discern its distinct role in NFC.
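For concreteness, \(\mathsf{EDoF}_{2}\) is directly computable from any channel matrix, as the short sketch below shows; the rank-1 example mimics a far-field LoS channel and evaluates to (approximately) one, while a random rich channel gives a larger value.

```python
# EDoF_2 = (tr(HH^H) / ||HH^H||_F)^2, for an arbitrary channel matrix.
import numpy as np

def edof2(H):
    G = H @ H.conj().T
    return (np.trace(G).real / np.linalg.norm(G, 'fro')) ** 2

a = np.exp(1j * 2 * np.pi * np.random.rand(8))
print(edof2(np.outer(a, a.conj())))                        # ~1.0, rank-1 LoS
print(edof2(np.random.randn(8, 8) + 1j * np.random.randn(8, 8)))
```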
#### II-C3 \(\mathsf{EDoF}_{3}\)
To fully harness the spatial DoFs offered by NFC MIMO, it is desirable to operate the system in the high-SNR region. In such scenarios, the channel capacity should exhibit roughly linear growth vs. the DoF or EDoF\({}_{1}\), given a fixed transmit power. However, achieving this high SNR condition may not always be feasible in practical settings. In recognition of this fact, Shiu _et al._[11] introduced an alternative metric, also termed as the "_effective degree of freedom (EDoF)_", which represents the number of equivalent sub-channels actively participating in conveying information under specific operating conditions. For clarity, we refer to this metric as EDoF\({}_{3}\).
In a SISO channel, a \(G\)-fold increase in transmit power leads to a capacity increase of \(\log_{2}G\) bps/Hz at high SNRs. If a system is equivalent to EDoF\({}_{3}\) SISO channels in parallel, the overall system capacity should increase by EDoF\({}_{3}\cdot\log_{2}G\) bps/Hz when the transmit power is multiplied by a factor of \(G\). To formally define EDoF\({}_{3}\cdot\log_{2}G\), Shiu _et al._[11] express it as \(\frac{\mathrm{d}}{\mathrm{d}\delta}C(\mathsf{SNR}\cdot 2^{\delta})\big{|}_{\delta=0}\), where \(C(\mathsf{SNR})\) represents the MIMO channel capacity at a given SNR. It is important to note that \(C(\cdot)\) can refer to the instantaneous capacity, outage capacity, or ergodic capacity, making the expression of EDoF\({}_{3}\) applicable to arbitrary channel matrices, regardless of whether the system operates in the near- or far-field regions, and under LoS or NLoS propagations. Let us consider the LoS channel as an example. In far-field LoS MIMO, the channel matrix has a rank of \(1\), leading to EDoF\({}_{3}\) being no larger than \(1\). Conversely, for near-field LoS MIMO, EDoF\({}_{3}\) could exceed \(1\)[11]. Observe from this comparison that the near-field effect can improve EDoF\({}_{3}\).
Essentially, EDoF\({}_{3}\) describes the number of equivalent SISO sub-channels at a given SNR, making it a valuable performance indicator for NFC in different SNR scenarios.
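Since \(\mathsf{EDoF}_{3}\) is defined through a derivative of the capacity, it can be evaluated numerically by a finite difference, as sketched below; equal power allocation is assumed here purely for simplicity, and the random channel is an illustrative placeholder.

```python
# EDoF_3 via a central finite difference of the capacity curve.
import numpy as np

def capacity(H, snr):
    n_t = H.shape[1]
    G = H.conj().T @ H
    return np.log2(np.linalg.det(np.eye(n_t) + snr / n_t * G)).real

def edof3(H, snr, eps=1e-3):
    return (capacity(H, snr * 2 ** eps)
            - capacity(H, snr * 2 ** -eps)) / (2 * eps)

H = np.random.randn(4, 4) + 1j * np.random.randn(4, 4)
for snr in (0.1, 10.0, 1000.0):
    print(snr, edof3(H, snr))   # grows toward the rank of H as SNR increases
```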
#### II-C4 Summary and Outlook
A detailed comparison among DoF, EDoF\({}_{1}\), EDoF\({}_{2}\), and EDoF\({}_{3}\) is summarized in Table I. Taken together, these four DoF-related metrics possess different physical meanings and scopes of application. As such, they should be appropriately utilized based on the practical demands of NFC. While the existing results have been primarily focused on EDoF\({}_{1}\), it is crucial to develop a comprehensive mathematical framework for calculating the upper limits of EDoF\({}_{2}\) and EDoF\({}_{3}\) under both LoS and NLoS scenarios. This avenue represents a potential direction for future research.
## III DoFs Achieved in CAP-MIMO
Utilizing CAP antennas presents a promising technique of improving the performance of MIMO systems having limited apertures. In contrast to SPD-MIMOs, which involve a large number of discrete antennas having specific spacing, CAP-MIMO adopts an infinite number of antennas with infinitesimal spacing. This section investigates the spatial DoFs in near-field CAP-MIMO.
\begin{table}
\begin{tabular}{|l|c|c|c|c|} \hline \multirow{2}{*}{Metric} & \multicolumn{2}{c|}{Degre of Freedom: DoF} & \multicolumn{2}{c|}{Effective Degree of Freedom: EDoF} \\ \cline{2-5} & & DoF & EDoF\({}_{1}\) & EDoF\({}_{2}\) & EDoF\({}_{3}\) \\ \hline Definition & [3] & [4] & [9] & [11] \\ \hline Values Range & \(\in\mathbb{Z}^{*}\), \([1,N_{\min}]\) & \(\in\mathbb{Z}^{*}\), \([1,N_{\min}]\) & \(\in\mathbb{R}^{*}\), \([1,N_{\min}]\) & \(\in\mathbb{R}^{*}\), \([1,N_{\min}]\) \\ \hline SNR Range & \multicolumn{2}{c|}{High-SNR region} & Low-Medium-SNR region & Low-SNR region & All SNR ranges \\ \hline Relation with & Number of sub-channels & Number of & No direct relation with & Number of \\ Sub-channels & with a non-zero SNR & dominant sub-channels & the number of sub-channels & equivalent sub-channels \\ \hline SPD-MIMO & LoS & 1 & 1 & 1 \\ Face-Field & NLoS & Rank of \(\mathbf{H}^{*}\), \(>1\) & Obtained from SVD of \(\mathbf{H}\), \(\geqslant 1\) & \((\mathbf{tr}(\mathbf{H}^{*})/\|\mathbf{H}^{*}\|^{*})^{[9]}\), \(>1\) & \(\pm C(\mathsf{SNR}^{*}\cdot 2^{\delta})_{\min}\) [11] \(<N_{\min}\) \\ \cline{2-5} (Calculation) & \multicolumn{2}{c|}{Upper bound: \(N_{\min}\)} & Upper bound: \(N_{\min}\) & Upper bound: \(N_{\min}\) & Upper bound: \(N_{\min}\) \\ \hline SPD-MIMO & LoS & Rank of \(\mathbf{H}^{*}\), \(\geqslant 1\) & Obtained from SVD of \(\mathbf{H}\), \(\geqslant 1\) & \((\mathbf{tr}(\mathbf{H}^{*})/\|\mathbf{H}^{*}\|^{*})^{[2]}\), \(>1\) & \(\pm C(\mathsf{SNR}^{*}\cdot 2^{\delta})_{\min}\) [11] \(<N_{\min}\) \\ Near-Field & \multicolumn{2}{c|}{Upper bound: \(N_{\min}\)} & Upper limit: \(\pm C\), \(\pm C\) & \((\mathbf{tr}(\mathbf{H}^{*})/\|\mathbf{H}^{*}\|^{*})^{[2]}\), \(>1\) & \(\pm C(\mathsf{SNR}^{*}\cdot 2^{\delta})_{\min}\) [2] \(<N_{\min}\) \\ \cline{2-5} (Calculation) & \multicolumn{2}{c|}{NLoS} & Rank of \(\mathbf{H}^{*}\), \(\geqslant 1\) & Obtained from SVD of \(\mathbf{H}\), \(\geqslant 1\) & \((\mathbf{tr}(\mathbf{H}^{*})/\|\mathbf{H}^{*}\|^{*})^{[2]}\), \(>1\) & \(\pm C(\mathsf{SNR}^{*}\cdot 2^{\delta})_{\min}\) [2] \(<N_{\min}\) \\ \hline \end{tabular}
* \(A_{t/r}\) is the effective aperture size of the transmitter/receiver
* \(N_{\min}\) is the minimum value between \(N_{\mathrm{r}}\) and \(N_{\mathrm{t}}\)
* \(d\) is the link distance between the transmitter and receiver
\end{table} TABLE I: Summary of DoF-related metrics for MIMO NFC supported by SPD antennas.
### _Calculation of the DoF_
We consider a scenario where both the transmitter and receiver are equipped with CAP antennas, which is analogous to the MIMO setup for SPD antennas. However, in contrast to the SPD antenna array that delivers finite-dimensional signal vectors, the CAP surface supports a continuous distribution of source currents within the transmitting aperture, giving rise to the generation of an electric radiation field at the receiver aperture. The spatial channel impulse response between any two points on the transceiving surfaces is described by Green's function, which connects the transmitter's current distribution and the receiver's electric field via a spatial integral. Green's function accurately models the EM characteristics in free space and effectively represents the channel response between the transceivers, akin to the channel matrix for SPD-MIMOs.
#### III-A1 DoF
Based on the above considerations, the spatial CAP-MIMO channel can be decomposed into a series of parallel SISO sub-channels by finding the equivalent "SVD" of Green's function [4, Eqn. (27)]. The resultant equivalent "left singular vectors" and "right singular vectors" form two complete sets of orthogonal basis functions, one for the transmitter's aperture and the other for the receiver's aperture. The resultant equivalent "singular values" correspond to the channel gains of the decomposed sub-channels. Alternatively, these "singular values" can be obtained through the eigenvalue decomposition of the Hermitian kernel of Green's function (analogous to the correlation matrix \(\mathbf{H}\mathbf{H}^{\text{H}}\) for SPD antennas); see [4, Eqn. (42)] and [1, Section II-C] for more details. The number of non-zero "singular values" of Green's function, or equivalently, the non-zero eigenvalues of its kernel, is defined as the DoF, denoted as DoF. The DoF also signifies the number of SISO sub-channels at a non-zero SNR, each of which supports an independent communication mode within the entire system.
As noted in [4], the far-field LoS CAP-MIMO can support a maximum of one communication mode. Consequently, the DoF of far-field LoS CAP-MIMO is limited to \(1\). However, in the case of near-field LoS MIMO, the DoF has the potential to approach infinity due to the associated spherical wave propagation [4]. Therefore, we may conclude that the near-field effect significantly enhances the spatial DoFs for CAP-MIMO.
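Numerically, these continuous-operator quantities can be approximated by discretizing Green's function on the two apertures and eigen-decomposing the resulting Hermitian kernel, in the spirit of a Nyström method; in the sketch below, the 1D aperture lengths, link distance, and dominance threshold are illustrative assumptions.

```python
# A Nystrom-style sketch of the CAP-MIMO kernel eigen-decomposition.
import numpy as np

wavelength, d = 0.01, 5.0
L_t = L_r = 0.5                          # assumed 1D aperture lengths (meters)
n = 256                                  # quadrature points per aperture
t = np.linspace(-L_t / 2, L_t / 2, n)
r = np.linspace(-L_r / 2, L_r / 2, n)

dist = np.sqrt(d ** 2 + (r[:, None] - t[None, :]) ** 2)
G = np.exp(-2j * np.pi * dist / wavelength) / (4 * np.pi * dist)

# Eigenvalues of G G^H approximate the squared "singular values" of the
# continuous operator, up to the quadrature weight.
lam = np.linalg.eigvalsh(G @ G.conj().T) * (L_t / n) * (L_r / n)
lam = np.sort(lam)[::-1]
print(int(np.sum(lam > 0.01 * lam[0])))  # heuristic EDoF_1 count
```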
#### III-A2 \(\mathsf{EDoF}_{1}\)
The near-field CAP-MIMO system has the remarkable ability to support infinitely many communication modes. However, it is crucial to recognize that only those modes having significant channel gains can be effectively utilized to convey information. The total number of these effective communication modes is known as the \(\mathsf{EDoF}\), i.e., \(\mathsf{EDoF}_{1}\). Several methods have been proposed to determine or approximate the value of \(\mathsf{EDoF}_{1}\), such as analyzing the eigenvalues of the kernel of Green's function [4], employing sampling theory [5, 6, 7], utilizing diffraction theory [12], or leveraging Landau's theorem [13].
Prior research has demonstrated that for near-field LoS CAP-MIMO, the value of \(\mathsf{EDoF}_{1}\) is directly proportional to the product of the transmitter and receiver areas while being inversely proportional to the link distance [5, 6, 7]. On the other hand, for far-field LoS CAP-MIMO, the value of \(\mathsf{EDoF}_{1}\) is limited to \(1\). These findings highlight the superiority of NFC in terms of enhancing the spatial DoFs.
### _Exploitation of the DoF_
To fully exploit the increased EDoFs offered by near-field CAP-MIMO, it becomes essential to determine the left and right singular functions of Green's function and their associated singular values. This task involves solving the eigenvalue problem for the Hermitian kernel [1]. A potential architecture for the CAP-MIMO is illustrated in Fig. 2(b), which closely resembles that of SPD-MIMO. However, it is important to acknowledge that the computational complexity associated with solving the eigenvalue problem for CAP-MIMO is significantly higher than that for SPD-MIMO. Additionally, the architecture depicted in Fig. 2(b) requires the use of infinitely many radio-frequency chains.
### _Discussion and Outlook_
#### III-C1 MIMO NLoS Channel
The DoFs of near-field CAP-MIMO have also been investigated in the context of NLoS propagation. In [6] and [7], the authors explored various scattering environments and utilized sampling theory to analyze the \(\mathsf{EDoF}\). Their findings revealed that \(\mathsf{EDoF}_{1}\) of NLoS CAP-MIMO is higher than \(1\) in both the near-field and far-field regions. Moreover, they demonstrated that increasing the effective aperture of the transceivers can lead to further improvements of \(\mathsf{EDoF}_{1}\).
#### III-C2 \(\mathsf{EDoF}_{2}\)
The concept of \(\mathsf{EDoF}_{2}\) has been extended to CAP-MIMO channels upon replacing the channel matrix by Green's function [14, Eqn. (8)]. Closed-form formulas of \(\mathsf{EDoF}_{2}\) have been derived for near-field CAP-MIMO [8, 14], specifically for the LoS channel. The analysis reveals that while \(\mathsf{EDoF}_{2}\) of FFC is limited to \(1\), \(\mathsf{EDoF}_{2}\) of NFC is inversely proportional to the link distance. These findings underscore the advantage of NFC in terms of \(\mathsf{EDoF}_{2}\). However, it is essential to acknowledge that there are currently no studies proving that the channel capacity of CAP-MIMO satisfies \(\mathsf{EDoF}_{2}\cdot\left[\log_{2}(\frac{E_{b}}{N_{0}})-\log_{2}((\frac{E_{b}}{N_{0}})_{\min})\right]\) in the low-SNR regime. As a result, \(\mathsf{EDoF}_{2}\) remains a heuristic concept for CAP-MIMO, lacking precise physical interpretations. Further research is needed to establish a more rigorous and practical understanding of \(\mathsf{EDoF}_{2}\) in the context of CAP-MIMO.
#### III-C3 \(\mathsf{EDoF}_{3}\)
The concept of \(\mathsf{EDoF}_{3}\) is also applicable to CAP-MIMO. It is evident that \(\mathsf{EDoF}_{3}\) of a far-field LoS channel cannot exceed \(1\), while for near-field LoS CAP-MIMO, \(\mathsf{EDoF}_{3}\) can be higher than \(1\). However, it is important to note that due to the lack of closed-form expressions for the channel capacity of CAP-MIMO, calculating the exact value of \(\mathsf{EDoF}_{3}\) for near-field CAP-MIMO becomes intractable [15]. Therefore, further investigations are required to address this aspect and gain a deeper understanding of \(\mathsf{EDoF}_{3}\) in the context of near-field CAP-MIMO.
#### III-C4 Summary and Outlook
A detailed comparison of \(\mathsf{DoF}\), \(\mathsf{EDoF}_{1}\), \(\mathsf{EDoF}_{2}\), and \(\mathsf{EDoF}_{3}\) is summarized in Table II. The results presented in Table II primarily pertain to point-to-point CAP-MIMO channels. However, investigating the spatial
DoFs introduced by the near-field effect in a multiuser CAP-MIMO setup holds both theoretical and practical significance. As previously mentioned, the practical implementation of near-field CAP-MIMO is computationally intractable. Therefore, it is imperative to explore practical and scalable techniques of CAP-MIMO implementations.
## IV Numerical Results
In this section, we explore the enhanced DoFs and EDoFs offered by MIMO NFC through computer simulations in LoS channel scenarios.
### _Spd-Mimo_
Fig. 3 illustrates the DoFs and EDoFs in SPD-MIMO, showcasing the increased DoFs provided by the near-field effect. Specifically, in Fig. 3(a), we present the singular values of the MIMO channel matrix for different link distances and numbers of antennas. Notably, the DoF of NFC is significantly higher than the value achieved by FFC, surpassing the single DoF threshold. As shown, the singular values exhibit a slow decline until they reach a critical threshold, after which they decrease rapidly. The number of dominant singular values defines the \(\mathsf{EDoF}_{1}\). From Fig. 3(a), we can infer that as the number of antennas increases, the singular values, and thus the channel gains of the decomposed sub-channels, experience slight improvements, with \(\mathsf{EDoF}_{1}\) converging rapidly to its upper limit (as calculated in [4]). Additionally, it is noteworthy that a shorter link distance results in a higher \(\mathsf{EDoF}_{1}\)[4], showing the superiority of NFC in terms of DoF enhancement.
Fig. 3(b) presents the plot of \(\mathsf{EDoF}_{2}\) for SPD-MIMO in the near-field. We observe that as the number of antennas increases, \(\mathsf{EDoF}_{2}\) of SPD-MIMO converges to its limit, which is equivalent to \(\mathsf{EDoF}_{2}\) of CAP-MIMO [8]. This convergence occurs more rapidly for higher link distances. Remarkably, as depicted in the graph, SPD-MIMO having half-wavelength antenna spacing can achieve nearly the same \(\mathsf{EDoF}_{2}\) as CAP-MIMO. The results in Fig. 3(a) indicate that the singular values of our system satisfy \(\sigma_{1}\approx\ldots\approx\sigma_{\mathsf{EDoF}_{1}}\gg\sigma_{\mathsf{ EDoF}_{1}+1}>\ldots>\sigma_{\mathsf{DoF}}\), where \(\mathsf{EDoF}_{1}\) can be approximated by the value of \(\mathsf{EDoF}_{2}\). This observation aligns with the findings from Fig. 3(b).
In Fig. 3(c), we plot \(\mathsf{EDoF}_{3}\) as a function of the SNR. Observe that reducing the link distance enhances \(\mathsf{EDoF}_{3}\), further validating the use of the near-field effect to improve channel capacity. Additionally, we note that in the high-SNR regime, \(\mathsf{EDoF}_{3}\) can exceed \(\mathsf{EDoF}_{1}\) and \(\mathsf{EDoF}_{2}\). This phenomenon arises because the non-dominant sub-channels can also support reliable communications, when sufficient transmit power resources are available.
### _Cap-Mimo_
Fig. 4 presents an analysis of the DoFs in CAP-MIMO systems. Due to the computational complexity associated with calculating the channel capacity of CAP-MIMO [15], we focus
\begin{table}
\begin{tabular}{|l|l|c|c|c|c|} \hline
\multicolumn{2}{|l|}{\multirow{2}{*}{Metric}} & Degree of Freedom & \multicolumn{3}{c|}{Effective Degree of Freedom (EDoF)} \\ \cline{3-6}
\multicolumn{2}{|l|}{} & DoF & \(\mathsf{EDoF}_{1}\) & \(\mathsf{EDoF}_{2}\) & \(\mathsf{EDoF}_{3}\) \\ \hline
\multicolumn{2}{|l|}{Value range} & \(\in\mathbb{Z}^{+},[1,\infty)\) & \(\in\mathbb{Z}^{+},[1,\infty)\) & \(\in\mathbb{R}^{+},[1,\infty)\) & \(\in\mathbb{R}^{+},(0,\infty]\) \\ \hline
\multicolumn{2}{|l|}{SNR range} & High-SNR region & Low-to-medium-SNR region & Unknown & All SNR ranges \\ \hline
\multicolumn{2}{|l|}{Relation with sub-channels} & Number of sub-channels with a non-zero SNR & Number of dominant sub-channels & No direct relation with the number of sub-channels & Number of equivalent sub-channels \\ \hline
CAP-MIMO far-field & LoS & \(1\) & \(1\) & \(1\) & \(\frac{\mathrm{d}}{\mathrm{d}\delta}C(\mathsf{SNR}\cdot 2^{\delta})\big|_{\delta=0}\) \\ \cline{2-6}
(calculation) & NLoS & Obtained from solving the eigenvalue problem, \(>1\) & Obtained from solving the eigenvalue problem, \(\geq 1\) & The exact expression is unknown, \(>1\) & \(\frac{\mathrm{d}}{\mathrm{d}\delta}C(\mathsf{SNR}\cdot 2^{\delta})\big|_{\delta=0}\) \\ \hline
CAP-MIMO near-field & LoS & Obtained from solving the eigenvalue problem, \(>1\) & \(\propto A_{t}A_{r}\), \(\propto d^{-2}\), \(\geq 1\) [4, 5] & \(\propto d^{-1}\), \(\geq 1\) [14] & \(\frac{\mathrm{d}}{\mathrm{d}\delta}C(\mathsf{SNR}\cdot 2^{\delta})\big|_{\delta=0}\) \\ \cline{2-6}
(calculation) & NLoS & Obtained from solving the eigenvalue problem, \(>1\) & \(\propto A_{t}\), \(d\) [6, 7] & The exact expression is unknown, \(>1\) & \(\frac{\mathrm{d}}{\mathrm{d}\delta}C(\mathsf{SNR}\cdot 2^{\delta})\big|_{\delta=0}\) \\ \hline
\end{tabular}
* \(A_{t/r}\) is the effective aperture size of the transmitter/receiver; \(d\) is the link distance between the transmitter and receiver.
\end{table} TABLE II: Summary of DoF-related metrics for MIMO NFC supported by CAP antennas.
Fig. 3: Illustration of EDoFs in SPD-MIMO LoS channels, where both transmitter and receiver are equipped with uniform linear arrays (ULAs), each containing \(N\) antennas, and the system operates at a frequency of \(28\) GHz (with a corresponding wavelength of \(\lambda=1\) cm). The ULAs have an aperture size of \(1.37\) m. The center of the transmitter is located at the origin of a three-dimensional plane, while the center of the receiver is at \((0,d,0)\) with \(d\) denoting the link distance. The ULAs face each other and are parallel to the \(z\)-axis.
on illustrating \(\mathsf{EDoF}_{1}\) and \(\mathsf{EDoF}_{2}\) in this figure, and the numerical results for \(\mathsf{EDoF}_{3}\) are omitted.
In Fig. 4(a) and Fig. 4(b), we showcase \(\mathsf{EDoF}_{1}\) and \(\mathsf{EDoF}_{2}\) as functions of the link distance, respectively. To differentiate between the near-field and far-field regions, we mark the Rayleigh distance in both graphs. The figures demonstrate that both \(\mathsf{EDoF}_{1}\) and \(\mathsf{EDoF}_{2}\) can be enhanced by either increasing the aperture sizes of the transceivers or reducing the link distance. These strategies align with commonly employed techniques for supporting NFC. A notable observation from the comparison of Fig. 4(a) and Fig. 4(b) is that the curves for \(\mathsf{EDoF}_{1}\) follow similar trends to those of \(\mathsf{EDoF}_{2}\), corroborating the findings from Fig. 3(b).
The numerical results presented in Fig. 3 and Fig. 4 collectively underscore the substantial impact of near-field effects on augmenting the DoFs in MIMO systems. These findings contribute valuable insights to the understanding and design of NFC technologies.
## V Conclusion and Promising Research Directions
In this article, we have conducted an in-depth investigation into the performance of MIMO NFC from a DoF perspective. We began by elucidating the spatial DoFs achievable in near-field SPD-MIMO and exploring how these increased DoFs can be exploited for enhancing the channel capacity. Next, we analyzed and compared three DoF-related performance metrics, namely \(\mathsf{EDoF}_{1}\), \(\mathsf{EDoF}_{2}\), and \(\mathsf{EDoF}_{3}\), to their far-field counterparts for demonstrating the superiority of NFC in terms of spatial multiplexing and channel capacity. To further explore the potential of MIMO NFC, we extended these results to CAP-MIMO to determine the upper limit of performance. We have deepened the understanding of the augmented spatial DoFs offered by the near-field effect, with the hope of inspiring further innovations in this field. There are still numerous open research problems in this area, which are summarized from three aspects.
* DoF-Based Information-Theoretic Limits: The DoF is a significant information-theoretic measure directly related to channel capacity. Exploring the DoF to characterize the fundamental information-theoretic limits of NFC, including deriving the achievable DoF region, can provide essential insights for system design. Additionally, the pursuit of capacity-approaching transmission schemes for NFC from a DoF perspective represents a valuable endeavor.
* DoF-Based Performance Analysis: Although our analysis has concentrated on point-to-point MIMO NFC, extending our investigations to multiuser scenarios holds the potential to offer valuable insights into the spatial DoFs in more complex communication setups, presenting a promising avenue for future research. Additionally, the heuristic nature of \(\mathsf{EDoF}_{2}\) and the computational challenges in calculating \(\mathsf{EDoF}_{3}\) for CAP-MIMO necessitate further research efforts to derive precise physical interpretations and practical implications for these metrics.
* DoF-Inspired Beamforming Design: Effective beamforming designs are crucial for fully harnessing the increased DoFs offered by NFC. However, the computational and hardware complexities, particularly in the context of CAP-MIMO implementation, pose significant challenges. Therefore, there is a pressing need to explore scalable and computation-hardware efficient beamforming techniques that can exploit the benefits of augmented DoFs in practical NFC scenarios.
|
2301.13173 | Shape-aware Text-driven Layered Video Editing | Temporal consistency is essential for video editing applications. Existing
work on layered representation of videos allows propagating edits consistently
to each frame. These methods, however, can only edit object appearance rather
than object shape changes due to the limitation of using a fixed UV mapping
field for texture atlas. We present a shape-aware, text-driven video editing
method to tackle this challenge. To handle shape changes in video editing, we
first propagate the deformation field between the input and edited keyframe to
all frames. We then leverage a pre-trained text-conditioned diffusion model as
guidance for refining shape distortion and completing unseen regions. The
experimental results demonstrate that our method can achieve shape-aware
consistent video editing and compare favorably with the state-of-the-art. | Yao-Chih Lee, Ji-Ze Genevieve Jang, Yi-Ting Chen, Elizabeth Qiu, Jia-Bin Huang | 2023-01-30T18:41:58Z | http://arxiv.org/abs/2301.13173v1 | # Shape-aware Text-driven Layered Video Editing
###### Abstract
Temporal consistency is essential for video editing applications. Existing work on layered representation of videos allows propagating edits consistently to each frame. These methods, however, can only edit object appearance rather than object shape changes due to the limitation of using a fixed UV mapping field for texture atlas. We present a shape-aware, text-driven video editing method to tackle this challenge. To handle shape changes in video editing, we first propagate the deformation field between the input and edited keyframe to all frames. We then leverage a pre-trained text-conditioned diffusion model as guidance for refining shape distortion and completing unseen regions. The experimental results demonstrate that our method can achieve shape-aware consistent video editing and compare favorably with the state-of-the-art.
## 1 Introduction
**Image editing.** Recently, image editing [19, 20, 24, 34, 40, 44] has made tremendous progress, especially methods using diffusion models [19, 20, 44, 40]. With free-form text prompts, users can obtain photo-realistic edited images without artistic skills or labor-intensive editing. However, unlike image editing, video editing is more challenging due to the requirement of temporal consistency. Independently editing individual frames leads to undesired inconsistent frames, as shown in Fig. 1(a). A naive way to deal with temporal consistency in video editing is to edit a single frame and then propagate the change to all the other frames. Nevertheless, artifacts appear when the other frames contain pixels that are unseen from the edited frame, as shown in Fig. 1(b).
**Video editing and its limitations.** For consistent video editing, _Neural Layered Atlas_ (NLA) [18] decomposes a video into unified appearance layers called _atlases_. The layered decomposition helps consistently propagate the user edit to
individual frames with per-frame UV sampling association. Based on NLA, Text2LIVE [2] performs text-driven editing on atlases with the guidance of the Vision-Language model, CLIP [39]. Although Text2LIVE [2] makes video editing easier with a text prompt, it can only achieve _appearance manipulation_ due to the use of fixed-shape associated UV sampling. Since per-frame UV sampling gathers information on motion and shape transformation in each frame to learn the pixel mapping from the atlas, shape editing is not feasible, as shown in Fig. 2c.
**Our work.** In this paper, we propose a _shape-aware_ text-guided video editing approach. The core idea in our work lies in a novel UV map deformation formulation. With a selected keyframe and target text prompt, we first generate an edited frame using an image-based editing tool (_e.g._, Stable Diffusion [44]). We then perform pixel-wise alignment between the input and edited keyframe pair through a semantic correspondence method [51]. The correspondence specifies the deformation between the input-edited pair at the keyframe. According to the correspondence, the shape and appearance change can then be mapped back to the atlas space. We can thus obtain per-frame deformation by sampling the deformation from the atlas to the original UV maps. While this method helps with shape-aware editing, it is insufficient due to unseen pixels in the edited keyframe. We tackle this by further optimizing the atlas texture and the deformation using a pretrained diffusion model by adopting the gradient update procedure described in DreamFusion [38]. Through the atlas optimization, we achieve consistent _shape_ and _appearance_ editing, even in challenging cases where the moving object undergoes 3D transformation (Fig. 1).
**Our contributions.**
* We extend the capability of existing video editing methods to enable shape-aware editing.
* We present a deformation formulation for frame-dependent shape deformation to handle target shape edits.
* We demonstrate the use of a pre-trained diffusion model for guiding atlas completion in layered video representation.
## 2 Related Work
**Text-driven image synthesis and editing.** Recent years have witnessed impressive progress in text-guided image synthesis and manipulation using GANs [24, 25, 27, 41, 43, 55, 61]. On text-to-image _generation_, DALL-E [41] first demonstrates the benefits of training text-to-image models
Figure 2: **Limitation of existing work.** Compare these results from baseline methods with our “sports car” result in Fig. 1. (a) Multiple frames are edited _independently_ and interpolated by frame interpolation method [42]. Such an approach shows realistic per-frame results but suffers from temporal flickering. (b) Extracting a single keyframe for image editing, the edits are propagated to each frame via [17]. The propagated edits are temporally stable. However, it yields visible distortions due to the unseen pixels from the keyframe. (c) The SOTA Text2LIVE [2] results demonstrate temporally-consistent appearance editing but remain the source shape “Jeep” instead of the target prompt “sports car” by using the fixed UV mapping of NLA.
using a massive image-text dataset. Most recent text-to-image generators [6, 30] use a pre-trained CLIP [39] as the guidance. On text-to-image _manipulation/editing_, recent methods also take advantage of the pretrained CLIP embedding for text-driven editing [9, 58, 36]. These methods either pretrain the model with CLIP embedding as inputs or use a test-time optimization approach [2, 8, 21].
Recently, diffusion models [14, 50, 7] have shown remarkable success in both text-guided image generation [1, 44, 45, 46, 35, 46] and editing [12, 35, 44] tasks. Stable Diffusion [44] performs a denoising diffusion process in a latent space and achieves high-resolution text-to-image generation and image-to-image translation results. In particular, the release of the model trained on large-scale text-image pair dataset [47] facilitates various creative applications from artists and practitioners in the community. Our work leverages the state-of-the-art text-to-image model, Stable Diffusion [44], and extends its semantic image editing capability to consistent video editing.
**Video generation.** Building upon the success of photorealistic (text-driven) image generation, recent work has shown impressive results on video generation, with a focus on generating long video [5, 11, 49, 60] and videos from free-form text prompts [13, 48, 52]. Unlike video _generation_ methods, our work differs in that we perform text-driven video _editing_ for real videos.
**Video editing.** In contrast to the breakthrough of image editing, video editing methods are faced with two core challenges: 1) temporal consistency and 2) computational complexity of the additional dimension. To attain temporally consistent editing effects, EbSynth [17] utilizes keyframes and propagates the edits to the entire video with optical flows computed from consecutive frames. Such flow-based techniques have been applied in other tasks such as video synthesis [3], video completion [10, 26, 15], and blind video consistency [22, 23]. Several studies address temporal inconsistency in the latent space via GAN inversion [54, 29, 57]. However, current GAN-based models can only model datasets with limited diversity (e.g., portrait or animal faces). Another line of approaches [33, 32, 28, 59, 18] decomposes a video into a unified layer representation for consistent editing. Neural Layered Atlas (NLA) [18] performs test-time optimization on a given input video to learn the canonical appearance layer and per-frame UV mapping using a video reconstruction loss. With layer decomposition, one can apply text-driven image editing techniques to the unified layers to consistently broadcast the edits to each frame. The works most relevant to ours are Text2LIVE [2] and Loeschcke _et al._ [31]. Both methods build upon NLA to perform text-driven editing on the learned atlases. A pre-trained CLIP is used for each input video to guide the atlas editing via a test-time optimization framework. Yet, limited by the formulation of NLA, they only allow _appearance_ edits due to the fixed UV mapping from the atlas to frames. The mapping fields store the original shape information in each frame, so the fixed UV mapping restricts the freedom of _shape editing_ in [2, 31, 18]. Our work also builds upon NLA for achieving temporally consistent video editing. In contrast to existing methods [2, 31], we extend the capability of text-driven editing to enable shape editing.
## 3 Method
Given an input video \(\mathcal{I}^{s}_{1.N}\) and a text prompt, our proposed shape-aware video editing method produces a video \(\mathcal{I}^{\prime}_{1.N}\) with appearance _and_ shape changes while preserving the motion in the input video. For maintaining temporal consistency, our method uses the pre-trained video decomposition method, NLA [18], to acquire the canonical atlas layer \(\mathcal{I}^{s}_{A}\) and the associated per-frame UV map \(\mathcal{W}^{s}_{A\to 1.N}\) per motion group. For simplicity, we assume a single moving object in an input video so that there are two atlases \(\mathcal{I}^{s,FG}_{A}\) and \(\mathcal{I}^{s,BG}_{A}\) for foreground and background contents, respectively. The edits in \(\mathcal{I}^{s,FG}_{A}\) can be consistently transferred to each frame with UV mapping. To render the image \(\mathcal{I}^{s}_{j}\) back, we use \(\mathcal{W}^{s}_{A\to j}\) and an alpha map \(\alpha^{s}_{j}\) to sample and blend:
\[\begin{split}\mathcal{I}^{s}_{j}&=\mathcal{I}^{s, FG}_{j}*\alpha^{s}_{j}+\mathcal{I}^{s,BG}_{j}*(1-\alpha^{s}_{j}),\\ \mathcal{I}^{s,g}_{j}&=\mathcal{W}^{s,g}_{A\to j} \otimes\mathcal{I}^{s,g}_{A},g\in\{FG,BG\},\end{split} \tag{1}\]
where \(\otimes\) denotes the warping operation. Following our shape deformation introduction, we focus on the foreground atlas and will omit \(FG\) from \(\mathcal{I}^{s,FG}\) for simplicity.
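As a minimal sketch of this rendering step (not the NLA implementation, which parameterizes the atlases and UV maps with coordinate-based MLPs), the following assumes the atlases are stored as texture images and the UV maps as per-pixel sampling grids in \([-1,1]\); `render_frame` is a hypothetical helper name.

```python
import torch
import torch.nn.functional as F

def render_frame(atlas_fg, atlas_bg, uv_fg, uv_bg, alpha):
    """Reconstruct one frame from the layered atlases, as in Eq. (1).
    atlas_*: (1, 3, Ha, Wa) texture atlases.
    uv_*:    (1, H, W, 2) per-frame UV sampling grids in [-1, 1].
    alpha:   (1, 1, H, W) foreground opacity."""
    fg = F.grid_sample(atlas_fg, uv_fg, align_corners=True)   # foreground warp
    bg = F.grid_sample(atlas_bg, uv_bg, align_corners=True)   # background warp
    return fg * alpha + bg * (1 - alpha)                      # alpha blending

# toy usage with random tensors
uv = lambda: torch.rand(1, 432, 768, 2) * 2 - 1
frame = render_frame(torch.rand(1, 3, 1000, 1000), torch.rand(1, 3, 1000, 1000),
                     uv(), uv(), torch.rand(1, 1, 432, 768))
print(frame.shape)  # torch.Size([1, 3, 432, 768])
```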
We first select a single source keyframe \(\mathcal{I}^{s}_{k}\) to pass into a text-driven image editing tool (_e.g._, Stable Diffusion [44]). The edits in the target keyframe \(\mathcal{I}^{t}_{k}\) will then be propagated to \(\mathcal{I}^{t}_{1.N}\) through the atlas space with the mapping of \(\mathcal{W}^{s}_{A\to 1.N}\). Yet, the UV mapping cannot work when the edits involve _shape changes_, since \(\mathcal{W}^{s}_{A\to 1.N}\) is tailored to reconstructing the original shapes in the input video. Hence, to associate the target shape correctly, we propose a UV deformation formulation (Sec. 3.2) to transform each \(\mathcal{W}^{s}_{A\to j}\) into \(\mathcal{W}^{t}_{A\to j}\) according to the deformation between \((\mathcal{I}^{s}_{k},\mathcal{I}^{t}_{k})\). In other words, the keyframe deformation \(\mathcal{D}^{s\to t}_{k}\) between \((\mathcal{I}^{s}_{k},\mathcal{I}^{t}_{k})\) serves as the _bridge_ between input and output videos, changing the object into the edited target shape while preserving the source motion of the input. Note that the edits and the keyframe deformation \(\mathcal{D}^{s\to t}_{k}\) alone are insufficient due to areas that are unobserved from the viewpoint of image \(\mathcal{I}^{s}_{k}\). Therefore, to acquire a complete and consistent editing result, we leverage a pre-trained diffusion model to optimize the edited appearance and deformation parameters in the atlas space in Sec. 3.3. The process produces the final edited video \(\mathcal{I}^{t}_{1.N}\) with the desired object shape and appearance changes.
### Keyframe editing
With the given text prompt, we edit a representative keyframe \(\mathcal{I}^{s}_{k}\) (_e.g._, the middle frame of the video) by a pre-trained Stable Diffusion [44] to obtain target edited keyframe \(\mathcal{I}^{t}_{k}\). Afterward, we leverage a pre-trained semantic correspondence model [51] to associate the correspondence between two different objects. The pixel-level semantic correspondence is the deformation that transforms the target shape in \(\mathcal{I}^{t}_{k}\) to the source shape in \(\mathcal{I}^{s}_{k}\).
### Deformation formulation
With the estimated semantic correspondence, we can obtain the pixel-level _shape deformation vectors_, \(\mathcal{D}^{t\to s}_{k}\in\mathbb{R}^{H\times W\times 2}\). The target shape in \(\mathcal{I}^{t}_{k}\) is then deformed into the source shape in \(\mathcal{I}^{s}_{k}\) via \(\mathcal{D}^{t\to s}_{k}\):
\[\mathcal{I}^{t\to s}_{k}=\mathcal{D}^{t\to s}_{k}\otimes\mathcal{I}^{t}_{k}. \tag{2}\]
Since \(\mathcal{W}^{s}_{k\to A}\) maintains the original shape, we cannot directly map the edited \(\mathcal{I}^{t}_{k}\) to the atlas with it. With the aid of \(\mathcal{D}^{t\to s}_{k}\), however, the edited object can be back-projected to the atlas by \(\mathcal{W}^{s}_{k\to A}\) to form an edited atlas, \(\mathcal{I}^{t\to s}_{A}\).
Given the edited atlas \(\mathcal{I}^{t\to s}_{A}\), the appearance edits can already be propagated to each frame with \(\mathcal{W}^{s}_{A\to 1.N}\), albeit in the source shapes. However, this is insufficient, since our goal is to generate a new video with the _target_ shape. Therefore, in addition to propagating the edited appearance via the atlas space, we spread the displacement vectors to each frame to obtain a per-frame deformation: we back-project the keyframe deformation \(\mathcal{D}^{t\to s}_{k}\) into the atlas space \(A\) with \(\mathcal{W}^{s}_{k\to A}\) to get \(\mathcal{D}^{t\to s}_{A}\). Yet, simply _warping_ the vectors into the new image space is insufficient, as the coordinate system is also transformed by the warping operation. Therefore, we formulate a _shape deformation vector transformation_ matrix, \(\mathbf{M}_{\mathcal{W}}\), to express the deformation vectors w.r.t. the original coordinate system under a warp field \(\mathcal{W}\):
\[\mathcal{D}^{\prime}(x^{\prime},y^{\prime})^{T}=\mathbf{M}_{\mathcal{W}}\mathcal{D}(x,y)^{T}, \tag{3}\]
where \((x,y)\) and \((x^{\prime},y^{\prime})\) represent the corresponding pixels in the source and target images, respectively, under the warping field \(\mathcal{W}\) (_i.e._, \((x^{\prime},y^{\prime})=\mathcal{W}(x,y)\)). For pixel-level deformation, we compute a per-pixel transformation matrix \(\mathbf{M}_{\mathcal{W}}\) for
Figure 3: **Method overview. Given an input video and a target edit text prompt, our method first bases on a pre-trained NLA [18] to decompose the video into unified atlases with the associated per-frame UV mapping. Aside from video decomposition, we use the text-to-image diffusion model to manipulate a single keyframe in the video (Sec. 3.1). Subsequently, we estimate the dense semantic correspondence between the input and edited keyframes for shape deformation. The shape deformation of the keyframe serves as the _bridge_ between input and output videos for per-frame deformation through the UV mapping and atlas. Our deformation module (Sec. 3.2) transforms the UV map with the semantic correspondence to associate with the edits for each frame. To address the issues of unseen pixels from the single keyframe, we optimize the edited atlas and the deformation parameters guided by a pre-trained diffusion model with the input prompt (Sec. 3.3).**
each pixel \((x,y)\) by:
\[\mathbf{M}_{\mathcal{W}}=\begin{bmatrix}\mathcal{W}(x+\Delta x,y)-\mathcal{W}(x,y)\\ \mathcal{W}(x,y+\Delta y)-\mathcal{W}(x,y)\end{bmatrix}^{T}\begin{bmatrix}1/\Delta x&0\\ 0&1/\Delta y\end{bmatrix}, \tag{4}\]
where \(\Delta x\) and \(\Delta y\) denote small scalar shifts to form the local coordinate system in the source space. In practice, to avoid discrete sampling of the warping, we use a thin-plate spline [4] to approximate the warping field smoothly. We illustrate the transformation of the shape deformation vector in Fig. 4(c). With the transformation for the vector, we can obtain the corresponding deformation in the target warped space under the warp function \(\mathcal{W}\), which is the UV map in the atlas framework. Thus, the deformation map \(\mathcal{D}_{k}^{t\to s}\) is propagated to each \(I_{j}^{t}\) by:
\[\begin{split}\mathcal{D}_{A}^{t\to s}&=\mathbf{M}_{ \mathcal{W}_{k\to A}^{s}}\star(\mathcal{W}_{k\to A}^{s}\otimes\mathcal{D}_{k} ^{t\to s})\\ \mathcal{D}_{j}^{t\to s}&=\mathbf{M}_{\mathcal{W}_{A \to j}^{s}}\star(\mathcal{W}_{A\to j}^{s}\otimes\mathcal{D}_{A}^{t\to s}), \end{split} \tag{5}\]
where \(\star\) denotes the per-pixel matrix multiplication for the deformation map. Hence, we can deform the UV map \(\mathcal{W}_{A\to j}^{s}\) into \(\mathcal{W}_{A\to j}^{t}\) by \(\mathcal{W}_{A\to j}^{t}=\mathcal{D}_{j}^{s\to t}\otimes\mathcal{W}_{A \to j}^{s}\). Note that the alpha map for blending the target-shape object is also deformed in the same manner by \(\alpha_{j}^{t}=\mathcal{D}_{j}^{s\to t}\otimes\alpha_{j}^{s}\). Finally, the edited \(\mathcal{I}_{j}^{t}\) with initial deformation on the foreground object can be obtained by:
\[\mathcal{I}_{j}^{t}=\mathcal{W}_{A\to j}^{s}\otimes\mathcal{I}_{A}^{t\to s} \star\alpha_{j}^{t}+\mathcal{I}_{A}^{BG}\star(1-\alpha_{j}^{t}). \tag{6}\]
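The following sketch illustrates the deformation-vector transformation of Eqs. (3)-(4) on a discrete warp field sampled at the pixel grid; the method itself fits a smooth thin-plate spline instead, and the wrap-around boundary handling via `torch.roll` is a simplification assumed here.

```python
import torch

def warp_jacobian(warp, dx=1.0, dy=1.0):
    """Per-pixel transformation matrix M_W of Eq. (4) by finite differences.
    warp: (H, W, 2) field W mapping pixel (x, y) to W(x, y).
    Boundary handling by wrap-around (torch.roll) is a simplification."""
    col_x = (torch.roll(warp, shifts=-1, dims=1) - warp) / dx  # (W(x+dx,y)-W(x,y))/dx
    col_y = (torch.roll(warp, shifts=-1, dims=0) - warp) / dy  # (W(x,y+dy)-W(x,y))/dy
    return torch.stack([col_x, col_y], dim=-1)                 # (H, W, 2, 2)

def transform_vectors(M, D):
    """The per-pixel product D'(x', y')^T = M_W D(x, y)^T of Eq. (3)."""
    return torch.einsum('hwij,hwj->hwi', M, D)

warp = torch.rand(64, 64, 2)          # a toy warp field (e.g., a UV map)
D = torch.rand(64, 64, 2)             # deformation vectors in the source space
D_t = transform_vectors(warp_jacobian(warp), D)
print(D_t.shape)                      # torch.Size([64, 64, 2])
```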
### Atlas optimization
Through the deformation formulation in Sec. 3.2, we can already obtain an edited video with the corresponding shape changes if the semantic correspondence, _i.e._, \(D_{k}^{t\to s}\), is reliable. However, the estimated semantic correspondence is often inaccurate for shape deformation. As a result, it would yield distortions in some frames. Moreover, the edited atlas could be incomplete since it only acquires the editing pixels from the single edited keyframe so the unseen pixels from the keyframe are missing. Hence, these incomplete pixels produce visible artifacts in other frames.
To address these issues, we utilize an additional atlas network \(F_{\theta_{A}}\) and a semantic correspondence network \(F_{\theta_{\text{SC}}}\) to fill the unseen pixels and refine the noisy semantic correspondence via an optimization. Here, the atlas network \(F_{\theta_{A}}\) takes the initial appearance and deformation of the foreground atlas \((\mathcal{I}_{A}^{t\to s},\mathcal{D}_{A}^{t\to s})\) as input and outputs the _refined_ \((\tilde{\mathcal{I}}_{A}^{t\to s},\tilde{\mathcal{D}}_{A}^{t\to s})\). Similarly, the semantic correspondence \(\mathcal{D}_{k}^{t\to s}\) is approximated by a thin-plate spline. We feed the control points into the semantic correspondence network \(F_{\theta_{\text{SC}}}\) to obtain the refined \(\tilde{\mathcal{D}}_{k}^{t\to s}\).
We select several frames that capture different viewpoints for optimization. Our training of synthesizing the edited frames, \(\mathcal{I}^{t}\), is guided by a pre-trained Vision-Language model with the target prompt. Inspired by DreamFusion [38], we leverage a pre-trained diffusion model [44] to provide pixel-level guidance by backpropagating the gradient of the noise residual to the generated images (_without_ backpropagating through the U-Net model). Adding a noise \(\epsilon\) on \(\mathcal{I}^{t}\) as the input, the pretrained diffusion U-Net outputs a predicted noise \(\hat{\mathbf{\epsilon}}\). The gradient of the noise
Figure 4: **Deformation formulation. Given the semantic correspondence between the input and edited keyframes, we map the edits back to the atlas via the original UV map (in the shape of the original atlas). Meanwhile, we transform the per-pixel deformation vectors into the atlas space with the same UV mapping field by (c). Consequently, the UV map samples the color and the deformation vectors onto each frame to deform the original UV map respecting the edited shape.**
residual \(\hat{\epsilon}-\epsilon\) is backpropagated to update \(\theta\):
\[\nabla_{\theta}\mathcal{L}_{diff}(\mathcal{I}^{t})\triangleq\mathbb{E}_{i, \epsilon}[w(i)(\hat{\epsilon}-\epsilon)\frac{\partial\mathcal{I}^{t}}{\partial \theta}], \tag{7}\]
where \(i\) stands for the time step for the diffusion model and the parameter set \(\theta=\{\theta_{A},\theta_{\text{SC}}\}\). We update the unified information in the atlas space to maintain the temporal consistency of the editing appearance and deformation with only training on a few generated frames \(\mathcal{I}^{t}\).
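A sketch of one such update in the spirit of Eq. (7) is given below, assuming a generic `unet(x, i)` noise predictor and a DDPM-style `alphas_cumprod` schedule; the actual guidance uses Stable Diffusion (operating in its latent space) with its own weighting \(w(i)\), so the weighting choice and the placeholder components in the usage lines are assumptions for illustration only.

```python
import torch

def sds_step(render_fn, params, unet, alphas_cumprod, optimizer):
    """One gradient step: noise a rendered frame, query the diffusion U-Net,
    and inject w(i)(eps_hat - eps) as the frame's gradient without
    backpropagating through the U-Net."""
    frame = render_fn(params)                              # differentiable w.r.t. params
    i = torch.randint(0, len(alphas_cumprod), (1,))        # random diffusion time step
    a = alphas_cumprod[i].view(1, 1, 1, 1)
    eps = torch.randn_like(frame)
    noisy = a.sqrt() * frame + (1 - a).sqrt() * eps        # forward diffusion
    with torch.no_grad():                                  # U-Net is frozen
        eps_hat = unet(noisy, i)
    grad = (1 - a) * (eps_hat - eps)                       # w(i) = 1 - a: an assumed weighting
    optimizer.zero_grad()
    frame.backward(gradient=grad)                          # chain rule through the renderer only
    optimizer.step()

# toy usage with placeholder components (NOT Stable Diffusion)
params = torch.rand(1, 3, 16, 16, requires_grad=True)
sds_step(lambda p: p * 1.0, params, lambda x, i: torch.zeros_like(x),
         torch.linspace(0.999, 0.01, 1000), torch.optim.SGD([params], lr=0.1))
```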
In addition to the guidance of the diffusion model on multiple frames, we also apply several constraints to the learning of the refinement networks, \(F_{\theta_{A}}\) and \(F_{\theta_{\text{SC}}}\), to preserve the editing effects as in the target edited keyframe \(\mathcal{I}^{t}_{k}\). To ensure that the deformation through the atlas can successfully reconstruct the original edited \(\mathcal{I}^{t}_{k}\), the keyframe loss, \(\mathcal{L}_{k}\), measures the error between the original \(\mathcal{I}^{t}_{k}\) and the reconstructed \(\tilde{\mathcal{I}}^{t}_{k}\) by an L1 loss:
\[\mathcal{L}_{k}=|\mathcal{I}^{t}_{k}-\tilde{\mathcal{I}}^{t}_{k}|. \tag{8}\]
Besides, we also apply a total variation loss to encourage the spatial smoothness of the refined appearance in the atlas. The atlas loss is as follows:
\[\mathcal{L}_{A}=\mathcal{L}_{tv}(\mathcal{\bar{I}}^{t\to s}_{A}). \tag{9}\]
During the optimization, we also refine the semantic correspondence \(\mathcal{\bar{D}}^{t\to s}_{k}\) of the keyframe pair. An ideal semantic correspondence matches semantically-similar pixels and perfectly transforms the target shape into the source shape. Therefore, we compute the errors of the deformed target and the source object masks, \(\mathcal{M}^{t}_{k}\) and \(\mathcal{M}^{s}_{k}\):
\[\mathcal{L}_{\text{SC}}=|(\mathcal{\bar{D}}^{t\to s}_{k}\otimes \mathcal{M}^{t}_{k})-\mathcal{M}^{s}_{k}| \tag{10}\]
The total loss function is \(\mathcal{L}=\mathcal{L}_{diff}+\lambda_{k}\mathcal{L}_{k}+\lambda_{A}\mathcal{L}_{A}+\lambda_{\text{SC}}\mathcal{L}_{\text{SC}}\), with \(\lambda_{k},\lambda_{A},\lambda_{\text{SC}}=10^{6},10^{3},10^{3}\). The optimized parameters \(\theta^{*}\) are then used to generate the final edited video \(\mathcal{I}^{t*}_{1.N}\).
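A minimal sketch of these objectives on \((1,C,H,W)\) tensors follows; whether the L1 terms are reduced by sum or mean is not specified above, so `mean` is an assumption here.

```python
import torch

def keyframe_loss(recon, edited):
    """L_k (Eq. 8): L1 between the reconstructed and the edited keyframe."""
    return (recon - edited).abs().mean()

def atlas_tv_loss(atlas):
    """L_A (Eq. 9): total variation on the refined atlas, shape (1, C, H, W)."""
    return ((atlas[..., 1:, :] - atlas[..., :-1, :]).abs().mean() +
            (atlas[..., :, 1:] - atlas[..., :, :-1]).abs().mean())

def sc_loss(warped_target_mask, source_mask):
    """L_SC (Eq. 10): the deformed target mask should match the source mask."""
    return (warped_target_mask - source_mask).abs().mean()

def total_loss(l_diff, l_k, l_a, l_sc, lams=(1e6, 1e3, 1e3)):
    """Weighted combination with the stated lambda values."""
    return l_diff + lams[0] * l_k + lams[1] * l_a + lams[2] * l_sc
```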
**Implementation details.**
We implement our method in PyTorch. We follow the video configuration in NLA with the resolution of \(768\times 432\). We use a thin-plate spline to invert a warping field to prevent introducing holes by forward warping. The refinement networks, \(F_{\theta_{A}}\) and \(F_{\theta_{\text{SC}}}\), exploit the architectures of Text2LIVE [2] and TPS-STN [16], respectively. The optimization is performed on 3 to 5 selected frames, including \(I^{t}_{1}\), \(I^{t}_{k}\), and \(I^{t}_{N}\), for 600 to 1000 iterations. The optimization process takes 20 mins on a 24GB A5000 GPU. We further utilize an off-the-shelf super-resolution model [53] to obtain sharp details in the final edited atlases.
## 4 Experimental Results
Here we show sample editing results in the paper. We include additional video results in the supplementary material. We will make our source code and editing results publicly available to foster reproducibility.
### Experimental Setup
**Dataset.** We select several videos from DAVIS [37]. Each video contains a moving object in 50 to 70 frames. We edit each video with a prompt that describes a target object with a different shape from the original one.
**Compared methods.** We compare our results with SOTA and several baseline methods. For fair comparisons, all the baseline methods use the same image editing method, Stable Diffusion [44].
\(\bullet\)**Multi-frame baseline**: Multiple keyframes in a video are edited individually. The nearby edited keyframes temporally interpolate the remaining frames with FILM [42].
\(\bullet\)**Single-frame baseline**: We extract a single keyframe from a video to be edited. The edited information is then propagated to each frame with EbSynth [17].
\(\bullet\)**Text2LIVE**[2]: The SOTA text-driven editing method with NLA. Note that it utilizes a structure loss to preserve the original shape. We compare the official Text2LIVE in this section and show the comparison of removing structure loss in our supplementary material.
### Visual Comparison
We show a visual comparison with the baseline methods and Text2LIVE in Fig. 5. In the first example with "blackswan\(\rightarrow\)duck", the multi-frame baseline shows inconsistent editing in different frames. The single-frame baseline suffers from inaccurate frame motion and thus yields distortion during propagation. Text2LIVE shows a promising target appearance with temporal consistency but cannot change the shape that matches the target object. In contrast, our method provides the desired appearance _and_ consistent shape editing. In the second example with "boat\(\rightarrow\)yacht", the single-frame baseline shows an inconsistent shape since the frame propagation relies on the frame motion of the source shape. Consequently, it cannot propagate the edited pixels correctly in a different shape. In the third example with "dog\(\rightarrow\)cat", the input video contains a non-rigid motion moving object. It poses further challenges for multi- and single-frame baselines. Again, Text2LIVE demonstrates plausible cat appearance while remaining in the source dog shape. Our shape-aware method maintains the object motion and manipulates the texture and shape corresponding to the desired editing.
Figure 5: **Visual comparison with baselines and SOTA.** We show three examples with edits of “blackswan \(\rightarrow\)duck”, “boat \(\rightarrow\)yacht”, and “dog \(\rightarrow\)cat”. The multi-frame baseline shows inconsistency in the edited objects. The single-frame method suffers from the incomplete flow motion of the source object shape and thus could not propagate the edits properly. Text2LIVE demonstrates consistent appearance editing corresponding to the target edits. Nevertheless, the shape remains the same as the original object. In contrast, our proposed method outperforms the compared methods with consistent and plausible appearance and shape editing.
Figure 6: **Ablation study.** We study the effects of removing the deformation and optimization components. (a) Editing with the fixed NLA UV mapping. (b) Using a semantic correspondence with the fixed UV mapping, the edits are mapped to the atlas properly, but the results still retain the original shapes. (c) With deformation initialization (Sec. 3.2), the NLA UV maps are deformed to restore the target shape. (d) With further atlas optimization (Sec. 3.3), the incomplete pixels in the edited atlas and the distortion (in the car’s roof and back wheel) are refined.
### Ablation Study
We conduct an ablation study in Fig. 6 to validate the effectiveness of the UV deformation and atlas optimization. With the fixed NLA UV mapping, the shape edits in the keyframe cannot be adequately transformed through the atlas to each frame (Fig. 6a). By adding a keyframe semantic correspondence to deform the target into the source shape, the fixed UV mapping maps the edits correctly into the atlas, but the edited frames retain the source shapes (Fig. 6b). To restore the target shape, our deformation module deforms the UV maps by the semantic correspondence (Fig. 6c). However, the unseen pixels and inaccurate correspondence yield artifacts in different views (_e.g._, in the car's roof and back wheel). We refine the edited atlas and deformation with the atlas optimization (Fig. 6d).
### Application
We present an application of shape-aware interpolation in Fig. 7. Through interpolating the deformation maps, the object shape can be easily interpolated _without_ additional frame interpolation methods. Similarly, we can interpolate atlas textures. Note that we directly apply image editing on the background atlas since it can be treated as a natural panorama image (shown in Fig. 3). However, the foreground atlas is an unwrapped object texture, which is unnatural for general pre-trained editing models. Therefore, we edit the video frame and map it back to the atlas. This approach is more general and allows users to use their chosen images for video editing.
### Limitations
Our method strictly relies on the _many-to-one_ mapping from individual frames to a unified atlas. However, NLA may fail to obtain the ideal mapping in challenging scenarios with complex motions. Therefore, we observe artifacts in the erroneous mapping regions (_e.g._, the motion of the hind legs shown in Fig. 8). In addition, it remains difficult to build semantic correspondence between two different objects. While the atlas optimization can improve noisy correspondences, a poor semantic correspondence initialization would hinder the optimization. We show that user manual correction (in Fig. 9) can lead to better video editing results.
## 5 Conclusions
We have presented a shape-aware text-driven video editing method. We tackle the limitation of appearance-only
Figure 8: **Limitations.** We visualize a failure example (bear \(\rightarrow\)lion). The inaccurate NLA mapping in the motion of crossing hind legs yields distortion in the edited result.
Figure 7: **Shape-aware interpolation.** Our methods allow interpolation between two shapes by simply interpolating the atlas deformation maps. The examples demonstrate the gradual changes from source objects to edited objects over the time.
Figure 9: **User-guided correspondence.** Associating two different objects remains challenging even for the SOTA semantic correspondence methods. For a pair of source (a) and target (b), the severe false matching can be corrected by users’ manual warping for better results.
manipulation in existing methods. We propose a deformation formulation using layered video representation to transform the mapping field corresponding to the target shape edits. We further refine the unseen regions by utilizing the guidance from a pre-trained text-to-image diffusion model. Our method facilitates a variety of shape and texture editing applications.
**Societal impacts.** Our work proposes a tool for enabling creative video editing applications. Nevertheless, similar to many image/video synthesis applications, care should be taken to prevent misuse or malicious use of such techniques. We will release our code under a similar license as Stable Diffusion that focuses on ethical and legal use.1
Footnote 1: [https://github.com/CompVis/stable-diffusion/blob/main/Stable_Diffusion_v1_Model_Card.md](https://github.com/CompVis/stable-diffusion/blob/main/Stable_Diffusion_v1_Model_Card.md)
|
2305.01080 | Temporal Betweenness Centrality on Shortest Walks Variants | Betweenness centrality has been extensively studied since its introduction in
1977 as a measure of node importance in graphs. This measure has found use in
various applications and has been extended to temporal graphs with time-labeled
edges. Recent research by Buss et al. and Rymar et al. has shown that it is
possible to compute the shortest path betweenness centrality of all nodes in a
temporal graph in $O(n^3\,T^2)$ and $O(n^2\,m\,T^2)$ time, respectively, where
$T$ is the maximum time, $m$ is the number of temporal edges, and $n$ is the
number of nodes. These approaches considered paths that do not take into
account contributions from intermediate temporal nodes.
In this paper, we study the classical temporal betweenness centrality paths
that we call \textit{passive} shortest paths, as well as an alternative variant
that we call \textit{active} shortest paths, which takes into account
contributions from all temporal nodes. We present an improved analysis of the
running time of the classical algorithm for computing betweenness centrality of
all nodes, reducing the time complexity to $O(n\,m\,T+ n^2\,T)$. Furthermore,
for active paths, we show that the betweenness centrality can be computed in
$O(n\,m\,T+ n^2\,T^2)$. We also show that our results hold for different
shortest paths variants.
Finally, we provide an open-source implementation of our algorithms and
conduct experiments on several real-world datasets. We compare the results of
the two variants on both the node and time dimensions of the temporal graph,
and we also compare the temporal betweenness centrality to its static
counterpart. Our experiments suggest that for the shortest foremost variant
looking only at the first $10\%$ of the temporal interaction is a very good
approximation for the overall top ranked nodes. | Mehdi Naima | 2023-05-01T20:27:27Z | http://arxiv.org/abs/2305.01080v2 | # Temporal Betweenness Centrality on Shortest Paths Variants +
###### Abstract
Betweenness centrality has been extensively studied since its introduction in 1977 as a measure of node importance in graphs. This measure has found use in various applications and has been extended to temporal graphs with time-labeled edges. Recent research by Buss et al. [4] and Rymar et al. [15] has shown that it is possible to compute the shortest path betweenness centrality of all nodes in a temporal graph in \(O(n^{3}\,T^{2})\) and \(O(n^{2}\,m\,T^{2})\) time, respectively, where \(T\) is the maximum time, \(m\) is the number of temporal edges, and \(n\) is the number of nodes. These approaches considered paths that do not take into account contributions from intermediate temporal nodes.
In this paper, we study the classical temporal betweenness centrality paths that we call _passive_ shortest paths, as well as an alternative variant that we call _active_ shortest paths, which takes into account contributions from all temporal nodes. We present an improved analysis of the running time of the classical algorithm for computing betweenness centrality of all nodes, reducing the time complexity to \(O(n\,m\,T+n^{2}\,T)\). Furthermore, for active paths, we show that the betweenness centrality can be computed in \(O(n\,m\,T+n^{2}\,T^{2})\). We also show that our results hold for different shortest paths variants.
Finally, we provide an open-source implementation of our algorithms and conduct experiments on several real-world datasets. We compare the results of the two variants on both the node and time dimensions of the temporal graph, and we also compare the temporal betweenness centrality to its static counterpart. Our experiments suggest that for the shortest foremost variant looking only at the first 10% of the temporal interaction is a very good approximation for the overall top ranked nodes.
Keywords: Graph mining · Betweenness Centrality · Temporal Graphs · Temporal Paths · Shortest Paths · Time centrality · Restless walks.
## 1 Introduction
Betweenness centrality is a well-known centrality measure in static graphs that aims to identify central nodes in a graph. Centrality measures assign a value to each node (or edge) based on its importance (centrality). In a static graph, the betweenness centrality of a node is based on the number of shortest paths passing through that node. It was introduced by Freeman in [7]. This centrality has been studied extensively in the literature and is a classical measure in network analysis used in a variety of domains such as social networks [3], transportation [14], biology [13; 20] and scientific collaboration networks [12]. Additionally, betweenness centrality has been utilized as an efficient method for graph partitioning and community detection [8]. Brandes in [2] introduced a method for computing the betweenness centrality of a whole graph in \(O(n\,m+n^{2})\), which remains the fastest known algorithm.
Recently, betweenness centrality has been extended to dynamic graph formalisms such as temporal graphs [10] and stream graphs [11]. The generalization of betweenness centrality to a temporal setting is not unique, and many optimality criteria have been considered in the literature [4; 15; 19; 18; 9; 11], including shortest paths, fastest walks, foremost walks, and shortest fastest walks. However, in this paper we only focus on the shortest paths criterion, which has also been studied in [4; 15], as it is the most straightforward generalization of the static case. It is then possible to define the betweenness centrality of a node \(v\) at time \(t\) by:
\[B(v,t)=\sum_{s\neq v\neq z\in V}\frac{\sigma_{sz}(v,t)}{\sigma_{sz}},\]
where \(\frac{\sigma_{sz}(v,t)}{\sigma_{sz}}\) is the fraction of shortest temporal paths from \(s\) to \(z\) that pass through node \(v\) at time \(t\). Recent work on temporal betweenness centrality successfully adapted Brandes' algorithm to the temporal setting [4; 15]. For shortest paths, these approaches lead to time complexities of \(O(n^{3}\,T^{2})\) and \(O(n^{2}\,m\,T^{2})\) to compute the betweenness of a whole temporal graph. However, their algorithms considered only what we call _passive_ temporal paths, in which the path only exists when it arrives at a certain temporal node; moreover, they did not apply Brandes' algorithm to its full extent, as we shall see.
In this paper, we consider both what we call _passive_ and _active_ shortest paths, so that active paths exist all along a node until leaving it, while passive paths correspond to the classical version. For the classical (passive) **shortest paths**, we improve the time analysis of [4] and show that the betweenness centrality of the whole graph can be computed in \(O(n\,m\,T+n^{2}\,T)\). This bound increases to \(O(n\,m\,T+n^{2}\,T^{2})\) if considering active shortest paths. We also show that these bounds still hold for **shortest \(k\)-restless walks**, where it is not possible to stay more than \(k\) time units on the same node, for **strict shortest paths**, where traversing a node takes one time unit, and for **shortest foremost paths**, where it is required to arrive at the earliest possible time. Our time analysis shows that we use Brandes' approach to its full extent in the temporal setting: when the temporal graph is static (i.e., its edges exist at only one timestamp), our analysis reduces to the state-of-the-art algorithm on static graphs [2]. In fact, active walks were considered in [11, 18], but not for shortest walks (number of transitions); moreover, we seek a _systematic_ study of all these shortest walk variants, as was the case in [4, 15], by designing a single algorithm for all of them, which was not the aim of these works.
We also provide an open-source implementation in C++ and use it to assess the differences between active and passive variants on real-world temporal graphs, in both their _node_ and _time_ dimensions. On the _node dimension_ of the temporal graph, we compare temporal betweenness centrality to the static betweenness centrality computed on the aggregated graph. Our experiments show that the temporal and static betweenness centrality rankings of nodes are close to each other, with the static betweenness running 100 times faster. On the _time dimension_, our experiments show that the active variant gives much more importance to central times than the (passive) classical variant.
The paper is organized as follows. In Section 2 we introduce our formalism, which is a modified version of [15]. We start by defining active and passive walks and the betweenness centrality of a temporal node, and at the end of this section we state our main result, Theorem 1. After that, in Section 3, we give the main ideas and algorithms to prove our results. There, we use the construction of the predecessor graph and count shortest temporal paths. Finally, Section 4 presents our experimental results, where we focus on the differences in behaviour between active and passive paths and show how close the rankings of nodes under temporal betweenness centrality are to those under static betweenness centrality on the aggregated static graph. We also assess how much of the time interaction is needed to infer a reasonable ranking for the top ranked nodes. We end with a conclusion in Section 5.
## 2 Formalism
We use a formalism close to the ones used in [4, 15]. We define a directed temporal graph \(G\) as a triple \(G=(V,\mathcal{E},T)\) such that \(V\) is the set of vertices, \(T\in\mathbb{N}\), is the maximal time step with \([T]:=\{1,\ldots,T\}\) and \(\mathcal{E}\subseteq V\times V\times[T]\) is the set of temporal arcs (transitions). We denote by \(n:=|V|\) and \(m:=|\mathcal{E}|\). We call \(V\times[T]\) the set of temporal nodes. Then \((v,w,t)\in\mathcal{E}\) represents a temporal arc from \(v\) to \(w\) at time \(t\).
Definition 1 (Temporal walk): Given a temporal graph \(G=(V,\mathcal{E},T)\), a temporal walk \(W\) is a sequence of transitions \(e\in\mathcal{E}^{k}\) with \(k\in\mathbb{N}\), where \(e=(e_{1},\ldots,e_{k})\), with \(e_{i}=(u_{i},v_{i},t_{i})\), such that for each \(1\leq i\leq k-1\), \(u_{i+1}=v_{i}\) and \(t_{i}\leq t_{i+1}\).
The **length** of a temporal walk \(W\) denoted \(len(W)\) is its number of transitions. We also denote by \(arr(W)\) the time of the last transition of \(W\). We can associate a type of walks to consider on a temporal graph. We will study in this paper two types of walks on temporal graphs that are called **active** and **passive**
walks. We will denote the type of walks considered on a temporal graph \(G\) by
\[\operatorname{type}(G)\in\{active,passive\}.\]
The important difference between active and passive walks is that a passive walk only exists on node transitions; therefore a passive walk that arrives at \(v\) at time \(t\) and leaves \(v\) at \(t^{\prime}\) only exists on node \(v\) for a single time \(t\), while an active walk exists on \(v\) for all times \(t\leq i\leq t^{\prime}\). This difference is formally stated in the following definition.
Definition 2 (Node appearances and visited nodes): For a temporal graph \(G\), fix a type of walks. Let \(W\) be a temporal walk such that \(len(W)=k\) and let \(k>0\). We denote by \(\mathcal{A}(W)\) the list of node appearances of \(W\), for both types of walks it is defined as:
\[\mathcal{A}(W)=[(u_{1},t_{1})]+[(v_{i},t_{i})\,|\,1\leq i\leq k],\]
where \(+\) denotes list concatenation. We denote by \(\mathcal{V}(W)\) the list of visited temporal nodes. Then:
\[\mathcal{V}(W)=\begin{cases}\mathcal{A}(W)&\text{if }\operatorname{type}(G)=passive\\ [(u_{1},t_{1})]+\left(\biguplus_{i=1}^{k-1}[(v_{i},t)\,|\,t_{i}\leq t\leq t_{i+1}]\right)+[(v_{k},t_{k})]&\text{otherwise}\end{cases} \tag{1}\]
where \(\biguplus\) is used for concatenation of several lists.
We will denote by \(W[-1]\) the last visited temporal node, corresponding to the last element in \(\mathcal{V}(W)\). Let \(G=(V,\mathcal{E},T)\) be a temporal graph with \(\operatorname{type}(G)=active\). We will also consider **extending** active walks further in time on their last node. Let \(W\) be a non-empty active temporal walk with \(len(W)=k\) and let \(t_{k}<t\leq T\); then \(W_{t}\) is the extended temporal walk of \(W\) and we have \(\mathcal{V}(W_{t})=\mathcal{V}(W)+[(v_{k},t^{\prime})\,|\,t_{k}<t^{\prime}\leq t]\). We can denote a temporal walk by \(W=a\overset{1}{\rightarrow}b\overset{5}{\rightarrow}c\); although this notation does not distinguish between \(W\) being active or passive, the type of the walk will be clear from the context whenever we use it. See Figure 1 for an example of these concepts. This distinction between active and passive walks is important since the authors of recent results in this line of research [4, 15] consider only passive walks. A temporal walk is called a **path** if each node \(v\in V\) in the list of visited nodes appears exactly once. Moreover, a temporal walk \(W\) is a **strict** temporal walk if each transition time label is strictly larger than the previous one, that is, \(t_{i}>t_{i-1}\) for \(2\leq i\leq k\). Otherwise, the temporal walk is a **non-strict** temporal walk. Finally, for \(k\in\mathbb{N}\), a temporal walk is \(k\)**-restless** if the difference between two consecutive transition time stamps satisfies \(t_{i}-t_{i-1}\leq k\).
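As a small illustration of Definition 2 (a Python sketch; the implementation accompanying this paper is in C++), encode a walk as its list of transitions \((u,v,t)\):

```python
def appearances(walk):
    """A(W): node appearances of a walk given as transitions (u, v, t)."""
    (u1, _, t1) = walk[0]
    return [(u1, t1)] + [(v, t) for (_, v, t) in walk]

def visited(walk, kind="passive", extend_to=None):
    """V(W) of Definition 2; `extend_to` optionally extends an active walk."""
    if kind == "passive":
        return appearances(walk)
    (u1, _, t1) = walk[0]
    nodes = [(u1, t1)]
    for (_, v, t), (_, _, t_next) in zip(walk, walk[1:]):
        nodes += [(v, tt) for tt in range(t, t_next + 1)]  # stays on v in [t, t_next]
    v_k, t_k = walk[-1][1], walk[-1][2]
    nodes.append((v_k, t_k))
    if extend_to is not None:                              # extended walk W_t
        nodes += [(v_k, tt) for tt in range(t_k + 1, extend_to + 1)]
    return nodes

# the walk a -1-> b -2-> c from Figure 1
W = [("a", "b", 1), ("b", "c", 2)]
print(visited(W, "passive"))              # [('a', 1), ('b', 1), ('c', 2)]
print(visited(W, "active"))               # [('a', 1), ('b', 1), ('b', 2), ('c', 2)]
print(visited(W, "active", extend_to=4))  # ... plus ('c', 3), ('c', 4)
```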
Remark 1: We focus on shortest walks in the paper. However, it should be clear that all definitions hold when restricting the set of walks to \(k\)-restless ones and/or strict ones.
A walk \(W\) is an \(s-v\) walk if \(W\) starts in node \(s\) and ends in node \(v\), we denote by \(W_{sv}\) the set of all \(s-v\) walks. Let \(s-(v,t)\) walks be the set of walks that go from \(s\) to \(v\) and end by visiting \((v,t)\) that is \(W[-1]=(v,t)\). For passive walks, only walks with a last transition of the form \((w,v,t)\) for some \(w\in V\) achieve this condition. We denote by \(W^{pas}_{s(v,t)}\) the set of **passive**\(s-(v,t)\)** walks** and by \(W^{act}_{s(v,t)}\) the set of **active**\(s-(v,t)\) walks, then:
\[W^{pas}_{s(v,t)}=\{W\,|\,W\in W_{sv},W[-1]=(v,t)\}\cup\{\epsilon\,|\,s=v,\,\exists w\in V,(s,w,t)\in\mathcal{E}\},\]
However, for active walks there can be two types of walks: either the last transition of the walk is \((w,v,t)\), which we call an exact \(s-(v,t)\) walk, or the walk is an extension of an active walk that arrived at \(v\) at an earlier time \(t^{\prime}<t\). Then:
\[W^{act}_{s(v,t)}=\{W_{t}\,|\,W\in W_{sv},arr(W)\leq t\}\cup\{\epsilon\,|\,s=v,\,\exists w\in V,(s,w,t)\in\mathcal{E}\}.\]
Note that when the distinction is not needed we will drop the superscript and simply write \(W_{s(v,t)}\). Let \(G=(V,\mathcal{E},T)\) be a temporal graph. Fix a source \(s\in V\) and a type of walks. Then, for every temporal node \((v,t)\in V\times[T]\) we define \(c_{s}(v,t)\) to be the minimum cost over all temporal \(s-(v,t)\) walks, where the cost depends on the variant considered. Considering **shortest paths**, we define
\[c_{s}(v,t)=\min_{W\in W_{s(v,t)}}len(W).\]
If considering **shortest \(k\)-restless walks**
\[c_{s}(v,t)=\min_{\begin{subarray}{c}W\in W_{s(v,t)}\\ W\text{ is }k\text{-restless}\end{subarray}}len(W).\]
Finally, if considering **shortest foremost paths**
\[c_{s}(v,t)=\min_{W\in W_{s(v,t)}}(n\cdot t+len(W)),\]
where \(n\) is the number of nodes in the graph. For any of these versions, if we want strict walks, we simply add the strictness condition on \(W\). For all versions considered, if no such \(W\) exists, then \(c_{s}(v,t)=\infty\).
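The three cost criteria can be summarized in a short illustrative Python sketch, with a walk again given as a list of transitions \((u,v,t)\):

```python
def cost(walk, variant="shortest", n=None, k=None):
    """Cost c(W) of a walk under the variants above; returns inf if infeasible."""
    times = [t for (_, _, t) in walk]
    if any(t2 < t1 for t1, t2 in zip(times, times[1:])):
        return float("inf")                                # not time-respecting
    if variant == "shortest":
        return len(walk)                                   # number of transitions
    if variant == "k-restless":                            # waits bounded by k
        ok = all(t2 - t1 <= k for t1, t2 in zip(times, times[1:]))
        return len(walk) if ok else float("inf")
    if variant == "shortest-foremost":                     # earliest arrival first
        return n * times[-1] + len(walk)
    raise ValueError(variant)

W = [("a", "b", 1), ("b", "c", 2)]
print(cost(W), cost(W, "k-restless", k=1), cost(W, "shortest-foremost", n=3))
# 2 2 8
```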
Now, the overall optimal value from \(s\) to any time in node \(v\) is defined as
\[c_{s}(v)=\min_{t\in[T]}c_{s}(v,t).\]
Again, if the set of \(s-v\) walks is empty, then \(c_{s}(v)=\infty\). Finally, a temporal \(s-(v,t)\) walk \(W\) is an **optimal \(s-(v,t)\) walk** if \(len(W)=c_{s}(v,t)\). Similarly, an \(s-z\) walk \(W\) is an **optimal \(s-z\) walk** if \(len(W)=c_{s}(z)\). In the active type, an extended \(s-v\) walk \(W\) can be optimal for different times on \(v\). For instance, on the graph of Figure 1, for the walk \(W=a\stackrel{1}{\rightarrow}b\stackrel{2}{\rightarrow}c\), \(W_{2}\) is a shortest \(a-(c,2)\) walk and \(W_{4}\) is a shortest \(a-(c,4)\) walk.
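To make the computation of the values \(c_{s}(v,t)\) concrete for passive (non-strict) shortest walks, here is a simple BFS sketch in Python; it is illustrative rather than the algorithm analyzed later, and the auxiliary source state \((s,0)\) (present at \(s\) from the start) is an implementation convenience.

```python
from collections import defaultdict, deque

def shortest_costs(arcs, s):
    """c_s(v, t) for passive (non-strict) shortest walks, by a BFS over
    temporal nodes; arcs is an iterable of transitions (u, v, t)."""
    out = defaultdict(list)
    for (u, v, t) in arcs:
        out[u].append((t, v))
    dist = {(s, 0): 0}                        # (s, 0): present at s from the start
    dq = deque([((s, 0), 0)])
    while dq:
        (v, t), d = dq.popleft()
        if d > dist.get((v, t), float("inf")):
            continue                          # stale queue entry
        for (t2, w) in out[v]:
            if t2 < t:
                continue                      # this arc departed before we arrived
            if d + 1 < dist.get((w, t2), float("inf")):
                dist[(w, t2)] = d + 1         # take arc (v, w, t2): one more transition
                dq.append(((w, t2), d + 1))
    return dist

arcs = [("a", "b", 1), ("b", "c", 2), ("c", "b", 4)]   # toy graph
print(shortest_costs(arcs, "a"))
# {('a', 0): 0, ('b', 1): 1, ('c', 2): 2, ('b', 4): 3}
```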
Remark 2: **Shortest walks** are necessarily paths while this is not true in general for **shortest \(k\)-restless walks**. In fact, finding a \(k\)-restless path has been shown to be NP-hard in [5]. As a consequence we will use the term walks in general because we want to encompass all variants.
Definition 3 (Set of optimal walks): Let \(G=(V,\mathcal{E},T)\) be a temporal graph and fix a type of walks. Then
\[\mathcal{W}=\begin{cases}\bigcup_{s,z\in V,s\neq z}\{W\,|\,W\in W_{sz},len(W)=c_{s}(z)\}&\text{if type($G$) = passive}\\ \bigcup_{s,z\in V,s\neq z}\{W_{T}\,|\,W\in W_{sz},len(W_{T})=c_{s}(z)\}&\text{otherwise.}\end{cases}\]
The extension of shortest walks to the end of time in the active variant is related to the convenience of the calculations; in fact, the starting and ending nodes of a walk do not contribute to the betweenness centrality, as we will see in Definition 6. Likewise, we denote by \(\mathcal{W}_{s(v,t)}\) the set of **optimal \(s-(v,t)\) walks**. That is,
\[\mathcal{W}_{s(v,t)}=\{W\,|\,W\in W_{s(v,t)},len(W)=c_{s}(v,t)\}.\]
As we see, in the active case the walks extend to the last time unit, while in the passive type the walk stops at its last vertex appearance. We see that \(\mathcal{W}\) is the set of shortest walks between any pair of nodes; it keeps only walks with an overall shortest value. For instance, on the graph in Figure 1, considering shortest paths of passive type, the path \(W=a\stackrel{1}{\rightarrow}b\stackrel{2}{\rightarrow}c\stackrel{4}{\rightarrow}b\) is a shortest \(a-(b,4)\) walk but not a shortest \(a-b\) walk. In the active case, \(W\) is neither a shortest \(a-(b,4)\) walk nor a shortest \(a-b\) walk, since the shortest \(a-(b,4)\) walk is the extended \((a\stackrel{1}{\rightarrow}b)_{4}\).
Definition 4: Let \(G=(V,\mathcal{E},T)\) be a temporal graph. Fix a type of walks and let \(s,v,z\in V\) and \(t\in[T]\). Let \(\mathcal{W}\) be as in Definition 3. Then,
* \(\sigma_{sz}\) is the number of \(s-z\) walks in \(\mathcal{W}\).
* \(\sigma_{sz}(v,t)\) _is the number of_ \(s-z\) _walks_ \(W\in\mathcal{W}\)_, such that_ \(W\) _passes through_ \((v,t)\) _that is_ \((v,t)\in\mathcal{V}(W)\)_._
Definition 5: \[\delta_{sz}(v,t)=\begin{cases}0&\text{if }\sigma_{sz}=0\text{,}\\ \dfrac{\sigma_{sz}(v,t)}{\sigma_{sz}}&\text{otherwise.}\end{cases}\qquad\qquad \delta_{s\bullet}(v,t)=\sum_{z\in V}\delta_{sz}(v,t).\]
Definition 6 (Betweenness centrality of a temporal node): The betweenness centrality of node \(v\) at time \(t\) is:
\[B(v,t)=\sum_{s\neq v\neq z}\delta_{sz}(v,t). \tag{2}\]
From the preceding we define
\[\hat{B}(v,t)=\sum_{s,z\in V}\delta_{sz}(v,t)\implies\quad\hat{B}(v,t)=\sum_{s \in V}\delta_{s\bullet}(v,t).\]
The quantities \(B(v,t)\) and \(\hat{B}(v,t)\) are related through:
\[B(v,t)=\hat{B}(v,t)-\sum_{w\in V}(\delta_{vw}(v,t)+\delta_{wv}(v,t))=\hat{B}( v,t)-\delta_{v\bullet}(v,t)-\sum_{w\in V}\delta_{wv}(v,t) \tag{3}\]
For instance, on Figure 1, considering **passive** shortest walks (the approach used in [4, 15]), \(B(b,1)=2\) and \(\forall t>1,B(b,t)=0\), while if we consider **active** shortest walks we have \(B(b,1)=B(b,2)=2\), \(B(b,3)=B(b,4)=1\), showing that the **active** version takes into account contributions from intermediate temporal nodes on shortest paths, while this is not true for the **passive** version.
From the betweenness centrality of a temporal node we can get an overall betweenness centrality of a node \(v\) and an overall betweenness centrality of a time.
Definition 7 (Overall betweenness of a node and overall betweenness of a time): \[B(v)=\sum_{t\in[T]}B(v,t),\quad B(t)=\sum_{v\in V}B(v,t).\] (4)
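For instance, the overall quantities can be read off directly from \(B(v,t)\); the Python sketch below aggregates the active-variant values of the Figure 1 example above.

```python
from collections import defaultdict

def aggregate(B_vt):
    """Overall node and time betweenness of Definition 7 from B(v, t)."""
    B_v, B_t = defaultdict(float), defaultdict(float)
    for (v, t), b in B_vt.items():
        B_v[v] += b
        B_t[t] += b
    return dict(B_v), dict(B_t)

# active-variant values of B(b, t) from the Figure 1 example
print(aggregate({("b", 1): 2.0, ("b", 2): 2.0, ("b", 3): 1.0, ("b", 4): 1.0}))
# ({'b': 6.0}, {1: 2.0, 2: 2.0, 3: 1.0, 4: 1.0})
```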
The authors of [4, 15] focus on computing \(B(v)\) for all \(v\in V\). Our main result is the following
Theorem 1: _Let \(G=(V,\mathcal{E},T)\) be a temporal graph. For passive walks, the betweenness centrality of all temporal nodes can be computed in \(O(n^{2}\,T\,+n\,m\,T)\). For active walks, the betweenness centrality of all temporal nodes can be computed in \(O(n^{2}\,T^{2}\,+n\,m\,T)\). Both results hold for (strict and non-strict) shortest walks as well as (strict and non-strict) shortest \(k\)-restless walks._
Proof: We leave the proof of Theorem 1 to the end of Section 3.
**Discussion**. The authors of [4, 15] showed that for passive walks, the betweenness centrality can be computed in \(O(n^{3}\,T^{2})\) and \(O(n^{2}\,m\,T^{2})\), respectively. Since the maximal number of temporal arcs is \((n-1)^{2}T\), our bounds are always better than the previously known ones. Additionally, in the introduction we mentioned using _Brandes' approach to its full extent_: the previous approaches, when \(T=1\), reduce to \(O(n^{3})\) and \(O(n^{2}\,m)\), while our analysis leads to \(O(nm+n^{2})\). Therefore our approach recovers the optimal-time static algorithm when the temporal graph is static. Table 1 summarises our results.
## 3 Main algorithms and proofs
The two major steps of the proof are the following. The first step is to build a predecessor graph from a fixed node \(s\in V\) efficiently. This predecessor graph then allows us to compute the contributions of node \(s\in V\) to the betweenness centrality of all other nodes. The second step is to find a recurrence that allows us to compute the aforementioned contributions efficiently. Proposition 4 and Proposition 5 correspond to these steps. For all quantities defined in this paper, when the distinction between active and passive walks is needed, we write pas for passive and act for active in the superscript. For instance, \(\delta_{ab}^{act}(v,t)\) means we want \(\delta_{ab}(v,t)\) for active walks on the considered temporal graph.
Remark 3: All missing proofs can be found in the **Appendix**.
Definition 8 (Exact shortest \(s-(v,t)\) walks): We say that an \(s-(v,t)\) walk \(W\) is an exact shortest \(s-(v,t)\) walk if \(W\in\mathcal{W}_{s(v,t)}\) and \(arr(W)=t\). We denote by \(\dot{\mathcal{W}}_{s(v,t)}\) the set of all exact shortest \(s-(v,t)\) walks.
Note that for passive walks \(\dot{\mathcal{W}}_{s,(v,t)}\) and \(\mathcal{W}_{s,(v,t)}\) coincide, while this is not true in general for active walks. A consequence of our framework is that the empty walk is an exact \(s-(s,t)\) walk whenever there exists \(w\in V\) with \((s,w,t)\in\mathcal{E}\); then \(c_{s}(s,t)=0\). The predecessor graph in temporal settings has been used in [15, 4]. Here, we extend its definition to encompass active walks as well.
\begin{table}
\begin{tabular}{l c c c} \hline & [15] & [4] & Theorem 1 \\ \hline Shortest paths (passive) & \(O(n^{2}\,m\,T^{2})\) & \(O(n^{3}\,T^{2})\) & \(O(n^{2}T\,+nmT)\) \\ Shortest \(k\)-restless walks (passive) & \(O(n^{2}\,m\,T^{2})\) & - & \(O(n^{2}T\,+nmT)\) \\ Shortest foremost (passive) & \(O(n^{2}\,m\,T^{2})\) & \(O(n^{3}\,T^{2})\) & \(O(n^{2}T\,+nmT)\) \\ Shortest paths (active) & - & - & \(O(n^{2}T^{2}\,+nmT)\) \\ \(k\)-restless walks (active) & - & - & \(O(n^{2}T^{2}\,+nmT)\) \\ \hline \end{tabular}
\end{table}
Table 1: Improvement of previously known results by Theorem 1. The results on active walks were not studied in this form to the best of our knowledge. All results hold for non-strict and strict variants.
Definition 9 (predecessor set, successor set): Let \(G=(V,\mathcal{E},T)\) be a temporal graph, fix a type of walks and a source \(s\in V\). Then for all \(w\in V,w\neq s\), with \(\dot{\mathcal{W}}_{s(w,t^{\prime})}\) the set of exact shortest \(s-(w,t^{\prime})\) walks:
\[pre_{s}(w,t^{\prime})= \{(v,t)\in V\times[T]\mid\exists\,m\in\dot{\mathcal{W}}_{s(w,t^{\prime})},m=s\stackrel{{ t_{1}}}{{\rightarrow}}\ldots\stackrel{{ t}}{{\rightarrow}}v\stackrel{{ t^{\prime}}}{{\rightarrow}}w\}\] \[\cup\{(s,t^{\prime})\mid\exists\,m\in\dot{\mathcal{W}}_{s(w,t^{\prime})},m=s\stackrel{{ t^{\prime}}}{{\rightarrow}}w\}.\]
_The successor set of a node \(\text{succ}_{s}(w,t^{\prime})=\{(v,t)\,|\,(w,t^{\prime})\in pre_{s}(v,t)\}\)._
Definition 10 (Predecessor graph): The predecessor graph is the directed graph obtained from \(pre_{s}\), whose vertices are
\[V_{s}=\{(s,t^{\prime})\,|\,\exists(w,t^{\prime}),(s,t^{\prime})\in pre_{s}(w, t^{\prime})\}\cup\{(w,t^{\prime})\mid pre_{s}(w,t^{\prime})\neq\emptyset\}\]
and arcs
\[E_{s}=\{((v,t),(w,t^{\prime}))\mid(v,t)\in pre_{s}(w,t^{\prime})\}\]
Then the predecessor graph \(G_{s}=(V_{s},E_{s})\).
An example of the predecessor graph for active and passive walks is depicted on Figure 2. As we shall see, a path in the predecessor graph represents a unique walk in the temporal graph; the next lemma makes this correspondence precise. A path in the predecessor graph is denoted \((s,t_{1})\rightarrow(v_{1},t_{1})\rightarrow(v_{2},t_{2})\rightarrow\ldots\)
Lemma 1: _Fix \(s\in V\). There is a one-to-one correspondence between a path \(p\) in the predecessor graph \(G_{s}\) starting from node \(s\) at some time and ending in \((v,t)\) and an exact shortest \(s-(v,t)\) walk._
Proof: By induction we show that a path in the predecessor graph \(G_{s}\) starting at node \(s\) corresponds to an exact shortest \(s-(v,t)\) walk. Let \(p\) be a path in the predecessor graph. Then \(p\equiv p^{\prime}\rightarrow(v,t)\) for some path \(p^{\prime}\). Let \((w,t^{\prime})\) be the last node in \(p^{\prime}\). By the induction hypothesis, suppose that \(p^{\prime}\) corresponds to an exact shortest \(s-(w,t^{\prime})\) walk \(P^{\prime}\). Since \(((w,t^{\prime}),(v,t))\in E_{s}\), there exists an exact shortest \(s-(v,t)\) walk \(M\) that passes through \((w,t^{\prime})\) before arriving at \((v,t)\) (in the ordering of \(\mathcal{A}(M)\)). Let \(M\equiv M^{\prime}\oplus(w\stackrel{{ t}}{{\rightarrow}}v)\). \(M^{\prime}\) is necessarily an exact shortest \(s-(w,t^{\prime})\) walk, since otherwise \(M\) would not be an exact shortest \(s-(v,t)\) walk. Now \(M^{\prime}\) and \(P^{\prime}\) are both exact shortest \(s-(w,t^{\prime})\) walks, implying that they have the same length. Since \(M\) extends \(M^{\prime}\) with a single edge, \(c_{s(v,t)}=c_{s(w,t^{\prime})}+1\). Therefore, \(P\equiv P^{\prime}\oplus(w\stackrel{{ t}}{{\rightarrow}}v)\) is an exact shortest \(s-(v,t)\) walk as well. Conversely, we show by induction that an exact shortest \(s-(v,t)\) walk \(P\) corresponds to a single path in \(G_{s}\) starting at \((s,i)\) for some time \(i\). Let \((w,t^{\prime})\) be the last node appearance before \((v,t)\). Then \(P\equiv P^{\prime}\oplus(w\stackrel{{ t}}{{\rightarrow}}v)\), and \(P^{\prime}\) is an exact shortest \(s-(w,t^{\prime})\) walk. By the induction hypothesis, let \(p^{\prime}\) be the corresponding path in the predecessor graph. Then \(((w,t^{\prime}),(v,t))\in E_{s}\), and the path \(p\equiv p^{\prime}\rightarrow(v,t)\) is a path in the predecessor graph.
Figure 2: The predecessor graphs of shortest paths from node \(a\) on the temporal graph of Figure 1. (left) the walks are considered active and (right) the walks are considered passive.
Lemma 2: _The predecessor graph \(G_{s}\) from \(s\in V\) is acyclic for any of the considered variants._
Proof: By Lemma 1, there are no exact shortest \(s-(v,t)\) walks containing cycles, because otherwise we could immediately construct an exact \(s-(v,t)\) walk \(W\) without cycles and with \(len(W)<c_{s,(v,t)}\), which is impossible. Hence the predecessor graph from \(s\) is acyclic.
Lemma 3: _The predecessor graph \(G_{s}\) from \(s\in V\) based on passive shortest or passive shortest foremost paths is the same._
Proof: Any shortest \(s-(v,t)\) path \(p\) is a shortest foremost \(s-(v,t)\) path as well implying that each temporal node \((v,t)\) has the same predecessor set.
Remark 4: In fact, a shortest \(s-(v,t)\) path \(p\) could belong to the set of optimal walks for shortest foremost paths and not to the set of optimal walks for shortest paths. But the predecessor graph does not take this into account.
We define \(\dot{\sigma}_{s,(v,t)}=|\dot{\mathcal{W}}_{s,(v,t)}|\) which corresponds to the number of shortest exact \(s-(v,t)\) walks. We also denote by \(\sigma_{s(v,t)}\) the number of shortest \(s-(v,t)\) walks, then \(\sigma_{s(v,t)}=|\mathcal{W}_{s(v,t)}|\).
Proposition 1: _For any temporal node \((v,t)\), there holds that:_
\[\dot{\sigma}_{s(v,t)}=\begin{cases}0&\text{ if }(v,t)\notin V_{s},\\ 1&\text{ if }(v,t)\in V_{s}\text{ and }v=s,\\ \sum\limits_{(w,t^{\prime})\in pre_{s}(v,t)}\dot{\sigma}_{s(w,t^{\prime})}&\text{ otherwise.}\end{cases} \tag{5}\]
Proof: By Lemma 1, \(\dot{\sigma}_{s(v,t)}\) corresponds to the number of paths in the predecessor graph from \(s\) ending in \((v,t)\). By induction, any exact shortest \(s-(v,t)\) walk comes from a predecessor \((w,t^{\prime})\) with \((w,t^{\prime})\in pre_{s}(v,t)\). Each exact shortest \(s-(w,t^{\prime})\) walk can be extended uniquely by appending \((w\stackrel{{ t}}{{\rightarrow}}v)\) to it, making it an exact shortest \(s-(v,t)\) walk.
Proposition 2: _Let \(s\in V\) and for any temporal node \((v,t)\in V\times[T]\):_
\[\sigma_{s(v,t)}^{pas}=\begin{cases}\dot{\sigma}_{s(v,t)}&\text{if }(v,t)\in V_{s},\\ 0&\text{otherwise,}\end{cases}\quad\sigma_{s(v,t)}^{act}=\sum_{\begin{subarray}{c}t^{\prime}\leq t,\ \dot{\mathcal{W}}_{s,(v,t^{\prime})}\neq\emptyset\\ c_{s}(v,t^{\prime})=c_{s}(v,t)\end{subarray}}\dot{\sigma}_{s(v,t^{\prime})} \tag{6}\]
Proof: For passive walks, the set \(\mathcal{W}_{s(v,t)}\) of shortest \(s-(v,t)\) walks and the set \(\dot{\mathcal{W}}_{s(v,t)}\) of exact shortest \(s-(v,t)\) walks coincide. Therefore, \(\sigma_{s(v,t)}^{pas}=\dot{\sigma}_{s(v,t)}\). For active walks, an exact shortest \(s-(v,t)\) walk \(W\) can still be a shortest \(s-(v,t^{\prime})\) walk for \(t^{\prime}>t\) if \(c_{s(v,t)}=c_{s(v,t^{\prime})}\), since walks remain active after their last transition.
Finally, it is straightforward to compute \(\sigma_{sv}\) and \(\sigma_{sv}(v,t)\) for all \(v\in V\), \(t\in[T]\) from the preceding. The same relation applies to both types:
\[\sigma_{sv}=\sum_{\begin{subarray}{c}t\in[T]\\ c_{s}(v,t)=c_{s}(v)\end{subarray}}\dot{\sigma}_{s(v,t)},\quad\sigma_{sv}(v,t)= \begin{cases}\sigma_{s(v,t)}&\text{if }c_{s}(v,t)=c_{s}(v)\\ 0&\text{otherwise.}\end{cases} \tag{7}\]
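To make the counting recurrences concrete, here is a minimal Python sketch of Proposition 1 and Equation (7) for passive walks (the authors' implementation is in C++; the data layout below is our own). It assumes `dist` and `pre` have been produced by the temporal BFS described next, with `pre[v][t]` the set of predecessor temporal nodes and the sentinel `(None, None)` marking source temporal nodes, as in the initialization of Algorithm 2.

```
def count_exact_walks(pre):
    """Proposition 1: sigma_dot[(v, t)] = number of exact shortest s-(v,t)
    walks, computed by recursion over the acyclic predecessor graph."""
    sigma_dot = {}

    def rec(v, t):
        if (v, t) not in sigma_dot:
            preds = pre[v][t]
            if (None, None) in preds:      # (v, t) is a source temporal node
                sigma_dot[(v, t)] = 1
            else:                          # sum over the predecessors
                sigma_dot[(v, t)] = sum(rec(w, tp) for (w, tp) in preds)
        return sigma_dot[(v, t)]

    for v in pre:
        for t in pre[v]:
            if pre[v][t]:                  # (v, t) lies on the predecessor graph
                rec(v, t)
    return sigma_dot


def count_walks_passive(sigma_dot, dist):
    """Equation (7) for passive walks, where sigma_{s(v,t)} = sigma_dot
    by Proposition 2; returns sigma_sv[v] and sigma_svt[(v, t)]."""
    c_sv = {v: min(dist[v].values()) for v in dist}   # c_s(v) = min_t c_s(v, t)
    sigma_sv = {v: sum(sigma_dot.get((v, t), 0)
                       for t, d in dist[v].items() if d == c_sv[v])
                for v in dist}
    sigma_svt = {(v, t): (sigma_dot.get((v, t), 0) if d == c_sv[v] else 0)
                 for v in dist for t, d in dist[v].items()}
    return sigma_sv, sigma_svt
```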
We use a variant of the temporal BFS algorithm of [4]. The relaxing technique builds the shortest \(s-(v,t)\) walks, and for active walks it checks whether the extension of a walk arriving at \((w,t^{\prime})\) is also shortest to \((w,t^{\prime\prime})\) for \(t^{\prime\prime}>t^{\prime}\). The procedure is defined in Algorithm 1.
```
1: function Relax_extend(\((b,t^{\prime\prime}),pre,dist,\ell\))
2: if \(dist[b][t^{\prime\prime}]>\ell\) then
3:   \(dist[b][t^{\prime\prime}]=\ell\), \(pre[b][t^{\prime\prime}]=\{\}\)
4: function Relax(\(a,b,t,t^{\prime},dist,pre,Q,\ell,k\))
5: if \((dist[b][t^{\prime}]=\infty)\) or \((dist[b][t^{\prime}]\geq\ell\) and \(|\,pre[b][t^{\prime}]\,|=0)\) then
6:   \(dist[b][t^{\prime}]=\ell\), \(pre[b][t^{\prime}]=\{\}\)
7:   \(Q\).enqueue(\((b,t^{\prime})\))
8:   for \(t^{\prime\prime}\in\{r\,|\,\exists w,w\stackrel{{ r}}{{\rightarrow}}b\in\mathcal{E},(r>t^{\prime}\,and\,(r-t^{\prime})\leq k)\}\) do \(\triangleright\) for passive walks, ignore this loop
9:     Relax_extend(\((b,t^{\prime\prime}),pre,dist,\ell\))
10: if \(dist[b][t^{\prime}]=\ell\) then
11:   \(pre[b][t^{\prime}]\).add(\((a,t)\))
```
**Algorithm 1** Relaxing an arc \(((a,t),(b,t^{\prime}))\)
Definition 11 (Exactly reachable temporal nodes): _Let \(G=(V,\mathcal{E},T)\) be a temporal graph, fix a type of walks and a source \(s\in V\). Then:_
\[ER_{s}=\{(v,t)\,|\,(v,t)\in V\times[T],\dot{\mathcal{W}}_{s(v,t)}\neq\emptyset\}.\]
```
0: Input: \(G=(V,\mathcal{E},T)\): a temporal graph, \(s\): a node in \(G\), \(k\): maximal waiting time for restless walks or \(k=\infty\) for shortest paths.
0: Output: A dictionary \(dist\) containing the shortest values \(c_{s}(v,t)\) of temporal nodes and a dictionary \(pre\) containing the sets of predecessor temporal nodes.
1: function Temporal_BFS(\(G\), \(s\), \(k\))
2:   \(pre,dist,Q=\textsc{initialization}(G,s)\)
3:   \(Q^{\prime}=\textsc{Empty\_Queue}()\), \(\ell=1\)
4:   while \(Q\neq\emptyset\) do
5:     for \((a,t)\) in \(Q\) do
6:       for \(a\stackrel{{ t^{\prime}}}{{\rightarrow}}b\in\mathcal{E}\) such that (not (\((s=a)\) and \((t^{\prime}\neq t)\))) do
7:         if \((s=a)\,or\,((t^{\prime}\geq t)\,and\,(t^{\prime}-t)\leq k)\) then \(\triangleright\) for strict walks replace \(\geq\) by \(>\)
8:           \(\textsc{relax}(a,b,t,t^{\prime},pre,dist,Q^{\prime},\ell,k)\)
9:     \(\ell=\ell+1\)
10:    \((Q,Q^{\prime})=(Q^{\prime},\textsc{Empty\_Queue}())\)
11:   return \(pre,dist\)
1: function initialization(\(G\), \(s\))
2:   \(Q=\textsc{Empty\_Queue}()\)
3:   \(\mathrm{dist}[v]=\{t:\infty\text{ for }t\text{ in }\{0,\dots,T\}\}\quad\forall v\in V\)
4:   \(\mathrm{pre}[v]=\{t:\{\}\text{ for }t\text{ in }\{0,\dots,T\}\}\quad\forall v\in V\)
5:   for \(t\in\{t^{\prime}\,|\,\exists w\in V,s\stackrel{{ t^{\prime}}}{{\rightarrow}}w\in\mathcal{E}\}\) do
6:     \(dist[s][t]=0\), \(pre[s][t]=\{(nil,nil)\}\)
7:     \(Q.\textsc{enqueue}((s,t))\)
8:   return \(pre,dist,Q\)
```
**Algorithm 2** shortest walks from a node \(s\)
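For readers who prefer executable code, the following is a minimal Python transcription of Algorithm 2 restricted to passive non-strict shortest paths, so the extension loop of Algorithm 1 disappears; representing the temporal graph as a list of arcs `(u, v, t)` is our choice for the sketch.

```
from collections import defaultdict

def temporal_bfs_passive(arcs, s):
    """Algorithm 2 for passive non-strict shortest walks from s.
    arcs: iterable of temporal arcs (u, v, t). Returns (dist, pre)."""
    outgoing = defaultdict(list)                 # u -> list of (v, t)
    for (u, v, t) in arcs:
        outgoing[u].append((v, t))

    INF = float("inf")
    dist = defaultdict(dict)                     # dist[v][t] = c_s(v, t)
    pre = defaultdict(lambda: defaultdict(set))  # pre[v][t] = pre_s(v, t)

    Q = []
    for t in {t for (_, t) in outgoing[s]}:      # empty walks at the source
        dist[s][t] = 0
        pre[s][t] = {(None, None)}
        Q.append((s, t))

    level = 1
    while Q:
        Q_next = []
        for (a, t) in Q:
            for (b, tp) in outgoing[a]:
                if a == s and tp != t:           # first transition uses the start time
                    continue
                if a == s or tp >= t:            # non-strict; use tp > t for strict walks
                    # relax (Algorithm 1, passive case)
                    if dist[b].get(tp, INF) == INF:
                        dist[b][tp] = level
                        pre[b][tp] = set()
                        Q_next.append((b, tp))
                    if dist[b].get(tp) == level:
                        pre[b][tp].add((a, t))
        Q, level = Q_next, level + 1
    return dist, pre
```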
Proposition 3: _Algorithm Temporal_BFS solves the shortest walk problem for a temporal graph \(G=(V,\mathcal{E},T)\): for all \((v,t)\in ER_{s}\), \(dist[v][t]=c_{s}(v,t)\), \(pre[v][t]=pre_{s}(v,t)\), and \((v,t)\) is added exactly once to the queue \(Q\)._
Proof: We show that at the start of the \(i\)-th iteration of Line 4 in Algorithm 2, all exactly reachable temporal nodes \((v,t)\) with \(c_{s(v,t)}=i-1\) satisfy \(\mathrm{dist}[v][t]=c_{s(v,t)}\) and \(pre[v][t]=pre_{s}(v,t)\), are in the queue \(Q\), and have been added to the queue exactly once. The property is true just before entering the loop for the first time: only the temporal nodes \((s,t)\) with \((s,w,t)\in\mathcal{E}\) for some \(w\in V\) are in the queue, and these temporal nodes correspond to the empty walks, as stated in Remark 1. The condition (not (\((s=a)\) and \((t^{\prime}\neq t)\))) is only relevant during the first iteration of the main loop; it ensures that all walks of length \(1\) have their first appearance time equal to the time of their first transition. Suppose now that the property holds for iteration \(i-1\), and consider iteration \(i\). All temporal nodes \((v,t)\) in \(Q\) are such that \(\mathrm{dist}[v][t]=c_{s(v,t)}=i-1\), and each of them has an exact shortest \(s-(v,t)\) walk. Each one of them is scanned for its outgoing neighbors \((w,t^{\prime})\), which are relaxed using Algorithm 1.
If \((w,t^{\prime})\) was never reached before, either by an exact walk arriving at it or by an extended walk (the latter only happens for the active type), then \(dist[w][t^{\prime}]=\infty\), so \((w,t^{\prime})\) is added to the queue and \(dist[w][t^{\prime}]\) is set to \(i\). If \((w,t^{\prime})\) was reached before, then \(dist[w][t^{\prime}]<\infty\). If it was reached in a prior iteration \(j<i\), nothing happens. If it was reached exactly by a prior \((z,t^{\prime\prime})\) in the same iteration, then \((w,t^{\prime})\) is not added to the queue again: \(dist[w][t^{\prime}]\) already equals \(i\), and neither branch of the condition in Line 5 of Algorithm 1 is met. Finally, if \((w,t^{\prime})\) was reached by an extension from some \((w,t^{\prime\prime})\) with \(t^{\prime\prime}<t^{\prime}\) in the same iteration \(i\), then \(dist[w][t^{\prime}]\) was set to \(i\) and \(pre[w][t^{\prime}]=\{\}\); the condition \((dist[b][t^{\prime}]\geq\ell\) and \(|\,pre[b][t^{\prime}]\,|=0)\) is then met, so \((w,t^{\prime})\) is added to the queue, and its predecessor list subsequently has size \(>0\), ensuring that \((w,t^{\prime})\) is not added another time. In all cases \((w,t^{\prime})\) is added exactly once. By definition all exact shortest \(s-(w,t^{\prime})\) walks have the same length, and therefore all predecessors of \((w,t^{\prime})\) arise in the same iteration \(i\); since \(dist[w][t^{\prime}]=i\), they are all added to the predecessor set.
Proposition 4: _Let \(G=(V,\mathcal{E},T)\) be a temporal graph, fix a type of walks and a source \(s\in V\), then the predecessor graph \(G_{s}\) can be computed in \(O(m\,T+n\,T)\)._
Proof: As shown in Proposition 3, by using a queue each temporal node \((v,t)\in ER_{s}\) is scanned at most once by Algorithm 2. The same temporal arc \((v,w,t)\in\mathcal{E}\) can be relaxed up to \(T\) times through the calls to relax in Algorithm 2, and up to \(T\) more times through the extension loop of Algorithm 1. Remember that each temporal node in \(ER_{s}\) is added at most once to the queue. Thus the overall time complexity of Algorithm 2 is \(O(m\,T+n\,T)\).
Corollary 1: _Fix a type of walk, the quantities \(\sigma_{s(v,t)}\) and \(\delta_{sv}(v,t)\) can be computed for all temporal nodes \((v,t)\) in \(O(n\,T+m\,T)\)._
Proof: For passive walks, \(\dot{\sigma}_{s(v,t)}\) and hence \(\sigma_{s(v,t)}\) can be computed recursively from the predecessor graph. Then \(\sigma_{sz}\) and \(\sigma_{sv}(v,t)\) can be computed using Equation (7). For active walks \(\dot{\sigma}_{s(v,t)}\) can be computed recursively from the predecessor graph and then \(\sigma_{s(v,t)}\) can be computed using Equation (6). \(\sigma_{sz}\) and \(\sigma_{sv}(v,t)\) can be computed using Equation (7) as in the passive case.
Definition 12 (arc dependency): Fix a node \(s\) and a type of walks. Then \(\delta_{sz}(v,t,(v,w,t^{\prime}))\) denotes the fraction of shortest \(s-z\) walks in \(\mathcal{W}\) that go through the node appearance \((v,t)\) and then use the temporal arc \((v,w,t^{\prime})\in\mathcal{E}\).
Lemma 4: _Let \(G=(V,\mathcal{E},T)\) be a temporal graph, fix a type of walks and a source node \(s\in V\). Let \(G_{s}=(V_{s},E_{s})\) be the predecessor graph from \(s\). Let \((v,t)\) and \((w,t^{\prime})\) be two temporal nodes with \((v,t),(w,t^{\prime})\in V_{s}\) and \(((v,t),(w,t^{\prime}))\in E_{s}\). If \(\delta_{sz}(v,t,(v,w,t^{\prime}))>0\), then_
\[\delta_{sz}(v,t,(v,w,t^{\prime}))=\frac{\sigma_{s(v,t)}}{\sigma_{s(w,t^{ \prime})}}\frac{\sigma_{sz}(w,t^{\prime})}{\sigma_{sz}}.\]
Proof: For passive walks the proof corresponds to the one in [15]. For active walks the same arguments hold: the number of suffixes of shortest \(s-z\) walks starting from \((w,t^{\prime})\) and ending in \(z\) is \(\frac{\sigma_{sz}(w,t^{\prime})}{\sigma_{s,(w,t^{\prime})}}\); multiplying this quantity by \(\sigma_{s(v,t)}\) gives the total number of shortest walks going from \(s\), passing through \((v,t)\), and taking the temporal arc \((v,w,t^{\prime})\).
The main result allowing us to efficiently compute the contributions from a node \(s\) to the betweenness centrality of all others is an extension of the recurrence found by Brandes in [2]. This recurrence has been adapted to temporal graphs in [4, 15], and here we extend it further to active walks. Let \(G_{s}=(V_{s},E_{s})\) be the predecessor graph of node \(s\); we define \(before_{G_{s}}(v,t)\) to be the largest time \(t^{\prime}\) such that \(t^{\prime}\leq t\) and \((v,t^{\prime})\in V_{s}\). Then
Proposition 5 (General contribution): _Fix a node \(s\), a type of walks, then if we are considering active walks for any temporal node \((v,t)\in V\times[T]\):_
\[\delta_{s\bullet}^{act}(v,t)=\delta_{sv}^{act}(v,t)+\sum_{\begin{subarray}{c }t^{\prime\prime}:=before_{G_{s}}(v,t)\\ (w,t^{\prime})\in succ_{s}(v,t^{\prime\prime})\\ t^{\prime}\geq t\end{subarray}}\frac{\sigma_{s(v,t)}^{act}}{\sigma_{s(w,t^{ \prime})}^{act}}\delta_{s\bullet}^{act}(w,t^{\prime}), \tag{8}\]
_If we are considering passive walks then for \((v,t)\in V_{s}\)_
\[\delta_{s\bullet}^{pas}(v,t)=\delta_{sv}^{pas}(v,t)+\sum_{\begin{subarray}{c }(w,t^{\prime})\in succ_{s}(v,t)\\ t^{\prime}\geq t\end{subarray}}\frac{\sigma_{s(v,t)}^{pas}}{\sigma_{s(w,t^{ \prime})}^{pas}}\delta_{s\bullet}^{pas}(w,t^{\prime}). \tag{9}\]
Proof: The proof closely follows the one in [15], using Lemma 4. The index \(succ_{s}(v,t^{\prime\prime})\) with \(t^{\prime\prime}=before_{G_{s}}(v,t)\) in the sum reflects the fact that the walks to be considered from \((v,t)\) are all the ones that can be found in the predecessor graph as successors of the last time at or before \(t\) that is present in the predecessor graph.
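As an illustration of Eq. 9, a compact Python sketch of the passive accumulation, processing the predecessor graph \(G_{s}\) (acyclic by Lemma 2) in reverse topological order; `succ`, `sigma`, and `delta_sv` are assumed to be available from the previous steps, and the data layout is ours.

```
def contributions_passive(nodes_topo, succ, sigma, delta_sv):
    """Eq. (9): delta_{s.}(v, t) for all temporal nodes of G_s.

    nodes_topo: temporal nodes of G_s in topological order;
    succ[(v, t)]: iterable of successor temporal nodes (w, t');
    sigma[(v, t)]: sigma_{s(v,t)};  delta_sv[(v, t)]: delta_{sv}(v, t)."""
    delta = {}
    for (v, t) in reversed(nodes_topo):          # process sinks first
        acc = sum(sigma[(v, t)] / sigma[(w, tp)] * delta[(w, tp)]
                  for (w, tp) in succ.get((v, t), ())
                  if tp >= t)
        delta[(v, t)] = acc + delta_sv.get((v, t), 0.0)
    return delta
```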
Proposition 5 allows us to compute the values of \(\delta_{s\bullet}\) by recurrence for all temporal nodes, starting the recurrence from the sources of \(G_{s}\). Finally, to compute the betweenness centrality of the whole temporal graph, it suffices to sum \(\delta_{s\bullet}\) over all \(s\in V\) and apply the correction formula of Equation (3), needed to go from \(\hat{B}(v,t)\) to \(B(v,t)\). These steps are summarised in Algorithm 3. For active walks, we finally need to ensure that the general contribution is also computed for temporal nodes not lying on the predecessor graph. We discuss this in the Appendix and show that the values of these temporal nodes can be computed on the fly during the general contribution recurrence, provided that we order the predecessor graph, which adds a factor to the final complexity of active walks.
```
0: Input: \(G=(V,\mathcal{E},T)\): a temporal graph, \(k\): maximal waiting time for restless walks or \(k=\infty\) for shortest paths.
0: Output: \(B(v,t),\forall v\in V,t\in T\)
1: function Betweenness_centrality(\(G,k\))
2:   \(B(v,t)=0,\forall v\in V,t\in T\)
3:   for \(s\in V\) do
4:     \(G_{s}=\textsc{Temporal\_BFS}(G,s,k)\)
5:     \(\delta_{sv},\sigma_{s(v,t)}=\textsc{count\_walks}(G,s)\) \(\triangleright\) Apply Corollary 1
6:     \(\delta_{s\bullet},\delta_{sv}=\textsc{Contributions}(G_{s},s,\delta_{sv},\sigma_{s(v,t)})\)
7:     \(\textsc{Update\_betweenness}(B(v,t),\delta_{s\bullet},\delta_{sv})\) \(\triangleright\) Apply Equation (3)
8:   return \(B(v,t)\)
```
**Algorithm 3** Computes the values of \(B(v,t)\) for all temporal nodes
**Discussion of general contribution for temporal nodes not lying on the predecessor graph**
For passive walks, only temporal nodes lying on the predecessor graph have contributions, while this is not true in general for the active case. For instance, suppose that \((v,t)\in G_{s}\) and that \((w,t^{\prime})\in succ_{s}(v,t)\) with \(t^{\prime}>t\). Then any temporal node \((v,t^{\prime\prime})\) with \(t<t^{\prime\prime}\leq t^{\prime}\) has a contribution as well, even if \((v,t^{\prime\prime})\notin G_{s}\). For active walks, let \(G_{s}=(V_{s},E_{s})\) be the predecessor graph from \(s\) and let \(t^{\prime}=before_{G_{s}}(v,t)\). Then \(\sigma_{s(v,t)}=\sigma_{s(v,t^{\prime})}\). Indeed, we know that \(t^{\prime}\leq t\) and \((v,t^{\prime})\in V_{s}\). If \(t^{\prime}=t\) the result is immediate. If \(t^{\prime}<t\), then \((v,t)\notin V_{s}\), so there are no exact shortest \(s-(v,t)\) walks, and all the shortest \(s-(v,t)\) walks arrive at time \(t^{\prime}\). As a consequence:
\[\delta_{s\bullet}^{act}(v,t)=\delta_{sv}^{act}(v,t)+\sum_{\begin{subarray}{ c}t^{\prime\prime}:=before_{G_{s}}(v,t)\\ (w,t^{\prime})\in succ_{s}(v,t^{\prime\prime})\\ t^{\prime}\geq t\end{subarray}}\frac{\sigma_{s,(v,t^{\prime\prime})}^{act}}{ \sigma_{s,(w,t^{\prime})}^{act}}\delta_{s\bullet}^{act}(w,t^{\prime}), \tag{10}\]
This last equation ensures that, for active walks, the computation of \(\delta_{s\bullet}(v,t)\) for temporal nodes \((v,t)\notin V_{s}\) can be done on the fly while computing \(\delta_{s\bullet}(v,t^{\prime})\) with \(t^{\prime}=before_{G_{s}}(v,t)\). This requires processing the elements \((w,t^{\prime\prime})\) of the successor set of \((v,t^{\prime})\) in decreasing order of time, so that the value \(\delta_{s\bullet}(v,t)\) can be emitted before the sum has completed, namely as soon as the remaining elements \((w,t^{\prime\prime})\) have \(t^{\prime\prime}<t\).
```
0: Input: \(G=(V,\mathcal{E},T)\): a temporal graph, \(G_{s}\) the predecessor graph of \(s\)
0: Output: \(\delta_{s\bullet}(v,t),\forall v\in V,t\in T\)
1: function Contributions(\(G_{s},s,\delta_{sv},\sigma_{s(v,t)}\))
2:   \(\delta_{s\bullet}(v,t)=0,\forall v\in V,t\in T\)
3:   \(visited=\{\}\)
4:   for \((v,t)\in sources(G_{s})\) do
5:     \(\textsc{General\_rec}((v,t),\delta_{s\bullet},\delta_{sv},\sigma_{s(v,t)},visited)\)
6:   return \(\delta_{s\bullet},\delta_{sv}\)
7: function General_rec(\((v,t),\delta_{s\bullet},\delta_{sv},\sigma_{s(v,t)},visited\))
8:   if \((v,t)\) not in \(visited\) then
9:     \(su=0\)
10:    for \(t^{\prime}\in\{t^{\prime\prime}\,|\,\exists((v,t),(w,t^{\prime\prime}))\in E_{s}\}\), in decreasing order do \(\triangleright\) Ordering is necessary only for active walks
11:      for \(w\) such that \(((v,t),(w,t^{\prime}))\in E_{s}\) do
12:        \(\textsc{General\_rec}((w,t^{\prime}),\delta_{s\bullet},\delta_{sv},\sigma_{s(v,t)},visited)\)
13:        \(su=su+\dfrac{\sigma_{s(v,t)}}{\sigma_{s(w,t^{\prime})}}\delta_{s\bullet}(w,t^{\prime})\)
14:      \(\textsc{inter\_contribution}(G_{s},\delta_{s\bullet},\delta_{sv},(v,t),t^{\prime},su,visited)\) \(\triangleright\) For passive walks, ignore this call
15:    \(\delta_{s\bullet}(v,t)=su+\delta_{sv}(v,t)\)
16:    \(visited\).add(\((v,t)\))
```
**Algorithm 4** Compute the values of \(\delta_{s\bullet}(v,t)\) for a temporal graph \(G\)
Proof (Theorem 1): For active and passive walks, the total cost of the predecessor graph construction (Line 4 of Algorithm 3) is \(O((n+m)T)\) for any node \(s\in V\). We can then compute all the quantities needed for the main recurrence (Line 5). The recurrence of Proposition 5 computing all contributions from node \(s\in V\) (Line 6) takes \(O((n+m)T)\) for passive walks. For active walks, the same computations can be done, and by ordering the predecessor graph we can also compute the contributions of temporal nodes not lying on the predecessor graph; the ordering of the predecessor graph costs \(O(nT^{2})\), so the overall cost of Line 6 is \(O(nT^{2}+mT)\) for active walks. Finally, the application of the correction formula in Line 7 can be done in \(O(nT)\).
## 4 Experimental results
For our experiments we built our algorithms on top of the code of [4] in order to compare running times. We implemented our algorithms with the different possible variants. In what follows we focus on the non-strict variants of (active/passive) shortest walks, (active/passive) shortest \(k\)-restless walks with \(k\) equal to 10% of the lifetime of the graph, (passive) shortest foremost walks, and finally Brandes' algorithm run on the aggregated static graph.
Our code is available online 1. The code is written in C++. We used an Intel(R) Xeon(R) Silver \(4210R\) CPU \(2.40\)GHz without parallel processes. All datasets are publicly available on sociopatterns.org. Tables 2 and 3 give information about the datasets that we used as well as the running times of our algorithms and of the state-of-the-art algorithm of [4]. Our implementation is complementary to the one of [4]: the authors of [4] compute the overall betweenness centrality of nodes \(B(v)\) for passive shortest walks, whereas our implementation computes all the values of \(B(v,t)\), \(B(v)\), and \(B(t)\) for both active and passive variants of shortest and \(k\)-restless walks.
Footnote 1: github.com/busyweaver/code_temporal_betweenness_
### Results on the datasets
We compared the ranking correlations and the top-node intersection sizes of the active and passive variants of shortest walks with each other, and of each one with the static betweenness centrality on the aggregated graph. We also compared the shortest and shortest \(k\)-restless variants with each other. On Figure 3 we see, from the rankings and the intersection of the top 20 nodes of \(B(v)\) for several real-world datasets, that the passive betweenness centrality and the static one have a high correlation for all our datasets, always higher than the correlation between the active and the static one (last two rows). Our results suggest that if we want an approximate ranking of \(B(v)\) for passive (and to a lesser extent active) shortest walks, the static betweenness centrality gives a good approximation to it and runs much faster (100 times faster for all our datasets) than the temporal version. The comparison between the active temporal betweenness centrality and the static one shows less correlation in general. Finally, the comparison between the active and passive variants shows high correlation for \(B(v)\), while the behaviour of \(B(t)\) is largely different between the active and passive variants. We also notice that the ranking correlation and the intersection size are higher between the passive variants of shortest and shortest \(k\)-restless walks than between their active counterparts (first two rows).
\begin{table}
\begin{tabular}{l|c c c c} \hline dataset & nodes & events & edges & agg\_edges \\ \hline primaryschool & 242 & 3100 & 125773 & 16634 \\ highschool\_2012 & 180 & 11273 & 45047 & 4440 \\ highschool\_2011 & 126 & 5609 & 28539 & 3418 \\ hospital\_ward & 75 & 9453 & 32424 & 2278 \\ workplace\_2013 & 92 & 7104 & 9827 & 1510 \\ ht09 & 113 & 5246 & 20818 & 4392 \\ \hline \end{tabular}
\end{table}
Table 2: Statistics for the datasets. From left to right: number of nodes (nodes), number of times (events), number of temporal edges (edges), and number of edges of the aggregated static graph (agg\_edges).
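As a sketch of how these comparisons can be reproduced (using SciPy rather than our C++ code), given two betweenness dictionaries over the same node set:

```
from scipy.stats import kendalltau

def compare_rankings(bv_a, bv_b, k=20):
    """Kendall-tau correlation between two rankings of B(v), and the size of
    the intersection of their top-k node sets."""
    nodes = sorted(bv_a)
    tau, _ = kendalltau([bv_a[v] for v in nodes], [bv_b[v] for v in nodes])
    top_a = {v for v, _ in sorted(bv_a.items(), key=lambda x: -x[1])[:k]}
    top_b = {v for v, _ in sorted(bv_b.items(), key=lambda x: -x[1])[:k]}
    return tau, len(top_a & top_b)
```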
On Figure 4, we see that the distribution of the values of \(B(t)\) is much more concentrated around central times (those in the middle of the temporal graph's lifetime) for active shortest walks than for passive shortest walks (first two rows). In fact, this summarises our motivation for introducing the active variants, in which intermediate temporal nodes receive contributions from shortest walks as well. For passive shortest walks, important times tend to lie at the beginning of the lifetime of the graph, since many shortest walks can be formed by starting walks early in time and combining times later on. This pattern is less obvious for the shortest \(k\)-restless walks (last two rows). For passive shortest foremost paths, the most important times are concentrated at the very beginning of the lifetime of the graph (last row).
### Predicting most important nodes
In the following we assess how much of the temporal graph is necessary in order to infer information about the highest-ranked nodes, or about the total ranking of nodes. In many real-world datasets we do not have the full information of the temporal graph. Therefore, in this section, we compare the nodes with the highest betweenness values \(B(v)\) in a temporal graph \(G\) with the nodes of highest values when we only have access to the first times of the graph, which is a more realistic setting. For example, if we have access to 20% of the interactions of the graph, do the nodes with the highest betweenness correspond to those obtained on the full temporal graph? It turns out that this depends on the variant that we consider. For instance, for the passive shortest foremost variant, looking at the first 10% of the times in the graph is enough to infer most of the important nodes. In fact, this is in accordance with the last row of Figure 4, where we see that the most important times \(B(t)\) in the graphs are the ones at the beginning.
\begin{table}
\begin{tabular}{l|c|c c|c c|c|c} \hline dataset & Buß & sh act & sh pas & sh-rl act & sh-rl pas & sh-fm pas & static \\ \hline primaryschool & 970.96 & 409.84 & 1382.1 & 1402.2 & 392.19 & 673.57 & 3.0104 \\ highschool\_2012 & 212.84 & 341.18 & 289.40 & 273.22 & 367.26 & 137.96 & 0.6674 \\ highschool\_2011 & 75.215 & 96.710 & 97.160 & 96.590 & 84.870 & 37.329 & 0.2817 \\ hospital\_ward & 114.70 & 66.365 & 164.14 & 151.85 & 56.801 & 71.232 & 0.1609 \\ workplace\_2013 & 11.580 & 49.070 & 16.046 & 15.994 & 48.774 & 6.6492 & 0.0734 \\ ht09 & 45.466 & 60.094 & 61.762 & 62.188 & 61.847 & 31.455 & 0.2009 \\ \hline \end{tabular}
\end{table>
Table 3: Execution times in seconds of the implementation of [4] (Buß), Algorithm 3 for active shortest (sh act), passive shortest (sh pas), active shortest \(k\)-restless (sh-rl act), passive shortest \(k\)-restless (sh-rl pas), and passive shortest foremost (sh-fm pas) walks, and Brandes' algorithm on the aggregated static graph (static).
More formally, we define the temporal graph containing the first times of a temporal graph \(G\).
Definition 13 (The \(\mu\)-temporal graph of \(G\)): Let \(G=(V,\mathcal{E},T)\) be a temporal graph and let \(\mu\in[0,1]\); then \(G^{\leq\mu}=(V,\mathcal{E}^{\prime},\mu\,T)\) with \(\mathcal{E}^{\prime}=\{(u,v,t)\,|\,(u,v,t)\in\mathcal{E},t\leq\mu\,T\}\).
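Definition 13 amounts to a simple filter on the arc list. The sketch below also includes the skeleton of the experiment shown in the next figures, with `betweenness` standing in as a placeholder for any of the variants above:

```
def mu_temporal_graph(arcs, T, mu):
    """Definition 13: keep only the arcs occurring no later than mu * T."""
    horizon = mu * T
    return [(u, v, t) for (u, v, t) in arcs if t <= horizon], horizon

def top_k_recovery(arcs, T, betweenness, mus, k=10):
    """For each mu, the number of top-k nodes of B(v) on G that are already
    among the top-k on the mu-temporal graph (the quantity of Figure 5)."""
    def top(bv):
        return {v for v, _ in sorted(bv.items(), key=lambda x: -x[1])[:k]}
    full_top = top(betweenness(arcs, T))
    result = {}
    for mu in mus:
        sub_arcs, horizon = mu_temporal_graph(arcs, T, mu)
        result[mu] = len(top(betweenness(sub_arcs, horizon)) & full_top)
    return result
```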
Our results are summarized in Figure 5 and Figure 6. In these figures, for each dataset and each shortest-walk variant considered, we give a curve representing the accuracy of the most important nodes when we only look at a small proportion of the times in the graph. Figure 5 shows the intersection size of the top 10 nodes in terms of \(B(v)\) for the temporal graph \(G\) and for \(G^{\leq\mu}\), with \(\mu\in[0,1]\) on the x-axis. A point \((\alpha,r)\) on the curve means that there are \(r\) common nodes in the top 10 nodes between \(G\) and \(G^{\leq\alpha}\). We see that for the passive shortest foremost variant the curve grows fast, meaning that the top 10 nodes are recovered very accurately by looking at the first 10% to 20% of the interactions. For the other variants, the number of top nodes recovered grows approximately linearly as we consider more interactions. It is interesting to note that the same pattern still holds if we consider the overall ranking of nodes, as in Figure 6. All these findings are in agreement with Figure 4, as stated before. For passive shortest foremost walks, the most important times \(B(t)\) in the graphs are the ones at the beginning; therefore we can infer the final top nodes and ranking by looking at the first time interactions, while the time importance is more spread out and centered in the other variants, suggesting that we need to consider more interactions.
Figure 3: Heatmap of betweenness centrality of \(B(v)\) comparisons of the datasets. (left) Kendall-tau rank correlation rankings and (right) intersection rate of the top 10 nodes. In the figure act stands for active shortest, pas for passive shortest, sh-rl_act active shortest restless, sh-rl_pas for passive shortest restless, sh-fm for passive shortest foremost and stat for the static betweenness centrality on the aggregated graph.
Figure 4: Distribution of time centrality values \(B(t)\) for the datasets. Each column represents a dataset. (1st row) Figures correspond to the distribution of \(B(t)\) for passive shortest \(k\)-restless, (2nd row) active shortest \(k\)-restless, (3rd row) passive shortest, (4th row) active shortest and (5th row) passive shortest foremost. The x-axis represents the renormalized life time of the temporal graph and the y-axis represents the values of \(B(t)\) grouped into 20 bars. The upper value on the y-axis corresponds to the cumulative sum of \(B(t)\) on the highest bar divided by 100.
Figure 5: (1st row) Figures correspond to the proportion of top 10 nodes for passive shortest \(k\)-restless, (2nd row) active shortest \(k\)-restless, (3rd row) passive shortest, (4th row) active shortest and (5th row) passive shortest foremost. The x-axis represents the renormalized life time of the temporal graph and the y-axis the size of the intersection set. A point \((\alpha,r)\) on the curve implies that there are \(r\) common nodes in the top 10 nodes between \(G\) and \(G^{\leq\alpha}\).
Figure 6: (1st row) Figures correspond to the proportion of top 10 nodes for passive shortest \(k\)-restless, (2nd row) active shortest \(k\)-restless, (3rd row) passive shortest, (4th row) active shortest and (5th row) passive shortest foremost. The x-axis represents the renormalized life time of the temporal graph and the y-axis the Kendall-tau correlation coefficient. A point \((\alpha,\beta)\) on the curve implies that the correlations between the rankings of \(B(v)\) between \(G\) and \(G^{\leq\alpha}\) is \(\beta\).
## 5 Conclusion
Our results improve the theoretical time analysis of previously known methods to compute the temporal betweenness centrality for shortest-path variants. It would be interesting to know whether these results can be improved further, or whether known complexity results on static betweenness centrality such as [1] can be extended to the temporal case.
Another direction is to look for guaranteed approximations of the temporal betweenness centrality, which have recently begun to be studied in [16, 6].
Finally, the temporal betweenness centrality has been defined in different time-dependent formalisms such as Stream Graphs [11, 17], which allow for continuous time, and it would be interesting to find out whether the same kind of results hold in that setting as well.
|
2310.05050 | Universality of the Neutrino Collisional Flavor Instability in Core
Collapse Supernovae | Neutrinos are known to undergo flavor conversion processes among the three
flavors. The fast flavor conversion (FFC) has been the central piece of flavor
conversions taking place in core-collapse supernovae (CCSNe) due to its shorter
timescale to the completion of flavor conversion compared to other types of
flavor conversion. Although the ordinary collisions between neutrinos and
matter were once thought to decohere neutrinos and thus damp flavor
conversions, it was recently realized that they can also induce the flavor
conversion. The linear analysis showed that the so-called collisional flavor
instability or CFI occurs in the absence of FFC. In this paper, we investigate
if CFI takes place in the post-bounce core of CCSNe, using the results of
spherically symmetric Boltzmann simulations of CCSNe for four progenitor models
with different masses. We also provide a necessary (but not sufficient)
condition of matter properties for the occurrence of CFI in optically thick and
semi-transparent regions; baryon mass density ($\rho$), electron fraction
($Y_e$), and the degeneracy of electron-type neutrinos ($\eta_{\nu_e}$) need to
be $10^{10} {\rm g/cm^3} \lesssim \rho \lesssim 10^{12} {\rm g/cm^3}$,
$Y_e\lesssim 0.4$, and $\eta_{\nu_e} \lesssim 0.5$, respectively. This
condition allows us to easily locate the place of possible CFI occurrence
without detailed stability analyses, which is useful for analyzing CFI in CCSN
models phenomenologically | Jiabao Liu, Hiroki Nagakura, Ryuichiro Akaho, Akira Ito, Masamichi Zaizen, Shoichi Yamada | 2023-10-08T07:21:14Z | http://arxiv.org/abs/2310.05050v2 | # Universality of the Neutrino Collisional Flavor Instability in Core Collapse Supernovae
###### Abstract
Neutrinos are known to undergo flavor conversion processes among the three flavors. The fast flavor conversion (FFC) has been the central piece of flavor conversions taking place in core-collapse supernovae (CCSNe) due to its shorter timescale to the completion of flavor conversion compared to other types of flavor conversion. Although the ordinary collisions between neutrinos and matter were once thought to decohere neutrinos and thus damp flavor conversions, it was recently realized that they can also induce flavor conversion. The linear analysis showed that the so-called collisional flavor instability or CFI occurs in the absence of FFC. In this paper, we investigate if CFI takes place in the post-bounce core of CCSNe, using the results of spherically symmetric Boltzmann simulations of CCSNe for four progenitor models with different masses. We also provide an empirical correlation between matter properties and the occurrence of CFI in optically thick and semi-transparent regions; baryon mass density (\(\rho\)), electron fraction (\(Y_{e}\)), and the degeneracy of electron-type neutrinos (\(\eta_{\nu_{e}}\)) need to be \(10^{10}\mathrm{g/cm^{3}}\lesssim\rho\lesssim 10^{12}\mathrm{g/cm^{3}}\), \(Y_{e}\lesssim 0.4\), and \(\eta_{\nu_{e}}\lesssim 0.5\), respectively. This condition allows us to easily locate the places of possible CFI occurrence without detailed stability analyses, which is useful for analyzing CFI in CCSN models phenomenologically.
## I Introduction
A star of mass \(\gtrsim 10\,M_{\odot}\) undergoes a catastrophic gravitational collapse in the inner iron core when the matter density and temperature become high enough to trigger electron captures or photodissociations of heavy nuclei, marking the onset of a core-collapse supernova (CCSN). During the gravitational collapse and the early stage of the post-bounce phase (\(\lesssim 30\) ms after core bounce), electron-type neutrinos (\(\nu_{e}\)) are produced abundantly through charged-current processes. They are the dominant carriers of energy and lepton number from the core to the outside of the star. At \(t\gtrsim 30\) ms, other species of neutrinos, electron-type anti-neutrinos (\(\bar{\nu}_{e}\)) and \(\mu\)- and \(\tau\) neutrinos and anti-neutrinos (collectively denoted as \(\nu_{x}\)), are also produced and escape from the hot and dense proto-neutron star (PNS) surface. The neutrino radiation from the CCSN core continues to cool down the PNS, dictating the thermal and chemical evolutions towards the cold neutron star.
The neutrinos emitted from the PNS surface can interact with the intervening matter before escaping from the post-shock region. Some of the \(\nu_{e}\) and \(\bar{\nu}_{e}\) are absorbed by nucleons via charged-current reactions, which transfers the neutrino energy to matter; this neutrino reheating revitalizes the stagnated shock wave and is further enhanced by multi-dimensional fluid instabilities. According to recent detailed CCSN simulations, shock revival has been observed rather commonly for a wide range of progenitor masses; see, e.g., ([1; 2; 3; 4; 5; 6; 7]). Nowadays, a consensus is emerging among different CCSN groups that the delayed neutrino heating mechanism accounts for the majority of CCSN explosions.
One thing we should mention, however, is that there remain many unresolved issues in both micro- and macroscopic physical processes even in up-to-date multi-dimensional models. One of the greatest uncertainties in any modeling of CCSNe is the neutrino quantum kinetics, including flavor conversions. When neutrinos are dense, which is typically the case in the core region of CCSNe, neutrino self-interactions can induce collective flavor oscillations [8]. Since the self-interactions are intrinsically non-linear processes, the resultant flavor conversion has features distinct from those in vacuum and/or in matter. Fast neutrino-flavor conversion or FFC is an example [9; 10; 11; 12; 13; 14; 15; 16] and is currently attracting much attention. It can evolve on a timescale of picoseconds in the CCSN core, which is much shorter than any relevant timescale of other physical processes, exhibiting the potential to greatly change the neutrino radiation field. It is also worth noting that multi-dimensional fluid instabilities facilitate the occurrence of FFC in the post-shock region. In fact, whereas angular crossings in the neutrino
flavor-lepton-number (NFLN), corresponding to the necessary and sufficient condition for FFC (see, e.g., [17]), are unlikely to appear in spherically symmetric CCSN models [18], they have been observed rather commonly in multi-dimensional ones [19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29]. This has motivated the detailed studies of non-linear evolution of FFC [30; 31; 32; 33; 34; 35; 36; 37; 38; 39], including their interplay with neutrino-matter collisions [40; 41; 42; 43; 44; 45; 46; 47; 48; 49]. Some recent studies also demonstrate that neutrino radiation fields with FFC in the CCSN core are qualitatively different from those obtained from the classical neutrino transport [50; 51; 52], indicating that fluid dynamics, nucleosynthesis, and neutrino signals could be significantly impacted by FFC.
There is a caveat to the occurrence of FFC in the optically thick region, though. The angular distributions of \(\nu_{e}\) and \(\bar{\nu}_{e}\) are both almost isotropic in this region, and NFLN angular crossings hardly occur unless the number densities of \(\nu_{e}\) and \(\bar{\nu}_{e}\) are very close to each other. According to the recent studies by [21; 23], the convective region in the PNS can offer a possibility, having almost the same \(\nu_{e}\) and \(\bar{\nu}_{e}\) number densities. However, these regions fluctuate violently in time with the dynamical timescale of the PNS (see, e.g., [53]), and more importantly, they are usually very narrow, which may limit the impact of FFC on CCSN dynamics.
Johns [54] recently pointed out that flavor conversions can be driven by the disparity in collision rates between different neutrino flavors and called this the collisional flavor instability or CFI. Its properties in both the linear and non-linear regimes have been investigated from many different points of view [48; 55; 56; 57; 58; 59; 60]. There are two noticeable features of CFI: (1) the timescale of CFI becomes shorter with increasing neutrino-matter interactions as well as with the number density of neutrinos; (2) the instability can take place even if the neutrino distributions are isotropic. These properties indicate the possibility that CFI occurs widely in the optically thick region, which is a clear advantage over FFC. In fact, the authors in [57] investigated both the linear and non-linear properties of CFI for a few given CCSN fluid background snapshots. They found that CFIs can occur ubiquitously (including in optically thick regions), and the resultant neutrino radiation field is very different from that modeled by classical neutrino transport.
On the other hand, the authors in [61] recently carried out similar simulations of classical neutrino transport for a fluid profile taken from a CCSN model in [62]. Their results are, however, not consistent with those in [57]. The CFI regions in their simulations are much narrower than those in [57], and CFI is overwhelmed by FFC and even by the slow mode (another type of collective neutrino oscillation; see, e.g., [11]). There are many potential sources of the differences between the two works: for instance, the neutrino radiation field is derived with a multi-energy-group treatment in [57], while it is obtained with the gray approximation in [61]; the hydrodynamical background employed in [57] is taken from an 18 \(M_{\odot}\) CCSN model with muons in [63], while that in [61] is extracted from an 18.6 \(M_{\odot}\) CCSN model without muons; and while both works consider spherically symmetric CCSN models, [61] in addition employs a mixing-length scheme generating multi-dimensional convection effects favoring FFC. As such, each work has its own pros and cons, hampering us from drawing robust conclusions about the occurrence of CFI in CCSNe.
In this study, we attempt to settle the dispute by carrying out a systematic study of the occurrence of CFI with modern spherically symmetric CCSN models derived with full Boltzmann neutrino transport. We also pay attention to the growth rate of CFI when it is detected. We employ four different progenitor models, 11.2, 15, 27, and 40 \(M_{\odot}\), and we explore the occurrence of CFI from core bounce to the late post-bounce phase (\(>400\) ms) by sampling a matter profile every \(\sim 1\) ms. This study is the first comprehensive survey of the possible occurrence of CFI in CCSNe, which has the ability to answer some intriguing questions: how common or rare is CFI?; when does CFI first appear in the post-bounce phase?; are there any progenitor-dependent features or universality?; does the unstable region appear persistently or intermittently? We also attempt to find correlations between the occurrence of CFI and some matter properties, which potentially allows us to assess the possibility of CFI easily without conducting a detailed linear stability analysis. As shall be discussed below, however, the non-local effects of heavy-leptonic neutrinos tend to smear out the correlation, and consequently our analysis can provide only an empirical condition for the occurrence of CFI. It should also be mentioned that this work will be a stepping stone for our forthcoming work exploring CFI in multi-dimensional CCSN models (Akaho et al. in prep).
This paper is organized as follows. In Sec. II, we describe our method of linear stability analysis for CFI. After summarizing our CCSN models briefly in Sec. III, we present our results in Sec. IV. Finally, we conclude the present study with future prospects in Sec. V. Throughout this paper, we use the metric signature \(+---\). Unless otherwise stated, natural units \(c=\hbar=1\) are employed, where \(c\) and \(\hbar\) are the speed of light and the reduced Planck constant, respectively, when we discuss the governing equations for CFI (in Sec. II).
## II Linear stability analysis of collisional flavor instability
The most straightforward way to assess the occurrence of CFI is to solve the dispersion relation obtained by linearizing the quantum kinetic equation. In our previous paper [59], we developed a numerical scheme to solve the dispersion relation and provided analytic formulae to quantify the growth rates of CFI approximately. These analytic formulae reduce computational costs and allow us to carry out a systematic survey of the occurrence of CFI in the post-bounce phase for multiple progenitor
models. In this section, we summarize the essence of our method with the description of some approximations and assumptions to derive the analytic formulae. We also refer readers to our previous paper [59] for more complete explanations of our stability analysis.
In this study, we work in the two flavor framework, which gives the same dispersion relation as that in the three flavor one, for the case with \(\nu_{\mu}=\nu_{\tau}\). This is consistent with our CCSN models, in which all heavy leptonic neutrinos are assumed to be identical, and they are collectively denoted by \(\nu_{x}\). We express the neutrino quantum kinetics in terms of the neutrino flavor density matrix,
\[\rho(x,P)=\left(\begin{array}{cc}f_{\nu_{e}}&S_{ex}\\ S_{xe}&f_{\nu_{x}}\end{array}\right), \tag{1}\]
where the arguments \(x\) and \(P=(E,\mathbf{v})\) are the spacetime position and the 4-momentum of neutrinos, respectively; \(E\) and \(\mathbf{v}\) denote the neutrino energy and velocity, respectively, and the four velocity of neutrinos is \(v\equiv(1,\mathbf{v})\) in the relativistic limit. For convenience, the flavor isospin convention is used hereafter, so that negative energy quantities are meant for antineutrinos: \(\rho(E)=-\bar{\rho}(-E)\) for \(E<0\). The time evolution of the neutrino flavor density matrix is governed by the quantum kinetic equation
\[\mathrm{i}v\cdot\partial\rho=[H,\rho]+\mathrm{i}C, \tag{2}\]
where \(H\) represents the neutrino oscillation Hamiltonian and \(C\) is the collision term. The Hamiltonian has the vacuum (\(H_{\mathrm{vac}}\)), matter (\(H_{\mathrm{mat}}\)), and neutrino self-interaction (\(H_{\nu}\)) contributions, which can be expressed as
\[\begin{split}& H=H_{\mathrm{vac}}+H_{\mathrm{mat}}+H_{\nu},\\ & H_{\mathrm{vac}}(x,P)=\frac{M^{2}}{2E},\\ & H_{\mathrm{mat}}(x,P)=\sqrt{2}G_{\mathrm{F}}v\cdot\mathrm{diag}( j_{e}(x),j_{x}(x)),\\ & H_{\nu}(x,P)=\sqrt{2}G_{\mathrm{F}}v\cdot\int\mathrm{d}P^{\prime} \rho(x,P^{\prime})v^{\prime},\end{split} \tag{3}\]
where \(M^{2}\) is the neutrino mass-squared matrix; \(j_{\alpha}(x)\) is the lepton number 4-current of the charged lepton species \(\alpha\); the integral over 4-momentum is abbreviated as
\[\int dP=\int_{-\infty}^{\infty}\frac{E^{2}dE}{2\pi^{2}}\int\frac{d\mathbf{v}}{ 4\pi}. \tag{4}\]
The collision term \(C\) is written in the relaxation approximation as
\[C(x,P)=\frac{1}{2}\{\mathrm{diag}(\Gamma_{\nu_{e}}(x,P),\Gamma_{\nu_{x}}(x,P)),\rho_{\mathrm{eq}}-\rho\}, \tag{5}\]
where the curly bracket denotes the anti-commutator; \(\Gamma_{\nu_{\alpha}}(x,P)\) is the collision rate for the neutrino of flavor \(\alpha\); \(\rho_{\mathrm{eq}}\) denotes the density matrix for the equilibrium state of the collisional processes. In this study, we take into account all emission and absorption interactions employed in our CCSN models (see Sec. III for more details), but scattering processes are neglected. This assumption is in line with our approach of deriving analytic formulae to estimate the growth rates of CFI (see below for more details). It should be mentioned that pair processes, e.g., electron-positron pair creation, nucleon-nucleon bremsstrahlung, and their inverse processes, cannot be written in the form of Eq. 5, and the exact expression involves neutrino-momentum integrals (see [64]). Handling them exactly is hence computationally expensive, so we utilize an approximate treatment here. Our CCSN models provide the total absorption opacities for each energy and species of neutrinos, in which the pair processes are also included. We estimate \(\Gamma\) from them for each species of neutrinos, assuming that the collision term has the same form as Eq. 5. Although this prescription is pragmatic, it is a reasonable approximation not only in the optically thick but also in the semi-transparent regions where CFI would play an important role. We also note that the pair processes contribute to the collision term of \(\nu_{x}\), indicating that \(\Gamma_{\nu_{x}}\) is nonzero.
Assuming \(|S_{i}|\ll f_{\nu_{j}}\), the off-diagonal element of Eq. 2 can be linearized as
\[\begin{split}& v\cdot(\mathrm{i}\partial-\Lambda_{0e}+\Lambda_{0x})S _{ex}\\ &+(f_{\nu_{e}}-f_{\nu_{x}})\sqrt{2}G_{\mathrm{F}}\int\mathrm{d}P^ {\prime}v\cdot v^{\prime}S_{ex}(P^{\prime})\\ &+\frac{1}{2E}\sum_{\zeta=e,x}(M_{e\zeta}^{2}S_{\zeta x}-S_{e \zeta}M_{\zeta x}^{2})+\mathrm{i}\Gamma_{ex}S_{ex}=0,\end{split} \tag{6}\]
where \(\Lambda_{0\zeta}=\sqrt{2}G_{\mathrm{F}}[j_{\zeta}(x)+\int\mathrm{d}P\,f_{\nu_{\zeta}}(x,P)v]\) for \(\zeta=e,x\), and \(\Gamma_{ex}(E)=\left[\Gamma_{\nu_{e}}(E)+\Gamma_{\nu_{x}}(E)\right]/2\). We then take a plane-wave ansatz as
\[S_{ex(xe)}(x,P)=S_{ex(xe)}(k,P)e^{\mathrm{i}k\cdot x}, \tag{7}\]
where \(k=(\omega,\mathbf{k})\) is the 4-wavevector. By inserting Eq. 7 into Eq. 6, the general form of dispersion relation can be obtained. However, it is computationally demanding to solve the dispersion relation, while we can obtain simple analytic formulae by approximating Eq. 6.
First, we set \(H_{\mathrm{vac}}=H_{\mathrm{mat}}=0\). This condition is in accordance with the purpose of this study, namely to quantify the growth rates of the pure CFI mode. Next, we only focus on the so-called \(\mathbf{k}=\mathbf{0}\) mode, which usually offers the maximum growth rate of CFI [59]. Third, we apply the stability analysis to angle-integrated neutrino distributions in momentum space. This prescription is motivated by the fact that the anisotropy of neutrino distributions plays a subdominant role for CFI [59]. This approximation is also in line with our treatment of the collision term, in which the scattering processes are not included: since in- and out-scatterings exactly cancel if the neutrino angular distributions are isotropic, these processes can be safely ignored. Finally, we use a monochromatic assumption. As discussed in our previous paper [59], the growth rate of CFI for a non-monochromatic energy distribution is almost identical to the one for the monochromatic distribution, if we substitute the number densities and the mean collision rates of the former for the counterparts in the latter.
Given these conditions, we can solve the dispersion relation analytically, and the solutions can be written as
\[\omega_{\pm}=-A-\mathrm{i}\gamma\pm\sqrt{A^{2}-\alpha^{2}+\mathrm{i}2G\alpha}, \tag{8}\]
for the isotropy-preserving modes and
\[\omega_{\pm}=\frac{A}{3}-\mathrm{i}\gamma\pm\sqrt{\left(\frac{A}{3}\right)^{2}-\alpha^{2}-\mathrm{i}\frac{2}{3}G\alpha}, \tag{9}\]
for the isotropy-breaking modes (see [59] to derive these formulae). In the above equations, the following notations are introduced:
\[G=\frac{\mathfrak{g}+\bar{\mathfrak{g}}}{2},\ A=\frac{\mathfrak{g}-\bar{ \mathfrak{g}}}{2},\ \gamma=\frac{\Gamma+\bar{\Gamma}}{2},\ \alpha=\frac{\Gamma-\bar{\Gamma}}{2}, \tag{10}\]
where \(\mathfrak{g}=\sqrt{2}G_{\mathrm{F}}\left(n_{\nu_{e}}-n_{\nu_{x}}\right)\) and \(\Gamma=(\Gamma_{\nu_{e}}+\Gamma_{\nu_{x}})/2\). The same applies to the barred quantities for antineutrinos. The number densities of neutrinos and the mean collision rates are computed by
\[\begin{split}& n_{\nu_{i}}=\int dP\,f_{\nu_{i}}(P),\\ &\Gamma_{\nu_{i}}=\langle\Gamma\rangle_{\nu_{i}}=\frac{1}{n_{\nu_{i}}}\int dP\,\Gamma_{ex}(P)f_{\nu_{i}}(P).\end{split} \tag{11}\]
In this paper, we use Eqs. 8 and 9 to estimate the growth rate of CFI. We note that the maximum growth rate is usually given from the isotropy-preserving branch.
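In practice, the analysis reduces to a few lines of arithmetic per spatial grid point. Below is a minimal Python sketch (array names and the trapezoidal quadrature are our own choices, not the production code) that evaluates the moments of Eq. 11 and the roots of Eqs. 8 and 9 in the notation of Eq. 10; the maximum growth rate is then the largest imaginary part over the four roots.

```
import numpy as np

def moments(E, f, Gamma_ex):
    """Eq. (11) for one species: number density n_nu and mean rate <Gamma>.
    E: energy grid; f: isotropic distribution function on that grid;
    Gamma_ex: the rate (Gamma_nu_e + Gamma_nu_x)/2 on the same grid."""
    w = E**2 / (2.0 * np.pi**2)                  # measure E^2 dE / (2 pi^2)
    n = np.trapz(w * f, E)
    return n, np.trapz(w * f * Gamma_ex, E) / n

def cfi_roots(g, gbar, Gam, Gambar):
    """Roots of Eqs. (8) and (9) with the notation of Eq. (10).
    g, gbar: sqrt(2) G_F (n_nu_e - n_nu_x) for neutrinos / antineutrinos;
    Gam, Gambar: (Gamma_nu_e + Gamma_nu_x)/2 for neutrinos / antineutrinos."""
    G, A = 0.5 * (g + gbar), 0.5 * (g - gbar)
    gam, alp = 0.5 * (Gam + Gambar), 0.5 * (Gam - Gambar)
    rad_p = np.sqrt(A**2 - alp**2 + 2j * G * alp + 0j)              # Eq. (8)
    rad_b = np.sqrt((A / 3)**2 - alp**2 - (2j / 3) * G * alp + 0j)  # Eq. (9)
    iso_preserving = (-A - 1j * gam + rad_p, -A - 1j * gam - rad_p)
    iso_breaking = (A / 3 - 1j * gam + rad_b, A / 3 - 1j * gam - rad_b)
    return iso_preserving, iso_breaking
```

The maximum growth rate plotted below is then `max(w.imag for pair in cfi_roots(...) for w in pair)`.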
It is noteworthy that flavor conversions associated with neutrino self-interactions play important roles in the CCSN dynamics only if they overwhelm the collision rate. In other words, regions where the inequality \(\mathfrak{g}_{\nu_{i}}\gg\Gamma_{\nu_{i}}\) is satisfied are of interest here. Assuming the inequality, Eqs. 8 and 9 can be rewritten in a more concise form,
\[\max\left[\mathrm{Im}\,\omega\right]=\begin{cases}-\gamma+\frac{|G\alpha|}{| A|},&\text{if }A^{2}\gg|G\alpha|,\\ -\gamma+\sqrt{|G\alpha|},&\text{if }A^{2}\ll|G\alpha|,\end{cases} \tag{12}\]
for the isotropy-preserving branch and
\[\max\left[\mathrm{Im}\,\omega\right]=\begin{cases}-\gamma+\frac{|G\alpha|}{| A|},&\text{if }A^{2}\gg|G\alpha|,\\ -\gamma+\frac{\sqrt{|G\alpha|}}{\sqrt{3}},&\text{if }A^{2}\ll|G\alpha|, \end{cases} \tag{13}\]
for the isotropy-breaking branch. It should also be mentioned that although the obtained growth rates are not exactly the same as those obtained by directly solving the dispersion relation, we confirm that the error is within a factor of order unity even in extreme cases. These concise expressions also help us to see whether the resonance-like CFI occurs in our models [58; 59], which will be discussed in Sec. IV.2.
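Continuing the sketch above, the approximate maxima of Eqs. 12 and 13, valid when \(\mathfrak{g}\gg\Gamma\) and with \(A^{2}\ll|G\alpha|\) being the resonance-like regime, read:

```
import math

def max_growth_rate_approx(g, gbar, Gam, Gambar):
    """Eqs. (12)-(13): approximate max Im(omega) over both branches,
    assuming |g| >> Gamma."""
    G, A = 0.5 * (g + gbar), 0.5 * (g - gbar)
    gam, alp = 0.5 * (Gam + Gambar), 0.5 * (Gam - Gambar)
    x = abs(G * alp)
    if A**2 > x:                     # non-resonant regime (both branches agree)
        return -gam + x / abs(A)
    # resonance-like regime: the isotropy-preserving branch dominates,
    # since sqrt(x) > sqrt(x) / sqrt(3)
    return -gam + math.sqrt(x)
```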
## III CCSN models
As described in Sec. II, CFI hinges not only on the neutrino distributions but also on the collision rates, suggesting that an accurate radiation-hydrodynamic modeling of CCSNe is required for a robust and reliable analysis of CFI. We utilize our up-to-date CCSN models, which provide all the data necessary for our stability analysis: matter and neutrino distributions and collision rates. Below, we briefly summarize our CCSN models.
Details on our CCSN code can be found in [65; 66; 67]. Although it has the ability to perform multi-dimensional simulations (see a series of our previous papers: [68; 69; 70; 4; 71]), the models employed in this study are computed under spherical symmetry with full Boltzmann neutrino transport.

## IV Results

Figure 1 portrays the maximum growth rate of CFI as a function of time (measured from core bounce) and radius, while different panels distinguish CCSN models. To guide the eye, we show the shock radius by the red solid line, the gain radius by the purple solid line, and the isodensity radii of \(\rho=10^{10},\ 10^{11},\ 10^{12}\), and \(10^{13}\,\mathrm{g\,cm^{-3}}\) (\(\rho\) denotes the mass density of baryons) as the dashed red lines. As another remark, we find that CFI can become unstable in the pre-shock regions. However, the growth rate there is too small (\(\lesssim 10^{-9}\,\mathrm{cm^{-1}}\)) to have any impact on CCSN dynamics. For this reason, we set a minimum value of the growth rate at \(10^{-9}\,\mathrm{cm^{-1}}\) in these plots.
As shown in this figure, CFI is commonly detected in the region of \(10^{10}\,\mathrm{g\,cm^{-3}}\lesssim\rho\lesssim 10^{12}\,\mathrm{g\,cm^{-3}}\) and persists rather stably regardless of progenitors, suggesting that CFI will take place universally in post-shock regions. We also find that CFI can occur from the early phase (\(\sim 20\,\mathrm{ms}\)) and subsequently expands its region with time in both inward and outward radial directions. In the late phase, however, the CFI region with growth rates \(>10^{-9}\,\mathrm{cm^{-1}}\) shrinks with time. One might think that this is due to the failure of shock revival in our CCSN models, i.e., an artifact of spherical symmetry. It should be noted, however, that CFI is dictated by neutrino-matter interactions in addition to the neutrino distributions, and hence the time evolution of the CFI region is associated with matter distributions. As observed also in multi-dimensional successful explosion models, the CCSN core shrinks with time to form a neutron star, suggesting that the contraction of the CFI region in the late phase is mainly a consequence of this shrinkage of the core and will be generic even in multi-dimensional models.

Figure 1: Radius-time diagram for the maximum growth rate of CFI (\(\max\left[\mathrm{Im}\,\omega\right]\)) estimated from Eqs. 8 and 9 for progenitors of 11.2 (panel a), 15 (panel b), 27 (panel c), and 40 \(M_{\odot}\) (panel d). In each panel, red and purple solid lines portray time trajectories of the shock and gain radius, respectively. The red dashed lines trace the isodensity radii of mass density: \(10^{13},\ 10^{12},\ 10^{11},\ \mathrm{and}\ 10^{10}\,\mathrm{g\,cm^{-3}}\).
We also find that CFI is not detected in very optically thick regions (\(\rho\gtrsim 10^{13}\,\mathrm{g\,cm^{-3}}\)). The absence of CFI inside the PNS (\(r\lesssim 15\) km) is rather obvious, since \(\nu_{e}\) is highly degenerate, leading to an extreme disparity between the number densities of \(\nu_{e}\) and \(\bar{\nu}_{e}\). In addition to this, \(\nu_{x}\) is also much less populated than \(\nu_{e}\) due to the low matter temperatures inside the PNS. This implies that \(A\) becomes nearly equal to \(G\), resulting in the suppression of CFI (see Eqs. 12 and 13). On the other hand, the matter temperature is higher in the regions of \(r\gtrsim 15\) km (since they experienced shock heating), indicating that both \(\bar{\nu}_{e}\) and \(\nu_{x}\) can be populated. As shown below, \(\nu_{x}\) plays an important role in suppressing CFI in the region of \(\rho\gtrsim 10^{13}\,\mathrm{g\,cm^{-3}}\).
In the following discussion, we focus on the \(40M_{\odot}\) model, since we confirm that the other progenitor models show the same trend. Fig. 2 displays a 2D color map for the ratio of \(G\) to \(A\). One noticeable feature in this figure is that \(|G/A|\) can be smaller than unity in the inner region (colored blue). This is a clear indication that \(\nu_{x}\) becomes abundant. We note that \(|G/A|\) should be higher than unity if there were no \(\nu_{x}\) and their anti-partners (see Eq. 10). We also note that \(G\) becomes smaller when \(\nu_{x}\) appears, whereas \(A\) remains constant under the condition of \(\nu_{x}=\bar{\nu}_{x}\). As shown in Fig. 2, the \(|G/A|\) distributions have a clear correlation with the \(n_{\nu_{x}}\) distribution.

Figure 2: Same as Fig. 1 but for \(G/A\) (panel a) and \(\sqrt{2}G_{\mathrm{F}}n_{\nu_{x}}\) (panel b). In this figure, we only focus on results of the \(40M_{\odot}\) model. The dashed black line in each figure corresponds to the boundary of CFI.

Figure 3: Same as Fig. 2 but for \(G/G_{0}\) (panel a) and the maximum growth rate of CFI estimated from Eqs. 8 and 9 with \(G_{0}\) (\(n_{\nu_{x}}\) is assumed to be zero); see the text for more details.
Let us corroborate our claim that CFI is suppressed by \(\nu_{x}\) in the optically thick region. In panel (a) of Fig. 3, we portray a color map of \(G/G_{0}\), where \(G_{0}\) represents \(G\) with \(n_{\nu_{x}}\) assumed to be zero. The deviation of \(G/G_{0}\) from unity hence corresponds to the contribution of \(\nu_{x}\) to \(G\). As clearly seen in the figure, the ratio in the region at \(\rho\gtrsim 10^{13}\,\mathrm{g\,cm^{-3}}\) with \(r\gtrsim 15\) km is remarkably smaller than unity, indicating that \(n_{\nu_{x}}\) plays an important role. To strengthen this discussion, we carry out the same stability analysis of CFI by replacing \(G\) with \(G_{0}\). The resulting growth rate is shown in panel (b) of Fig. 3. As clearly shown in the figure, the inner boundary of the CFI region is located at much smaller radii than in the case with \(\nu_{x}\neq 0\) (see panel d in Fig. 1). This provides strong evidence that \(\nu_{x}\) hampers the occurrence of CFI in the optically thick region.
### Resonance-like CFI
One of the unique properties of CFI is a resonance-like feature, in which the growth rate can be remarkably higher than the typical non-resonance value. If the resonance-like CFI occurs, complete flavor swaps between \(\nu_{e}\) and \(\nu_{x}\) may take place [60], which potentially leads to a radical change in the neutrino radiation fields.
We do not find the resonance-like CFI in our CCSN models, however. We reached this conclusion by the following analysis. Before going into details, we briefly summarize the property of resonance-like CFI from Eqs. 12 and 13. The growth rate of CFI is comparable to the collision rate when \(A^{2}\gg G|\alpha|\). In the region where the neutrino self-interaction potential is larger than the collision rate, the condition \(A^{2}\gg G|\alpha|\) is usually satisfied. However, if the number densities of \(\nu_{e}\) and \(\bar{\nu}_{e}\) approach each other, \(A\) becomes lower and \(A^{2}\ll G|\alpha|\) may be realized. Then the growth rate is proportional to \(\sqrt{G\alpha}\). Since \(G\) is much larger than \(\alpha\), the growth rate of CFI can be significantly larger, which accounts for the resonance-like feature.

Figure 4: Radial profiles of \(G/A\) (red), \(A/\alpha\) (blue), and \(A^{2}/(G|\alpha|)\) (gold) at times \(t=100,\ 200,\ 400\,\mathrm{ms}\) after bounce, while the line type denotes the time. Each panel distinguishes the progenitor model.
In Fig. 4, we show the radial profiles of \(A^{2}/G|\alpha|\) and its associated quantities for all progenitor models. We find that \(A^{2}/G|\alpha|\) is much greater than unity in the entire post-bounce region, suggesting that the resonance condition is hardly achieved. It is interesting to note that \(G/A\) can be larger than unity in the optically thick region, which facilitates the occurrence of the resonance-like CFI. As shown in Fig. 4, however, \(A/|\alpha|\) is significantly higher than \(G/A\), leading to \(A^{2}/G|\alpha|\gg 1\). This analysis suggests that \(A\) needs to be at least \(\sim\alpha\) for the resonance-like CFI. On the other hand, the self-interaction potential is usually several orders of magnitude higher than the collision rate, implying that \(A\) needs to be essentially zero for the resonance-like CFI, which is not realized in our CCSN models.
To strengthen our discussion, we also make a plot of \(G|\alpha|/(|A|\gamma)\) for the \(40M_{\odot}\) model in Fig. 5. Note that the isotropy-preserving branch gives the maximum growth rate for the CFI out of resonance. The ratio \(G|\alpha|/(|A|\gamma)\) is associated with the growth rate of non-resonance CFI (see Eq. 12) and needs to be greater than unity for the occurrence of CFI. As clearly shown in this figure, the region of \(G|\alpha|/(|A|\gamma)>1\) matches exactly that of CFI, indicating that CFI is not resonance-like.
An important remark must be made here. The non-detection of the resonance-like CFI in our models may be an artifact of spherical symmetry. As shown in multi-dimensional CCSN models, PNS convection accelerates deleptonization of the inner core [53], which reduces the degeneracy of \(\nu_{e}\) and leads to \(n_{\nu_{e}}\sim n_{\bar{\nu}_{e}}\), i.e., \(A\sim 0\). On the other hand, \(\nu_{x}\) seems to be populated also in this region, implying that \(G\) also becomes lower (see Sec. IV.1) and the growth rate of the resonance-like CFI, if any, will be suppressed. These considerations indicate the need for detailed and quantitative analyses of CFI based on realistic multi-dimensional CCSN models. We defer this intriguing study to a future work.
### Correlation between matter properties and CFI
Our stability analysis suggests that CFI commonly occurs in the post-shock region of CCSNe. The CFI region straddles the transition layer between the optically thick and the semi-transparent regions, where neutrinos and matter interact with each other frequently. One then expects that the CFI region has correlations with some local matter properties. The investigation of such correlations is highly beneficial for assessing CFI in phenomenological CCSN models, in which the neutrino radiation field may not be well modeled. The result can also be used to narrow down the search regions before performing a detailed stability analysis, which will reduce the computational cost of the search, in particular for multi-dimensional CCSN models.
As mentioned already, the matter density seems to be a good indicator: CFI tends to occur at \(10^{10}\,\mathrm{g\,cm^{-3}}\lesssim\rho\lesssim 10^{12}\,\mathrm{g\,cm^{-3}}\). We also explore possible correlations of CFI with two other thermodynamic quantities: the electron fraction (\(Y_{e}\)) and the electron neutrino degeneracy (\(\eta_{\nu_{e}}=\mu_{\nu_{e}}/T\)), where \(\mu_{\nu_{e}}\) denotes the chemical potential of \(\nu_{e}\), defined as \(\mu_{p}-\mu_{n}+\mu_{e}\) (\(p\), \(n\), and \(e\) denote free protons, neutrons, and electrons, respectively). We selected these quantities because they characterize \(\nu_{e}\) and \(\bar{\nu}_{e}\) in the optically thick region. Previous works showed positive correlations between these two quantities and FFC (see, e.g., [21; 26]).
In Fig. 6, we show the \(Y_{e}\) and \(\eta_{\nu_{e}}\) distributions for the \(40M_{\odot}\) model. Although positive correlations can be seen, it seems hard to assess the occurrence of CFI with these quantities alone. For instance, although we find CFI in regions of \(Y_{e}\lesssim 0.4\), stable regions also exist there. A similar trend is found in \(\eta_{\nu_{e}}\): CFI can occur in regions of \(|\eta_{\nu_{e}}|\lesssim 0.5\), but not always.
Much of the complexity comes from \(\nu_{x}\). As is well known, the \(\nu_{x}\) distributions do not affect the occurrence of FFC (as long as \(\nu_{x}=\bar{\nu}_{x}\)), but they affect the occurrence of CFI (see Sec. IV.1). We also note that \(\nu_{x}\) does not have charged-current reactions with matter, and their energy sphere is located at smaller radii than those of \(\nu_{e}\) and \(\bar{\nu}_{e}\). This indicates that the \(\nu_{x}\) number density cannot be determined by the local equilibrium condition. This consideration suggests that the local matter properties may not provide a sufficient condition. Nevertheless, our result suggests that the conditions \(10^{10}\,\mathrm{g\,cm^{-3}}\lesssim\rho\lesssim 10^{12}\,\mathrm{g\,cm^{-3}}\), \(Y_{e}\lesssim 0.4\), and \(|\eta_{\nu_{e}}|\lesssim 0.5\) need to be satisfied for the occurrence of CFI in the optically thick and semi-transparent regions, which is useful to locate possible CFI regions. As an important remark, one needs to keep in mind that this condition could be altered in multi-dimensional cases, though.

Figure 5: Same as Fig. 2 but for \((G|\alpha|)/(|A|\gamma)\). We display the result of the \(40M_{\odot}\) model as a representative case.
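For readers who want to apply these necessary conditions to their own simulation data, a hedged sketch is given below; the function name and array interface are our own illustration, not part of any published pipeline.

```python
import numpy as np

def possible_cfi_region(rho, ye, eta_nue):
    """Flag zones satisfying the necessary (but not sufficient) CFI
    conditions found in this work: 1e10 < rho < 1e12 g/cm^3,
    Y_e < 0.4, and |eta_nue| < 0.5."""
    rho, ye, eta_nue = map(np.asarray, (rho, ye, eta_nue))
    return (rho > 1e10) & (rho < 1e12) & (ye < 0.4) & (np.abs(eta_nue) < 0.5)
```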
### Comparison with previous studies
As shown above, we find that CFI occurs rather commonly, regardless of progenitor, at \(10^{10}\,\mathrm{g\,cm^{-3}}\lesssim\rho\lesssim 10^{12}\,\mathrm{g\,cm^{-3}}\) and forms a durable layer. This location corresponds to the transition layer for \(\nu_{e}\) and \(\bar{\nu}_{e}\) from the optically thick to the semi-transparent regions. This result supports the claim in [57] that CFI can occur in the post-shock regions. The obtained growth rate is also qualitatively consistent with theirs.
On the other hand, resonance-like CFIs are not detected in our CCSN models (although this may be an artifact of spherical symmetry), implying that the growth rate of CFI is of the same order of magnitude as the collision rate. This also means that CFI would be overwhelmed by FFC, if NFLN crossings appear, which is in line with suggestions by [52; 61]. We note in addition that the growth rate of the slow mode could be higher than that of CFI, as suggested by [61]. However, the slow modes seem to be hampered by high mass densities in the post-shock region during the accretion phase [80; 81; 82; 83]. Some (unknown) mechanisms are then required to trigger the slow instability.
## V Conclusion
In this paper, we carry out a systematic study of the occurrence of CFI based on spherically symmetric CCSN models developed with modern input physics and full Boltzmann neutrino transport. We use the approximate analytic formulae obtained in our previous study [59] to quantify the growth rate of CFI. They are reasonably accurate and also much more computationally efficient than the numerical solution of the dispersion relation. We find that CFI occurs universally in the post-shock region regardless of progenitors.
The CFI region is roughly bounded by the isodensity lines of \(10^{12}\) and \(10^{10}\,\mathrm{g\,cm^{-3}}\). It continues to exist rather stably from \(\gtrsim 20\) ms after bounce. We also find that \(\nu_{x}\) plays an important role in hampering CFI in the optically thick region, which is a feature of CFI that distinguishes it from FFC. This implies the importance of accurate modeling of the \(\nu_{x}\) distribution, and \(\nu_{x}\) should be taken into account appropriately in CFI analyses.
It is worth noting that our results do not show any signs of resonance-like CFI. As a consequence, the growth rate of CFI is of the same order as the collision rate. However, this result seems to be an artifact of the spherical symmetry we imposed and may be qualitatively changed in multi-dimensional models. This is because convection in PNS layers enhances deleptonization there and will facilitate the occurrence of resonance-like CFI as well as FFC. On the other hand, it has also been suggested that the PNS convection will amplify the \(\nu_{x}\) diffusion and may suppress CFI. These competing effects on CFI inherent in multi-dimensional models should be investigated in detail with sophisticated CCSN models, which is the top priority in our future works.
The present study also reveals the difficulty of assessing CFI with matter properties alone. We find that \(\nu_{x}\) is mainly responsible for the complexity. This is attributed to the fact that the \(\nu_{x}\) diffusion is greater than that of \(\nu_{e}\) and \(\bar{\nu}_{e}\) due to the lack of charged-current reactions. However, we find that CFI tends to occur in regions of \(10^{10}\,\mathrm{g\,cm^{-3}}\lesssim\rho\lesssim 10^{12}\,\mathrm{g\,cm^{-3}}\), \(Y_{e}\lesssim 0.4\), and \(\eta_{\nu_{e}}\lesssim 0.5\), suggesting that these conditions are useful to narrow down the possible location of CFI.
Our systematic study of CFI provides independent and robust information, useful to settle the dispute in the previous works on CFI in CCSN environments (see [61, 57]). Our study supports the results of [57] that CFI occurs in a wide spatial range across the optically thick and semi-transparent regions. On the other hand, the absence of resonance-like CFI suggests that CFI would be overwhelmed by FFC, or even by the slow instability (although the latter seems unlikely to develop), if they occur [61]. This is also consistent with our previous study [52], in which the flavor conversions in the non-linear phase are mainly characterized by FFCs. It should be stressed again, however, that these results may be altered in multi-dimensional models, where FFC will likely take place. Hence, we should postpone the final conclusion on the impacts of CFI on CCSN dynamics until we can carry out detailed studies of CFI and other flavor conversions based on multi-dimensional models.
###### Acknowledgements.
This research used the K and Fugaku supercomputers provided by RIKEN, the FX10 provided by Tokyo University, the FX100 provided by Nagoya University, the Grand Chariot provided by Hokkaido University, and Oakforest-PACS provided by JCAHPC through the HPCI System Research Project (Project ID: hp130025, 140211, 150225, 150262, 160071, 160211, 170031, 170230, 170304, 180111, 180179, 180239, 190100, 190160, 200102, 200124, 210050, 210051, 210164, 220047, 220213, 220047, 220223 and 230033), and the Cray XC50 at Center for Computational Astrophysics, National Astronomical Observatory of Japan (NAOJ). This work is supported by Grant-in-Aid for Scientific Research (19K03837, 20H01905,21H01083) and Grant-in-Aid for Scientific Research on Innovative areas "Gravitational wave physics and astronomy:Genesis" (17H06357, 17H06365) and "Unraveling the History of the Universe and Matter Evolution with Underground Physics" (19H05802 and 19H05811) from the Ministry of Education, Culture, Sports, Science and Technology (MEXT), Japan. For providing high performance computing resources, Computing Research Center, KEK, JLDG on SINET of NII, Research Center for Nuclear Physics, Osaka University, Yukawa Institute of Theoretical Physics, Kyoto University, Nagoya University, and Information Technology Center, University of Tokyo are acknowledged. This work was supported by MEXT as "Program for Promoting Researches on the Supercomputer Fugaku" (Toward a unified view of the universe: from large scale structures to planets, JPMXP1020200109) and the Particle, Nuclear and Astro Physics Simulation Program (Nos. 2020-004, 2021-004, 2022-003) of Institute of Particle and Nuclear Studies, High Energy Accelerator Research Organization (KEK). RA is supported by JSPS Grant-in-Aid for JSPS Fellows (Grant No. 22J10298) from MEXT. HN is supported by Grant-in Aid for Scientific Research (23K03468). MZ is supported by JSPS Grant-in-Aid for JSPS Fellows (No. 22KJ2906) from MEXT. SY is supported by Institute for Advanced Theoretical and Experimental Physics, Waseda University, and the Waseda University Grant for Special Research Projects (project No. 023C-141).
|
2308.10171 | A Method to Measure Photometries of Moderately-Saturated UVOT Sources | For bright transients such as Gamma-Ray Bursts (GRBs), the
Ultra-Violet/Optical Telescope (UVOT) operates under event mode at early
phases, which records incident positions and arrival time for each photon. The
event file is able to be screened into many exposures to study the early light
curve of GRBs with a high time resolution, including in particular the rapid
brightening of the UV/Optical emission. Such a goal, however, is hampered for
some extremely bright GRBs by the saturation in UVOT event images. For
moderately saturated UVOT sources, in this work we develop the method proposed
in Jin et al. (2023) to recover their photometries. The basic idea is to assume
a stable point spread function (PSF) of UVOT images, for which the counts in
the core region (i.e., an aperture of a radius of 5 arcsec) and the wing region
(i.e., an annulus ranging from 15 arcsec to 25 arcsec) should be a constant and
the intrinsic flux can be reliably inferred with data in the ring. We
demonstrate that in a given band, a tight correlation does hold among the
background-removed count rates in the core and the wing. With the new method,
the bright limit of measuring range for UVOT V and B bands increases ~ 1.7 mag,
while only ~ 0.7 mag for U band due to the lack of bright calibration sources.
Systematic uncertainties are ~ 0.2 mag for V, B and U bands. | Hao Zhou, Zhi-Ping Jin, Stefano Covino, Yi-Zhong Fan, Da-Ming Wei | 2023-08-20T05:48:07Z | http://arxiv.org/abs/2308.10171v1 | # A Method to Measure Photometries of Moderately-Saturated UVOT Sources
###### Abstract
For bright transients such as Gamma-Ray Bursts (GRBs), the Ultra-Violet/Optical Telescope (UVOT) operates under event mode at early phases, which records incident positions and arrival time for each photon. The event file is able to be screened into many exposures to study the early light curve of GRBs with a high time resolution, including in particular the rapid brightening of the UV/Optical emission. Such a goal, however, is hampered for some extremely bright GRBs by the saturation in UVOT event images. For moderately saturated UVOT sources, in this work we develop the method proposed in Jin et al. (2023) to recover their photometries. The basic idea is to assume a stable point spread function (PSF) of UVOT images, for which the counts in the core region (i.e., an aperture of a radius of 5 arcsec) and the wing region (i.e., an annulus ranging from 15 arcsec to 25 arcsec) should be a constant and the intrinsic flux can be reliably inferred with data in the ring. We demonstrate that in a given band, a tight correlation does hold among the background-removed count rates in the core and the wing. With the new method, the bright limit of measuring range for UVOT V and B bands increases \(\sim 1.7\) mag, while only \(\sim 0.7\) mag for U band due to the lack of bright calibration sources. Systematic uncertainties are \(\sim 0.2\) mag for V, B and U bands.
Astronomical techniques (1684) -- Flux calibration (544) -- Photometry (1234)
## 1 Introduction
The Ultra-Violet/Optical Telescope (UVOT) onboard the _Swift_ satellite is designed to capture the early light of afterglows of Gamma-Ray Bursts (GRBs) or other rapid transients (Roming et al., 2005). Once the satellite has settled, UVOT takes a 150 s exposure under event mode to make a finding chart for possible transients; the typical start time is \(\sim\)100 s after the Burst Alert Telescope (BAT) trigger time.
Since the finding chart is taken under event mode, one can obtain the arrival time and incident position on the detector for each photon and screen1 the event file with user-specified time bins to derive the early light curve of the afterglow, e.g. GRB 080319B (Racusin et al., 2008; Page et al., 2013), GRB 081203A (Kuin et al., 2009), GRB 130427A (Maselli et al., 2014) and GRB 220101A (Kuin et al., 2022; Jin et al., 2023). However, some afterglows are so bright that they saturate in UVOT's images, e.g. GRB 080319B and GRB 130427A. Efforts have been made to measure photometries of highly saturated sources with readout streaks, and such a method has been applied to GRB 080319B and GRB 130427A to recover their early photometries (Page et al., 2013; Maselli et al., 2014). However, the readout streak method introduces very large uncertainties if the source is only moderately or barely saturated.
Footnote 1: Split the event file into sub-exposures with user-defined time bins or other criteria. The raw event file is described as an "unscreened evt file" in the archive, and the corresponding command in UVOT FTOOLS is "screen"; this article therefore uses "screen" instead of "split" or similar words.
In Jin et al. (2023), we proposed to use the photons collected at the tail of the Point Spread Function (PSF) to infer the intrinsic emission of moderately saturated UVOT sources (see e.g. Su et al., 2022, for a similar approach in the infrared band). The fundamental assumption is that the PSF remains stable, and there exists a constant ratio
between photons received in the core region (a circular region with a radius of 5 arcsec for UVOT) and the wing region (an annulus ranging from 15 arcsec to 25 arcsec). Please refer to Figure 1 for an illustration depicting the definitions of the core and wing regions. The method used in this study is referred to as the PSF method to distinguish it from the readout streak method discussed in existing literature. Jin et al. (2023) have just briefly presented how to correct observations in White and V bands. In this work we extensively examine the corrections in UVOT V, B and U bands. For the even bluer and White bands, the corrections are challenging because of the lack of calibration sources/observations.
In Section 2, the principle of the UVOT detection method is briefly introduced and the coincidence loss correction is described in detail. The principle and calibration of the PSF method are discussed in Section 3, where the photometric zeropoints of V and B bands are calibrated with Tycho-2 and Gaia Synthetic Photometry Catalogue (GSPC) sources and the U-band zeropoint is calibrated with GSPC sources. In Section 4, simple demonstrations of the PSF method are given. The procedure and the valid range of the PSF method are summarized in Section 5. In this article, photometry results are reported in the AB system and the pixel scale for images is 0.502 arcsec/pixel, unless stated otherwise.
## 2 Brief Introduction to the UVOT
### Detection principle and saturation count rate
The final physical imaging device of UVOT is a small CCD of 385\(\times\)288 physical pixels (256\(\times\)256 for scientific usage, corresponding to 17\(\times\)17 arcmin\({}^{2}\)) with a high readout speed. Under event mode, every \(\sim\)11 ms the CCD reads out an image to analyze the positions of incident photons. With optoelectronic devices, incident photons are converted into electron clouds, which illuminate a phosphor screen to create many new photons. Photons created by the phosphor screen are collected by the CCD via fibers connected to the anode of the phosphor screen, and the Full Width at Half Maximum (FWHM) of the photon distribution collected by the CCD is \(\sim\)1 physical pixel (Fordham et al., 2000). An algorithm restores the positions of incident photons from the images obtained by the CCD, and the centroiding accuracy can reach 1/8 physical pixel (Roming et al., 2005); hence the size of the final image is 2048\(\times\)2048 pixels. If the positions of 2 or more incident photons are too close in one frame (e.g., the distances between the peaks of the photon clusters collected by the CCD are less than \(\sim 1\) physical pixel), the centroiding algorithm only restores one photon, which induces the so-called COIncidence loss (COI, Fordham et al., 2000). For UVOT, most photons from point sources are concentrated within a circular region with a radius of 5 arcsec (corresponding to 1.25 physical pixels), and this region is defined as the optimal photometric aperture as well as the COI region for point sources, where the count rate is used to calculate the COI factor for point sources (Poole et al., 2008).
Saturation occurs when at least 1 incident photon falls within the point-source COI region per frame, since the centroiding algorithm can only record 1 incident photon there. In principle, the typical COI-dominated region is a circular region with a radius of \(\sim\)2.25 physical pixels. In practice, however, moderately saturated sources exhibit a square pattern on the source with a typical side length of \(\sim 20\) arcsec (corresponding to 5\(\times\)5 adjacent physical pixels), where the count rate in the region between the point-source COI region and the square is nearly 0 count/s; this square region represents the real COI-dominated region of moderately saturated sources. Nevertheless, the region dominated by the COI expands as the degree of saturation increases. Figure 1 shows saturation patterns for moderately and strongly saturated sources.
When the raw total count rate in the COI region is \(>\)10 count/s for point sources, the coincidence loss is not negligible. The saturation limit of UVOT is 1 count per frame (\(\sim\)90 count/s), since the coincidence loss makes any count rate greater than 1 count per frame appear as 1 count per frame in the COI-dominated region.
### Coincidence loss correction
In order to correct for coincidence loss, it is necessary to designate an appropriate region for calculating the COI factor (COI region), because coincidence loss is an area effect and the shape of the COI region varies depending on source types. For point sources, regardless of the radius of the photometric aperture (e.g., 5 or 3 arcsec), a circular region with a radius of 5 arcsec, centered at the same position as the photometric aperture, is considered an appropriate COI region (Poole et al., 2008). Equations in Poole et al. (2008) are applied to compute the COI factor for point sources:
\[\begin{split}\dot{N}^{\rm COI}_{\rm theory}(\dot{N}^{\rm COI}_{\rm raw})&=\frac{-\ln(1-\alpha\dot{N}^{\rm COI}_{\rm raw}f_{t})}{\alpha f_{t}}\\ f(\dot{N}^{\rm COI}_{\rm raw})&=1+a_{1}\dot{N}^{\rm COI}_{\rm raw}f_{t}+a_{2}(\dot{N}^{\rm COI}_{\rm raw}f_{t})^{2}+a_{3}(\dot{N}^{\rm COI}_{\rm raw}f_{t})^{3}+a_{4}(\dot{N}^{\rm COI}_{\rm raw}f_{t})^{4}\\ {\rm COI}(\dot{N}^{\rm COI}_{\rm raw})&=f(\dot{N}^{\rm COI}_{\rm raw})\ \dot{N}^{\rm COI}_{\rm theory}(\dot{N}^{\rm COI}_{\rm raw})\ /\ \dot{N}^{\rm COI}_{\rm raw}\end{split} \tag{1}\]
where \(\dot{N}^{\rm COI}_{\rm raw}\) is the raw/observed count rate in the point-source COI region, \(\alpha\) is the dead-time correction factor 0.9842 (i.e. the exposure time is 98.42% of the full frame time), and \(f_{t}\) is the full frame time, 0.0110329 s. \(\dot{N}^{\rm COI}_{\rm theory}\) is the theoretical value of the COI corrected count rate in the point-source COI region. The function \(f(x)\) is an empirical polynomial correction for the differences between theoretical and observed values. The parameters \(a_{1}\) to \(a_{4}\) are 0.0669, -0.091, 0.029 and 0.031, respectively. If the optimal photometric aperture of UVOT is used to measure photometries, \(\dot{N}^{\rm COI}_{\rm raw}\) equals the raw total count rate in the aperture \(\dot{N}^{\rm tot}_{\rm raw}\), since the point-source COI region is the same as the optimal photometric aperture. For extended sources, Kuin et al. (2015) pointed out that rectangles with a typical area of \(\sim 400\) pixels are appropriate regions to calculate the COI factor with Equation (1) for slitless spectra with different grism configurations obtained by UVOT.
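As a concrete reference, Equation (1) can be evaluated as in the following sketch (our own illustration; the constants are taken from the text):

```python
import numpy as np

ALPHA = 0.9842          # dead-time correction factor
FRAME_TIME = 0.0110329  # full frame time (s)
A_COEFFS = (0.0669, -0.091, 0.029, 0.031)  # a1..a4 of the polynomial f

def coi_factor(raw_rate):
    """Point-source COI factor of Eq. (1), given the raw count rate
    (count/s) measured in the 5-arcsec COI region."""
    x = raw_rate * FRAME_TIME  # raw counts per frame
    n_theory = -np.log(1.0 - ALPHA * x) / (ALPHA * FRAME_TIME)
    f = 1.0 + sum(a * x ** (i + 1) for i, a in enumerate(A_COEFFS))
    return f * n_theory / raw_rate
```

For instance, evaluating the formula at a raw rate of 50 count/s gives a COI factor of roughly 1.5, i.e. the incident rate is about 50% higher than observed.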
However, neither a circular nor a rectangular region is suitable for calculating the COI factor for the wing. Following the method used by the UVOT FTOOLS (Margutti et al., 2014) command _uvotsource_ to calculate COI factors for backgrounds, which multiplies the raw background count rate density by the area of the point-source COI region as the input value of Equation (1), we want to find a region that has the same or a similar area as the point-source COI region, whose raw count rate will be used for calculating the COI factor for the wing. Therefore, the entire annulus is divided into 16 sector annuli, with each sector annulus having an opening angle of \(22.5^{\circ}\) and an area of \((25^{2}-15^{2})\pi/16=25\pi\) arcsec\({}^{2}\). Figure 2 shows one such segmentation. Nevertheless, the boundaries of these sector annuli can obviously be rotated around the center of the entire annulus under the assumption that counts are distributed isotropically in the wing. Hence, it is recommended to use the mean value of the raw total count rates of these sector annuli (\(\dot{N}^{\rm COI,\,tot}_{\rm raw,\,wing}\)) as the input value of Equation (1) to calculate the COI factor for the wing:
\[\dot{N}^{\rm COI,\,tot}_{\rm raw,\,wing}=\frac{25\pi[\rm arcsec^{2}]}{\rm area \,\,of\,\,unmasked\,\,wing}\times\dot{N}^{\rm tot,\,UM}_{\rm raw,\,wing} \tag{2}\]
where \(\dot{N}^{\rm tot,\,UM}_{\rm raw,\,wing}\) is the raw total count rate of the unmasked wing region. Note the mask shape for the wing: Equation (2) only works correctly when the masks are sector annuli that overlap with sources in the wing, instead of the shapes of the sources themselves (please refer to the last panel of Figure 7 for an example), because directly scaling the measured count rate with the area ratio is not correct for nonuniform sources.
However, Equations (1) and (2) are only approximations of the true COI correction. Photons in the wing (especially its inner edge) will be confused with photons from the region enclosed by the wing (hereafter, central photons) when the wing is bright enough, and the COI correction employed in this article cannot handle this case. In other words, when the region heavily influenced by the COI effect from central photons expands into the wing, the PSF method can no longer be applied to restore photometries.
### Additional correction for extended sources
The method employed by UVOT FTOOLS command _uvotsource_ can not completely correct the coincidence loss for backgrounds (Breveld et al., 2010). In other words, to obtain the true incident count rate for uniform extended sources, the raw count rate needs to be multiplied with an additional correction factor, which is named as the Extended (EXT) correction factor. The EXT correction essentially serves as an additional correction for the coincidence loss, due to the inadequate COI factor for extended sources. Therefore, the equivalent raw count rate, scaled with the area ratio of the point source COI region to the measuring region for uniform extended sources (\(\dot{N}^{\rm COI}_{\rm raw,\,ext}\)), is also used to calculate the EXT factor, which is fitted phenomenologically using data points from Figure 6 of Breveld et al. (2010) and a smoothed broken power-law model:
\[\rm EXT=(1+(\dot{N}^{\rm COI}_{\rm raw,\,ext}/\dot{N}^{\rm COI}_{0,\,ext})^{ a})^{b} \tag{3}\]
Best fitted parameters are \(\dot{N}^{\rm COI}_{0,\,\rm ext}=160.115922\,\)count/s, \(a=1.518061\) and \(b=2.446816\). Figure 3 shows the difference between true incident and COI corrected count rate, and the fitting result for the EXT factor.
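A one-line sketch of Eq. (3) with the best-fit parameters quoted above (again, illustrative code of ours rather than any official tool):

```python
def ext_factor(raw_coi_rate, n0=160.115922, a=1.518061, b=2.446816):
    """Smoothed broken power-law EXT factor of Eq. (3); the input is the
    equivalent raw count rate (count/s) scaled to the area of the
    point-source COI region."""
    return (1.0 + (raw_coi_rate / n0) ** a) ** b
```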
For typical background values of UVOT (\(0-0.05\,\)count/s/pixel, Breveld et al., 2010), the background is brightened by \(\sim 7\%\) at most. Its effect on the photometries of point sources is so small that it can be safely neglected, because the EXT factor is only multiplied with the raw background count rate, which is usually much smaller than the count rate of the point source. For extended sources, however, the EXT correction becomes important, since both the wing and the background need to be corrected. The non-uniformity is worth taking into account, but in practice it has only a small influence on the PSF method and can be safely neglected. Please refer to Appendix A for the detailed discussion.
### Large scale structure and sensitivity correction
Large Scale Structure (LSS) and SENsitivity (SEN) corrections should be applied to the COI corrected count rates to get the final corrected count rate that can be used for the photometry. The LSS correction depends on the source position on the detector and on the bandpass, while the sensitivity correction (SEN) only depends on the time when the observation was executed. LSS and SEN can be obtained with UVOT FTOOLS, e.g. from the photometry table generated by the _uvotsource_ command. Hence, the corrected source count rate in the wing (\(\dot{N}_{\rm wing}\)) can be derived with:
\[\begin{split}\dot{N}_{\rm CE,\,wing}^{\rm tot}&= \frac{\rm area\,\,of\,\,\rm entire\,\,wing}{\rm area\,\,of\,\,\rm unmasked\,\, wing}\times\dot{N}_{\rm raw,\,\rm wing}^{\rm tot,\,\rm UM}\,\rm COI_{\rm wing}^{\rm tot}\, \rm EXT_{\rm wing}^{\rm tot}\\ \dot{N}_{\rm CE,\,wing}^{\rm bkg}&=400\pi\,[\rm arcsec ^{2}]\times CRD_{\rm raw}^{\rm bkg}\,\rm COI_{\rm wing}^{\rm bkg}\,\rm EXT_{\rm wing }^{\rm bkg}\\ \dot{N}_{\rm wing}&=(\dot{N}_{\rm CE,\,wing}^{\rm tot }\,-\,\dot{N}_{\rm CE,\,wing}^{\rm bkg})\times\rm LSS\times SEN\end{split} \tag{4}\]
where \(\dot{N}_{\rm CE,\,wing}^{\rm tot}\) and \(\dot{N}_{\rm CE,\,wing}^{\rm bkg}\) are the COI and EXT corrected total and background count rates in the wing, and \(CRD^{\rm bkg}_{\rm raw}\) is the raw background count rate density (in count/s/arcsec\({}^{2}\)). At this stage, it can be concluded that the inadequate correction of the COI (when the COI effect from central photons is important) and the intrinsic non-uniformity of the wing contribute to the theoretical systematic uncertainty of the PSF method. However, it is hard to quantify their exact values, hence the systematic uncertainty of the PSF method is estimated from the fluctuation of the calibration residuals, i.e., a comprehensive systematic uncertainty involving both theoretical (measuring principle) and practical (calibration data set) contributions. Please refer to Section 3.4 and Table 1.
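Putting Equations (2)-(4) together, the corrected wing count rate could be computed along the lines of the sketch below, reusing the `coi_factor` and `ext_factor` helpers sketched earlier; the function signature, and the assumption that the background COI/EXT input follows the _uvotsource_ convention described in Section 2.2, are ours.

```python
import numpy as np

WING_AREA = (25.0**2 - 15.0**2) * np.pi  # 400*pi arcsec^2 (entire wing)
SECTOR_AREA = WING_AREA / 16.0           # 25*pi arcsec^2, same area as the
                                         # point-source COI region

def corrected_wing_rate(raw_unmasked_rate, unmasked_area,
                        bkg_rate_density, lss, sen):
    """Background-subtracted, fully corrected wing count rate (Eq. 4).

    raw_unmasked_rate : raw total count rate in the unmasked wing (count/s)
    unmasked_area     : area of the unmasked wing (arcsec^2)
    bkg_rate_density  : raw background count rate density (count/s/arcsec^2)
    lss, sen          : large-scale-structure and sensitivity factors
    """
    # Eq. (2): mean sector-annulus rate as the COI input for the wing;
    # for the background, the COI input is CRD times the COI-region area.
    coi_in_tot = SECTOR_AREA / unmasked_area * raw_unmasked_rate
    coi_in_bkg = SECTOR_AREA * bkg_rate_density

    tot = (WING_AREA / unmasked_area * raw_unmasked_rate
           * coi_factor(coi_in_tot) * ext_factor(coi_in_tot))
    bkg = (WING_AREA * bkg_rate_density
           * coi_factor(coi_in_bkg) * ext_factor(coi_in_bkg))
    return (tot - bkg) * lss * sen
```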
## 3 Saturation correction method
### The principle of the calibration
If the profile of the PSF is known, the point-source count \(C_{\rm src}\) can be derived by fitting the Growth Curve (GC) of the PSF: \(C_{\rm src}GC(r)=C_{\rm src}\int_{0}^{2\pi}\int_{0}^{r}PSF(r^{\prime},\theta)\times r^{\prime}d\theta dr^{\prime}\), where \(PSF(r,\theta)\) is the normalized profile of the PSF (i.e., \(\int_{0}^{2\pi}\int_{0}^{\infty}PSF(r,\theta)\times rd\theta dr=1\)) and \(C_{\rm src}\) is the source count. Usually, integrating to a standard or optimal radius instead of infinity is more practical, since many sources within a large integration area would interfere with the integration. In addition, if the PSF is stable, the ratio of the count rate in the standard aperture to the true count rate remains the same, i.e. \(\int_{\rm std}PSF(r,\theta)\times rd\theta dr/\int_{\rm inf}PSF(r,\theta)\times rd\theta dr=constant\). As a result, the flux calibration can be based on the count rate in the standard aperture. For UVOT, the radius of the standard photometric aperture suggested in Poole et al. (2008) is \(5\,\rm arcsec\). For saturated stars, the PSF profile is destroyed near \(r=0\), but the wing of the PSF becomes bright enough to allow a relatively accurate measurement, and the count rate in the wing can be converted to the count rate in the standard region by simply multiplying by a factor \(k=\int_{\rm std}PSF(r,\theta)\times rd\theta dr/\int_{\rm wing}PSF(r,\theta)\times rd\theta dr\).
Figure.5 in Poole et al. (2008) shows the PSF profile of UVOT, assuming it is isotropic. The wing of the PSF extends to a radius of \(\sim 50\) pixels, and as shown in Figure 1, the COI strongly influences the area within \(\sim 15\,\rm arcsec\) radius. Hence, the wing of UVOT's PSF is defined as an annulus with an inner radius of 30 pixels (\(15\,\rm arcsec\)) and an outer radius of 50 pixels (\(25\,\rm arcsec\)). If the PSF of UVOT is stable, \(\dot{N}_{\rm std}\) should be proportional to \(\dot{N}_{\rm wing}\), i.e. \(\dot{N}_{\rm std}=k\dot{N}_{\rm wing}\), where \(\dot{N}_{\rm wing}\) and \(\dot{N}_{\rm std}\) represent the count rate in wing and the count rate in the standard photometric region, respectively. The \(\dot{N}_{\rm std}\) can be converted to the AB magnitude with the equation \(M^{\rm AB}=-2.5\rm lg(\dot{N}_{\rm std}[\rm count/s])+ZP_{\rm std}^{\rm AB}\), where the \(ZP_{\rm std}^{\rm AB}\) is the photometric zeropoint in AB system if a standard photometric aperture is used. For UVOT, \(ZP_{\rm U,std}^{\rm AB}=19.36\pm 0.02\), \(ZP_{\rm B,std}^{\rm AB}=18.98\pm 0.02\), \(ZP_{\rm V,std}^{\rm AB}=17.88\pm 0.01\)(Poole et al., 2008; Breeveld et al., 2011). Hence the AB magnitude of a source can be calculated with \(\dot{N}_{\rm wing}\) by equation:
\[\begin{split} M^{\rm AB}&=-2.5\rm lg(\dot{N}_{\rm wing }[\rm count/s])+ZP_{\rm std}^{\rm AB}\\ &=-2.5\rm lg(\dot{N}_{\rm wing}[\rm count/s])+ZP_{\rm wing}^{\rm AB }\end{split} \tag{5}\]
where \(ZP_{\rm wing}^{\rm AB}=ZP_{\rm std}^{\rm AB}-2.5\rm lg(\dot{k})\). \(ZP_{\rm wing}^{\rm AB}\) is the only free parameter of Equation (5) needed to be calibrated.
### AB magnitudes of saturated UVOT sources converted from other catalogues
The crucial problem is how to obtain reliable \(M^{\rm AB}\) values for saturated UVOT sources. One solution is to use the color transformation between the UVOT filter system and other filter systems to get converted \(M^{\rm AB}\) for saturated UVOT sources.
#### 3.2.1 Gaia Synthetic Photometry Catalogue
The Gaia Synthetic Photometry Catalogue (GSPC) provides synthetic photometries for some widely used filter systems (e.g., the Johnson-Kron-Cousins system defined by Bessell and Murphy, 2012), which are generated from spectra with high signal-to-noise ratio (\(>30\)) in Gaia Data Release 3, and contains \(\sim 220\) million sources (Gaia Collaboration et al., 2022). As shown in Figure 4, the effective transmission curves of the UVOT VBU bands (VBU\({}_{\rm UVOT}\)) are very similar to those of the Johnson-Kron-Cousins VBU bands (JKC in the following, VBU\({}_{\rm JKC}\)), hence the synthetic photometries of VBU\({}_{\rm JKC}\) are selected to be converted to VBU\({}_{\rm UVOT}\). The effective transmission curves of the JKC system are downloaded from the ftp of CDS2, and the effective transmission curves of the UVOT system are taken from the _Swift_ CALDB website3. Since photometries of the Johnson-Kron-Cousins system are usually reported in the Vega system, the transformation between VBU\({}_{\rm JKC}\) and VBU\({}_{\rm UVOT}\) is derived in the Vega system. The Vega magnitude of a star is derived with the following equation:
Footnote 2: [https://cdsarc.cds.unistra.fr/viz-bin/cat/J/PASP/124/140](https://cdsarc.cds.unistra.fr/viz-bin/cat/J/PASP/124/140)
Footnote 3: [https://heasarc.gsfc.nasa.gov/docs/heasarc/caldb/swift/docs/uvot/uvot_caldb_filtertransmission_03.pdf](https://heasarc.gsfc.nasa.gov/docs/heasarc/caldb/swift/docs/uvot/uvot_caldb_filtertransmission_03.pdf)
\[M^{\rm Vega}=-2.5\,{\rm lg}\frac{\int T(\lambda)\lambda F_{\lambda}(\lambda)d\lambda}{\int T(\lambda)\lambda F_{\lambda}^{\rm Vega}(\lambda)d\lambda}+ZP^{\rm Vega}, \tag{6}\]
where \(\lambda\) is the wavelength, \(T(\lambda)\) is the effective transmission curve, and \(F_{\lambda}(\lambda)\) and \(F_{\lambda}^{\rm Vega}(\lambda)\) are the flux densities of the observed source and of Vega in units of flux per wavelength, respectively. \(M^{\rm Vega}\) indicates that this magnitude is expressed in the Vega system, and \(ZP^{\rm Vega}\) is the zeropoint (i.e., the magnitude of Vega in this system). For UVOT, \(ZP^{\rm Vega}_{\rm UVOT}=0\); however, in GSPC, \(ZP^{\rm Vega}_{\rm GSPC}=0.03\) (Bohlin, 2014; Gaia Collaboration et al., 2022), since the reference spectrum \(F_{\lambda}^{\rm Vega}(\lambda)\) is not completely the same as the most accurate observed spectrum. We measured the synthetic photometries of stars in the Pickles (1998) stellar library, except M type stars, because the scatter of M type stars is much larger than for other stars, which was also found by Page et al. (2013) and Esa (1997). Since there are 3 colors, i.e. U\({}_{\rm JKC}\) - B\({}_{\rm JKC}\), U\({}_{\rm JKC}\) - V\({}_{\rm JKC}\) and B\({}_{\rm JKC}\) - V\({}_{\rm JKC}\), that could be converted to VBU\({}_{\rm UVOT}\) - VBU\({}_{\rm JKC}\), every possible transformation was fitted with a piecewise linear function by the least squares method. However, only the fit with the smallest residual is selected as the final color transformation. For example, there are 3 possible transformations for U\({}_{\rm UVOT}\) - U\({}_{\rm JKC}\), i.e. U\({}_{\rm UVOT}\) - U\({}_{\rm JKC}\) versus U\({}_{\rm JKC}\) - B\({}_{\rm JKC}\), U\({}_{\rm JKC}\) - V\({}_{\rm JKC}\) or B\({}_{\rm JKC}\) - V\({}_{\rm JKC}\), with Root Mean Squares (RMSs) of 0.0316, 0.0327 and 0.03861, respectively. The RMS of U\({}_{\rm UVOT}\) - U\({}_{\rm JKC}\) versus U\({}_{\rm JKC}\) - B\({}_{\rm JKC}\) is the smallest, hence it is selected as the final color transformation for U band. The color transformations are summarized in the following equations and shown in Figure 5.
\[\begin{split}\rm U_{UVOT}-U_{\rm JKC}&=\begin{cases}0. 189(U_{\rm JKC}-B_{\rm JKC})-0.054,\ \rm U_{JKC}-B_{\rm JKC}\leq 0.079\\ 0.018(U_{\rm JKC}-B_{\rm JKC})-0.040,\ \rm U_{JKC}-B_{\rm JKC}>0.079\\ \end{cases}\\ \rm B_{UVOT}-B_{\rm JKC}&=\begin{cases}0.007(B_{\rm JKC}-V_{\rm JKC})-0.031, \rm B_{\rm JKC}-V_{\rm JKC}\leq 1.192\\ 0.085(B_{\rm JKC}-V_{\rm JKC})-0.123,\ \rm B_{\rm JKC}-V_{\rm JKC}>1.192\\ \end{cases}\\ \rm V_{UVOT}-V_{\rm JKC}&=\begin{cases}0.011(B_{\rm JKC}-V_{\rm JKC})-0.036, \rm B_{\rm JKC}-V_{\rm JKC}\leq 0.167\\ 0.038(B_{\rm JKC}-V_{\rm JKC})-0.041,\ B_{\rm JKC}-V_{\rm JKC}>0.167\\ \end{cases}\\ \end{split} \tag{7}\]
RMSs for V, B, and U bands are 0.0032, 0.0056, and 0.0316, respectively, which are treated as the systematic uncertainties of the color transformation. The color transformation is in the Vega system. By the definitions of AB and Vega magnitudes, the AB magnitude of a source equals its Vega magnitude plus the AB magnitude of Vega in the same band, i.e. \(M^{\rm AB}_{\rm Band}=M^{\rm Vega}_{\rm Band}+M^{\rm AB}_{\rm Vega,Band}\). According to Breeveld et al. (2011), \(M^{\rm AB}_{\rm Vega,V_{\rm UVOT}}=-0.01\), \(M^{\rm AB}_{\rm Vega,B_{\rm UVOT}}=-0.13\), \(M^{\rm AB}_{\rm Vega,U_{\rm UVOT}}=1.02\).
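As an example, the U-band branch of Eq. (7) plus the Vega-to-AB offset can be coded as below (our illustration; the helper name is arbitrary):

```python
import numpy as np

VEGA_AB = {"V": -0.01, "B": -0.13, "U": 1.02}  # AB mag of Vega in UVOT bands

def u_uvot_from_jkc(u_jkc, b_jkc, to_ab=True):
    """Piecewise-linear U-band color transformation of Eq. (7), Vega
    system, optionally converted to AB via M_AB = M_Vega + M_AB(Vega)."""
    color = np.asarray(u_jkc) - np.asarray(b_jkc)
    delta = np.where(color <= 0.079,
                     0.189 * color - 0.054,
                     0.018 * color - 0.040)
    u_uvot = np.asarray(u_jkc) + delta
    return u_uvot + VEGA_AB["U"] if to_ab else u_uvot
```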
#### 3.2.2 Tycho-2 catalogue
In the Tycho-2 catalogue, there are two-color photometric data for the 2.5 million brightest stars (Hog et al., 2000). Page et al. (2013) studied the color transformation between Tycho-2 and UVOT, and used Tycho-2 sources to calibrate the readout streak method to increase the measurement range of UVOT in V and B bands. Hence, Tycho-2 sources are also used to calibrate the PSF method. The color transformation between Tycho-2 and UVOT is taken from Page et al. (2013):
\[\rm V_{UVOT}=V_{T}-0.032-0.073(B_{T}-V_{T}),\,B_{T}-V_{T}>0 \tag{8}\]
\[\rm B_{UVOT}=B_{T}+0.036-0.270(B_{T}-V_{T}),\,B_{T}-V_{T}>0.4\]
where \(\rm V_{T}\) and \(\rm B_{T}\) are magnitudes in the Tycho-2 system. Note that this transformation is also in the Vega system. RMSs are 0.006 and 0.009 for V and B bands, respectively.
Maiz-Apellaniz (2005) pointed out that the magnitudes recorded in the Tycho-2 catalogue are slightly brighter than the true values, by 0.058\(\pm\)0.009 mag for \(\rm V_{T}\) and 0.078\(\pm\)0.009 mag for \(\rm B_{T}\), which is also found in our sample: UVOT magnitudes converted from Tycho-2 sources are slightly brighter than those converted from GSPC sources, \(<\rm V_{UVOT}^{Tycho2}-V_{UVOT}^{GSPC}>=-0.0485\) and \(<\rm B_{UVOT}^{Tycho2}-B_{UVOT}^{GSPC}>=-0.0576\), where \(M_{\rm UVOT}^{\rm CAT}\) represents UVOT magnitudes converted from sources of the "CAT" catalogue. The bias can be significantly reduced to 0.008 and 0.015 mag for V and B bands if the Tycho-2 magnitudes are corrected with the values from Maiz-Apellaniz (2005) before the color transformation.
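A hedged sketch of the V-band Tycho-2 route, including the optional Maiz-Apellaniz (2005) pre-correction; applying the offsets by making the catalogue magnitudes fainter is our reading, consistent with the bias direction found in our sample:

```python
def v_uvot_from_tycho2(v_t, b_t, apply_offset=True):
    """Eq. (8) V-band transformation (Vega system, valid for B_T - V_T > 0)."""
    if apply_offset:  # Maiz-Apellaniz (2005) offsets (assumed sign: fainter)
        v_t = v_t + 0.058
        b_t = b_t + 0.078
    return v_t - 0.032 - 0.073 * (b_t - v_t)
```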
### Generation of calibration data
#### 3.3.1 Process of UVOT data
First, the UVOT observation catalogue _swiftwulog_ (up to March 2023) is searched with the parameters obs_segment=000, operation_mode=EVENT and asp_corr=Y. For the first criterion, obs_segment means the number of times a specific target has been observed (e.g., 000 usually represents the first exposure sequence). For the second criterion, we require the observation to contain event-mode data, which also keeps the data set from being too large; note that UVOT usually works under event mode only at the early stage of burst events like Gamma-Ray Bursts (typically several hundred seconds after the trigger), so there is image-mode data in the selected sample as well. For the last one, asp_corr indicates whether the aspect correction was applied to the data; in other words, it can be simply understood as whether the sky image is well aligned with the WCS. In total, 1851, 1411 and 1373 observations were selected for V, B and U bands, respectively.
Next, the sky and exposure images for each observation were downloaded and a Pre-Process (PP) algorithm was applied to them. There are 3 main purposes for the pre-process algorithm: 1) drop bad extensions, e.g. duplicate extension names and nonuniform exposure/elapsed time; 2) classify extensions by their operation modes and bin factors into 4 groups, i.e. event 1x1, event 2x2, image 1x1 and image 2x2; 3) stack all good extensions for a deep exposure of the observation (called the stacked PP image in the following).
#### 3.3.2 Process of Tycho-2 catalogue and GSPC data
The Tycho-2 and GSPC catalogues are cross-matched with the selected UVOT observation list with a search radius of 12 arcmin. For the Tycho-2 catalogue, there are 12041 and 6873 sources in V and B bands. For GSPC, there are 2094, 2002 and 1954 sources for V, B and U bands. Then, all sources are filtered against each extension of the corresponding PP images by 3 criteria:
1) Does the color of the source fall in the valid range for color transformation?
* \(-0.5<\rm B_{JKC}-V_{JKC}<2\) for GSPC V-band sources
* \(-0.5<\rm B_{JKC}-V_{JKC}<2\) for GSPC B-band sources
* \(-1.5<\rm U_{JKC}-B_{JKC}<2\) for GSPC U-band sources
* \(0<\rm B_{T}-V_{T}<2\) for Tycho-2 V-band sources
* \(0.4<\rm B_{T}-V_{T}<2\) for Tycho-2 B-band sources
2) Is the wing of the source intact in the field of UVOT?
3) Is the wing of the source exposed uniformly?
Meanwhile, for each source that passes the 3 criteria, 4 files are generated: photometry information for the source (including the converted \(M^{\rm AB}\) for saturated UVOT sources, used to generate the catalogue of filtered good stars), a ds9 region file, filtered images, and a "stack&phot" script for stacking and photometry. Finally, for the Tycho-2 catalogue, there are [1821, 166, 1079] and [1039, 27, 591] stars left for [event 1x1, image 1x1, image 2x2] modes in V and B bands. For GSPC, there are [419, 33, 249], [326, 2, 176] and [357, 251, 177] stars left for [event 1x1, image 1x1, image 2x2] modes in V, B and U bands. The last step is to measure \(\dot{N}_{\rm wing}\) for each saturated source. Figure 6 shows the flow chart for all steps mentioned in Sections 3.3.1 and 3.3.2.
#### 3.3.3 Measuring the source count rate of the wing
The "stack&phot" script will run UVOT FTOOLS to stack filtered images and make photometry with the standard photometric aperture of UVOT. LSS and SEN for each source are read from the fits table of photometry results. With the stacked PP image, sources in the field of the stacked filtered image are marked, so the property of the background could be estimated.
Since the wing extends up to 25 arcsec, there could be other sources in the wing. How to mask possible sources in the wing is an important problem, because a normal source detection algorithm cannot detect point sources in the wing efficiently. Hence, the pattern of the wing should be subtracted before the source detection. Images of about 30 isolated stars are stacked to derive the template of the Count Density Distribution (CDD, counts per unit area at a specific radius) of the wing for V, B and U bands. The 0th, 1st and 2nd orders of Chebyshev polynomials are used to fit the template of the CDD; for reference, the [0th, 1st, 2nd] coefficients are [2.193856, -0.070483, -0.000234], [0.965346, 0.064040, -0.001984] and [2.756965, -0.128014, 0.000472] for V, B and U bands, respectively. The CDD template is only a proper approximation for cases with \(\dot{N}^{\rm tot}_{\rm raw,\,wing}<\sim 60\) count/s. For cases with \(\dot{N}^{\rm tot}_{\rm raw,\,wing}>\sim 60\) count/s, the wing pattern would be over-subtracted, because the CDD depends on \(\dot{N}^{\rm tot}_{\rm raw,\,wing}\) due to the coincidence loss. Therefore, fitted CDDs of saturated sources cannot be used to calculate \(\dot{N}_{\rm wing}\) directly. Figure 2 shows the V-band stacked images and Figure 14 shows the fitted templates of the CDDs.
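For reference, the quoted Chebyshev coefficients can be evaluated as in the sketch below; note that the text does not state the fit domain of the polynomial argument, so using the radius directly as the argument is an assumption of this sketch.

```python
import numpy as np
from numpy.polynomial import chebyshev

# Fitted 0th-2nd order Chebyshev coefficients of the wing CDD templates.
CDD_COEFFS = {
    "V": (2.193856, -0.070483, -0.000234),
    "B": (0.965346, 0.064040, -0.001984),
    "U": (2.756965, -0.128014, 0.000472),
}

def cdd_template(radius, band):
    """Evaluate the wing CDD template at the given radii (the radius is
    assumed here to be the raw Chebyshev argument; the actual domain
    mapping is not quoted in the text)."""
    return chebyshev.chebval(radius, CDD_COEFFS[band])
```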
Once the pattern of the wing is subtracted, the sigma clip method is applied to the residual image to identify possible sources and generate a bad pixel map. Pixels rejected by the sigma clip method are marked as 1 and others as 0 in the bad pixel map. Then the bad pixel map is smoothed with a uniform kernel (\(3\times 3\), each element being 1/9), and only pixels with values greater than 0.5 (i.e., at least 5 rejected pixels within a \(3\times 3\) region) are marked as true sources, to avoid rejecting genuine extreme background values. Finally, a sector annulus with an opening angle of \(10^{\circ}\) and the same inner and outer radii as the wing is used to scan the entire wing region. If there is any rejected pixel in the sector annulus, the whole sector annulus is marked as a mask region, instead of the shape of the source itself. With Equations (1)-(4) and the mask map, \(\dot{N}_{\rm wing}\) can finally be measured. An example of the measurement is shown in Figure 7.
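The masking procedure described above could be realized roughly as in the following sketch (a minimal illustration of ours assuming `astropy` and `scipy`; edge cases such as nothing being clipped are not handled):

```python
import numpy as np
from astropy.stats import sigma_clip
from scipy.ndimage import uniform_filter

def wing_mask(residual, x0, y0, r_in=15.0, r_out=25.0,
              sector_deg=10.0, pixscale=0.502):
    """Sigma-clip the template-subtracted residual image, smooth the
    bad-pixel map with a 3x3 uniform kernel (threshold 0.5), and mask
    every 10-degree sector annulus that contains a flagged pixel."""
    bad = np.asarray(sigma_clip(residual, sigma=3).mask, dtype=float)
    sources = uniform_filter(bad, size=3) > 0.5  # >= 5 rejected pixels in 3x3

    yy, xx = np.indices(residual.shape)
    r = np.hypot(xx - x0, yy - y0) * pixscale                 # arcsec
    theta = np.degrees(np.arctan2(yy - y0, xx - x0)) % 360.0
    in_wing = (r >= r_in) & (r <= r_out)

    mask = np.zeros(residual.shape, dtype=bool)
    for start in np.arange(0.0, 360.0, sector_deg):
        sector = in_wing & (theta >= start) & (theta < start + sector_deg)
        if np.any(sources & sector):
            mask |= sector
    return mask  # True = masked; the unmasked wing is in_wing & ~mask
```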
### Calibrating photometric zeropoints of the PSF method
Before further analysis, all data with signal-to-noise ratio less than 10 are dropped. For GSPC data, \(M^{\rm AB,\,GSPC}_{\rm UVOT}\) has obvious lower and upper boundaries, but these do not affect the result of \(ZP^{\rm AB}_{\rm wing}\) much, because both the lower and upper boundaries are complete. For Tycho-2 data, by contrast, \(M^{\rm AB,\,Tycho2}_{\rm UVOT}\) has a valid but not obvious lower boundary, and \(\dot{N}_{\rm wing}\) reaches \(\sim 700\) count/s, which is far beyond the range over which the PSF method can be applied. Hence, for V band only Tycho-2 sources with \(15\,{\rm count/s}<\dot{N}_{\rm wing}<100\) count/s are fitted, and for B band those with \(25\,{\rm count/s}<\dot{N}_{\rm wing}<100\) count/s. In addition, because the uncertainties of GSPC data (\(\sim 0.005\) mag) are much smaller than those of Tycho-2 data (\(\sim 0.2\) mag for low \(\dot{N}_{\rm wing}\) and \(\sim 0.05\) mag for high \(\dot{N}_{\rm wing}\)) in V and B bands, additional scaling factors are applied to the GSPC data when calculating their cost, to ensure that Tycho-2 and GSPC sources have similar effects on the fitting; see Figure 8. To summarize, all filtered GSPC data and a part of the filtered Tycho-2 data were fitted with Equation (5).
Another important point is that there are outliers in the fitting, especially points significantly lower than most of the data. To pick up outliers, we first fitted all points with the least squares method and used the following criteria to select outlier candidates: residuals greater than 3 times the RMS of the residuals, or quantiles of cost values less than 10% (for points brighter than the fitted model) or greater than 80% (for points fainter than the fitted model) of all cost values. Images of all candidates were checked, and we found there is no physical reason to mark most bright candidates as outliers (only a few, 7 of 111 checked sources for Tycho-2 V-band event 1x1 data), while most faint candidates should be marked as outliers (\(\sim 100\) of 156 checked sources for Tycho-2 V-band event 1x1 data). Faint outliers are mainly marked for 2 reasons: 1) the source is actually a binary star unresolved by UVOT (but resolved in the Tycho-2 catalogue or GSPC); 2) the wing of the source is contaminated by a halo ring, diffraction spikes or other extended sources (Page et al., 2014). Hence, it is safe to mark these faint points as outliers. In addition, if the source is in a very crowded field or its sky coordinate is not accurate, it is marked as an outlier as well (e.g., most bright outliers). In total, [1241, 111, 673], [648, 11, 348] and [250, 213, 145] stars are finally fitted for [event 1x1, image 1x1, image 2x2] modes in V, B and U bands. Fitting results without the consideration of different operation modes and bin factors for images are shown in Figures 9-11. The calibrated \(ZP^{\rm AB}_{\rm wing}\) are summarized in Table 1. Operation modes and bin factors for images do not influence the photometries obtained by the PSF method. For the calibration of different operation modes and image bin factors, please refer to Appendix B.
The PSF method can only be applied within valid ranges of \(\dot{N}_{\rm wing}\). As mentioned above, for wings brighter than the upper boundary of the valid range, the COI effect from central photons on the wing becomes important and cannot be neglected for V and B bands. Figures 9 and 10 show that when \(\dot{N}_{\rm wing}>\sim 120\) count/s, the calibration data points deviate from the model; the reason is that Equations (1) and (2) cannot correct the COI effect from central photons properly, and as a result, their \(\dot{N}_{\rm wing}\) are smaller than the true values. The evidence can be identified in the image: as panel (b) in Figure 1 shows, the wing is influenced by the COI effect from central photons (the small white spot at the lower left part of the wing is exactly where it is contaminated by the COI pattern from central photons). The upper boundary for U band, in contrast, is limited by the brightest calibration sources. The lower boundary is set by the faintest calibration sources for all bands. For V band, \(10\,\)count/s \(<\dot{N}_{\rm wing}<100\,\)count/s. For B band, \(20\,\)count/s \(<\dot{N}_{\rm wing}<100\,\)count/s. The slightly higher lower thresholds used in the fits (\(15\,\)count/s and \(25\,\)count/s for V and B bands) are set to avoid the influence of the lower boundary of the fitted data set, hence the lower limits of the valid ranges could be slightly lower. For U band, \(12\,\)count/s \(<\dot{N}_{\rm wing}<40\,\)count/s. The residuals of the fitted data are divided into several bins according to their \(\dot{N}_{\rm wing}\), and the quantile values of \(68.26\%\), \(95.44\%\) and \(99.74\%\) of the absolute residuals are regarded as the 1-\(\sigma\), 2-\(\sigma\) and 3-\(\sigma\) uncertainties of \(ZP_{\rm wing}^{\rm AB}\) for each bin. The mean value of the 1-\(\sigma\) uncertainties of \(ZP_{\rm wing}^{\rm AB}\) over the bins (except bins that have fewer than 25 data points or are influenced by the lower/upper boundary of the fitted data set) is regarded as the 1-\(\sigma\) systematic uncertainty of \(ZP_{\rm wing}^{\rm AB}\) for the entire valid range of \(\dot{N}_{\rm wing}\) (\(\sim 0.2\) mag for V, B and U bands; please see Table 1 for exact numbers).
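The per-bin uncertainty estimate amounts to a simple quantile computation, e.g.:

```python
import numpy as np

def residual_sigmas(residuals):
    """1/2/3-sigma estimates from the quantiles of the absolute
    calibration residuals within one N_wing bin."""
    return np.quantile(np.abs(residuals), [0.6826, 0.9544, 0.9974])
```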
However, only \(\sim 30\%\) of the images of all fitted data were checked carefully, so there are still unmarked outliers in the fitted data set. In addition, most outliers lie below the fitted model, hence the fitted \(ZP_{\rm wing}^{\rm AB}\) is expected to be slightly fainter than the true value. Based on the fact that all fitted \(ZP_{\rm wing}^{\rm AB}\) became \(\sim 0.1\) mag brighter after removing outliers (especially those with extreme cost values) from the fitted data, we estimate the bias to be \(<\sim 0.1\) mag.
There is no special restriction in the PSF method regarding the position of the saturated source. For reference, the spatial distribution of the Tycho-2 sources used to calibrate \(ZP_{\rm V,\, wing}^{\rm AB,\,evt1x1}\) is plotted in Figure 12. However, one should make sure the wing is exposed uniformly: by checking the exposure images of some observations, we found a gradient trend of the elapsed time across the field of view (rare, but it does exist).
## 4 Demonstration
The PSF method is applied to restore the photometry of the famous "naked eye" GRB 080319B from its early, saturated UVOT observations in the V band. The ground-based telescope RAPTOR-T (Wozniak et al., 2009) observed GRB 080319B well at early times in the V band, while it was being observed by UVOT in event mode in the V band. The event file is screened into sub-exposures that match the exposure sequence of RAPTOR-T. Following the steps described in Section 3.3.3, the count rates of the wing \(\dot{N}_{\rm wing}\) are measured for each screened sub-exposure, and the corresponding magnitudes are calculated with Equation (5) using the V-band zeropoint of the PSF method (see Table 1). The PSF method is also applied to the 4 time bins screened by Page et al. (2013). Table 2 lists the measurements of the early V-band observations of GRB 080319B, and Figure 13 shows the comparison between the measurements of RAPTOR-T, the UVOT pipeline, the readout streak method, and the PSF method. It is clear that, before \(400\,\)s after the trigger, most RAPTOR-T points are higher than the PSF-method points by \(\sim 0.1\) mag, while after \(400\,\)s the RAPTOR-T points match well with the PSF-method points and the UVOT pipeline measurements. There are 3 possible explanations: 1) residual outliers in the fitted data set; 2) an intrinsic difference between the RAPTOR-T V and UVOT V bands; 3) UVOT measurements could be slightly fainter than the true values when the source is barely saturated in the V band. For a detailed explanation, please refer to Appendix C. In any case, the measurements of RAPTOR-T are consistent with those of the PSF method within the 1-\(\sigma\) uncertainty, and the \(ZP^{\rm AB}\) derived with our sample is indeed slightly underestimated at a level of \(<\sim 0.1\) mag; this can be remedied by using better saturated sources (e.g., isolated ones with spectra) for the calibration. Photometry restored by the readout streak method is slightly brighter, by \(\sim 0.1\) mag, but also consistent with the RAPTOR-T points within the 1-\(\sigma\) uncertainty. The uncertainties of the PSF method are found to be smaller than those of the readout streak method for barely saturated sources, hence the PSF method can be applied to images with shorter exposure times. However, the readout streak method can restore strongly saturated sources, to which the PSF method cannot be applied.
GRB 210702A has a bright optical counterpart, and 315 seconds after the BAT trigger the UVOT took a \(45\,\)s exposure in the U band. The preliminary U-band photometry reported by the _Swift_/UVOT team is 11.37\(\pm 0.05\) mag in the Vega system (Kuin et al., 2021). However, the U-band saturation magnitude is 11.91 mag (Vega), hence the optical transient was actually saturated in the 45 s U-band image. We used this source to test the PSF method. The measured \(\dot{N}_{\rm wing}\) is 33.75\(\pm 1.42\,\)count/s, and the corrected saturated photometry is \(12.34\pm 0.05\pm 0.17\) mag (AB), which corresponds to \(11.32\pm 0.18\) mag (Vega) and is consistent with the value \(11.37\pm 0.05\) given by the _Swift_/UVOT team. When measuring \(\dot{N}_{\rm wing}\), there is a faint smoke ring (Page et al., 2014) at the bottom right corner of the annulus region, and a sector annulus region with an opening angle of \(90^{\circ}\) was used to mask it.
## 5 Conclusion
Although the imaging principle of UVOT differs from that of usual optical telescopes with a CCD, e.g. HST, it is still possible to derive photon counts of saturated sources from the photon counts in the wing region of the PSF. However, as shown in Figure 1, the region dominated by coincidence loss is very large, hence the wing of the PSF defined in this paper is relatively large (an annulus ranging from 15 arcsec to 25 arcsec) to avoid the influence of the coincidence loss of the central core, which makes the measurement of \(\dot{N}_{\rm wing}\) challenging, especially when there are sources in the wing. However, by subtracting the wing pattern with the derived CRD template, sources in the wing region can be masked efficiently with the smoothed bad-pixel map generated by the sigma-clipping method, which makes measuring \(\dot{N}_{\rm wing}\) for a relatively large sample possible. Tycho-2 and GSPC sources are used to calibrate the photometric zeropoints of the PSF method. As a check, the calibration data set for each band is divided into 3 subsets by operation mode and image bin factor, to test whether these have any influence on the photometric zeropoints of the PSF method. As shown in Table 1, UVOT operation modes and image bin factors have no influence on the photometric zeropoints of the PSF method, hence using a single zeropoint for each band is reasonable.
For reference, the steps of the PSF method are summarized below; a minimal code sketch of these steps follows the list:
1. Make sure the wing is exposed uniformly by checking the exposure map.
2. If there is any source or artifact in the wing, use several small sector annuli overlapping all such sources as the mask; see the last panel in Figure 7 for a reference.
3. Measure the raw total count rate of the unmasked wing (\(\dot{N}_{\rm raw,\,wing}^{\rm tot,\,UM}\)) and the raw background rate density (CRD\({}_{\rm raw}^{\rm bkg}\)).
4. Calculate \(\dot{N}_{\rm wing}\) by correcting the raw count rates with Equations (1-4), applying the COI, EXT, LSS, and SEN factors.
5. If \(\dot{N}_{\rm wing}\) is in the valid range for the PSF method, calculate the AB magnitude of the saturated source with the zeropoint listed in the last column of Table 1 and Equation (5).
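As a minimal illustration of steps 3-5, the Python sketch below measures the wing count rate in an annulus and converts it to an AB magnitude. It assumes Equation (5) has the standard zeropoint form \(m_{\rm AB}=ZP_{\rm wing}^{\rm AB}-2.5\log_{10}\dot{N}_{\rm wing}\), takes the COI, EXT, LSS and SEN corrections of Equations (1-4) as precomputed multiplicative factors, and uses placeholder zeropoints rather than the calibrated values in Table 1.

```python
import numpy as np

# Placeholder numbers -- substitute the calibrated zeropoints of Table 1
# and the valid ranges quoted above for real use.
ZP_AB_WING = {"V": 13.0, "B": 13.5, "U": 13.2}             # assumed values
VALID_RANGE = {"V": (10.0, 100.0), "B": (20.0, 100.0), "U": (12.0, 40.0)}

def annulus_rate(image, exp_time, x0, y0, r_in, r_out, mask=None):
    """Step 3: raw count rate summed over the unmasked annulus pixels."""
    yy, xx = np.indices(image.shape)
    r = np.hypot(xx - x0, yy - y0)
    in_ann = (r >= r_in) & (r < r_out)
    if mask is not None:
        in_ann &= ~mask                        # drop sources/artifacts in the wing
    return image[in_ann].sum() / exp_time, in_ann.sum()

def psf_method_mag(image, exp_time, x0, y0, band, crd_bkg,
                   coi=1.0, ext=1.0, lss=1.0, sen=1.0,
                   pix_arcsec=0.502, mask=None):
    """Steps 3-5: wing count rate -> corrected rate -> AB magnitude.
    pix_arcsec = 0.502 assumes the unbinned UVOT plate scale; adjust for binning."""
    r_in, r_out = 15.0 / pix_arcsec, 25.0 / pix_arcsec     # wing annulus in pixels
    raw_rate, n_pix = annulus_rate(image, exp_time, x0, y0, r_in, r_out, mask)
    raw_rate -= crd_bkg * n_pix                # subtract raw background rate density
    n_wing = raw_rate * coi * ext * lss * sen  # Equations (1-4) as given factors
    lo, hi = VALID_RANGE[band]
    if not lo < n_wing < hi:
        raise ValueError(f"N_wing = {n_wing:.1f} count/s outside the valid range")
    return ZP_AB_WING[band] - 2.5 * np.log10(n_wing)       # Equation (5)
```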
For the V and B bands, the maximum corrected \(\dot{N}_{\rm wing}\) for the PSF method is \(\sim\)100 count/s, i.e. \(\sim\)1.7 mag brighter than the saturation limit, corresponding to \(\sim 9.78\) mag and \(\sim 10.87\) mag, respectively. However, owing to the lack of bright U-band sources, the maximum corrected \(\dot{N}_{\rm wing}\) for the U band is currently only \(\sim\)40 count/s, i.e. \(\sim\)0.7 mag brighter than the saturation limit, corresponding to \(\sim\)12.17 mag. There is ample room to extend this range for the U band, since the saturation limit is caused by the readout speed of the CCD, which has nothing to do with the filters; in other words, in principle, the PSF method can be applied to U-band sources with \(\dot{N}_{\rm wing}<\)100 count/s as well.
This work is based on color transformations between different filter systems, and there indeed exist several factors contributing to the systematic uncertainty and the bias of the PSF method. The typical systematic uncertainty induced by the measuring principle is \(\sim 0.1\) mag, while the large scatter of the UVOT magnitudes converted from other catalogues contributes most of the comprehensive/equivalent systematic uncertainty. Hence, well-observed sources with spectra could be used to calibrate the PSF method, which would reduce the uncertainty induced by the calibration data set and possibly resolve the difference found between directly measured UVOT magnitudes and converted values for nearly saturated UVOT sources (Appendix C).
## 6 Acknowledgement
This work was supported in part by NSFC under grants of No. 12225305, No. 11921003, No. 12233011, No. 11933010 and No. 12073080, the China Manned Space Project (NO.CMS-CSST-2021-A13), Major Science and Technology Project of Qinghai Province (2019-ZJ-A10), Key Research Program of Frontier Sciences (No. QYZDJ-SSW-SYS024). SC has been supported by ASI grant I/004/11/0.
This research made use of Photutils, an Astropy package for detection and photometry of astronomical sources (Bradley, 2023).
|
2303.04276 | Duality and Macdonald difference operators | This note summarizes certain properties common to Macdonald, Koornwinder and
Arthamonov-Shakirov $q$-difference operators, relating to the duality or
bi-spectrality properties of their eigenfunctions. This results in Pieri
operators which, in the $q$-Whittaker limit, are relativistic difference Toda
type Hamiltonians which have a related quantum cluster algebra structure known
as the quantum Q-system. The genus-2 result explained here is new. | Philippe Di Francesco, Rinat Kedem | 2023-03-07T22:55:46Z | http://arxiv.org/abs/2303.04276v1 | # Duality and Macdonald difference operators
###### Abstract
This note summarizes certain properties common to Macdonald, Koornwinder and Arthamonov-Shakirov \(q\)-difference operators, relating to the duality or bi-spectrality properties of their eigenfunctions. This results in Pieri operators which, in the \(q\)-Whittaker limit, are relativistic difference Toda type Hamiltonians which have a related quantum cluster algebra structure known as the quantum Q-system. The genus-2 result explained here is new.
## 1 Introduction
This contribution puts together certain results associated with the duality or bi-spectrality property which is common to the (genus 1) spherical double affine Hecke algebras (sDAHAs) of classical types and the recently introduced genus-2 DAHA [1]. The functional representation of the spherical DAHA contains a set of distinguished generators which are the (generalized) Macdonald/Koornwinder \(q\)-difference operators. In genus 1, Macdonald operators are naturally related to the affine root system of \(A_{N}^{(1)}\), while Koornwinder's operators are related to \(BC_{N}\) type affine root systems. In genus-2, only the rank-1 DAHA is defined [1], and we propose a candidate which takes the place of the affine root system in defining the difference operators and duality. The duality relates the Macdonald or Koornwinder \(q\)-difference operators with the Pieri rule operators, which are difference operators in the "weight variables" \(\lambda\).
In the \(q\)-Whittaker limit when \(t\to\infty\), we have shown [1] that in the genus-1 case, the Dehn twists of the distinguished generators of the sDAHAs are \(A\)-type |
2308.06713 | LAW-Diffusion: Complex Scene Generation by Diffusion with Layouts | Thanks to the rapid development of diffusion models, unprecedented progress
has been witnessed in image synthesis. Prior works mostly rely on pre-trained
linguistic models, but a text is often too abstract to properly specify all the
spatial properties of an image, e.g., the layout configuration of a scene,
leading to the sub-optimal results of complex scene generation. In this paper,
we achieve accurate complex scene generation by proposing a semantically
controllable Layout-AWare diffusion model, termed LAW-Diffusion. Distinct from
the previous Layout-to-Image generation (L2I) methods that only explore
category-aware relationships, LAW-Diffusion introduces a spatial dependency
parser to encode the location-aware semantic coherence across objects as a
layout embedding and produces a scene with perceptually harmonious object
styles and contextual relations. To be specific, we delicately instantiate each
object's regional semantics as an object region map and leverage a
location-aware cross-object attention module to capture the spatial
dependencies among those disentangled representations. We further propose an
adaptive guidance schedule for our layout guidance to mitigate the trade-off
between the regional semantic alignment and the texture fidelity of generated
objects. Moreover, LAW-Diffusion allows for instance reconfiguration while
maintaining the other regions in a synthesized image by introducing a
layout-aware latent grafting mechanism to recompose its local regional
semantics. To better verify the plausibility of generated scenes, we propose a
new evaluation metric for the L2I task, dubbed Scene Relation Score (SRS) to
measure how the images preserve the rational and harmonious relations among
contextual objects. Comprehensive experiments demonstrate that our
LAW-Diffusion yields the state-of-the-art generative performance, especially
with coherent object relations. | Binbin Yang, Yi Luo, Ziliang Chen, Guangrun Wang, Xiaodan Liang, Liang Lin | 2023-08-13T08:06:18Z | http://arxiv.org/abs/2308.06713v1 | # LAW-Diffusion: Complex Scene Generation by Diffusion with Layouts
###### Abstract
Thanks to the rapid development of diffusion models, unprecedented progress has been witnessed in image synthesis. Prior works mostly rely on pre-trained linguistic models, but a text is often too abstract to properly specify all the spatial properties of an image, e.g., the layout configuration of a scene, leading to the sub-optimal results of complex scene generation. In this paper, we achieve accurate complex scene generation by proposing a semantically controllable Layout-A Ware diffusion model, termed LAW-Diffusion. Distinct from the previous Layout-to-Image generation (L2I) methods that only explore category-aware relationships, LAW-Diffusion introduces a spatial dependency parser to encode the location-aware semantic coherence across objects as a layout embedding and produces a scene with perceptually harmonious object styles and contextual relations. To be specific, we delicately instantiate each object's regional semantics as an object region map and leverage a location-aware cross-object attention module to capture the spatial dependencies among those disentangled representations. We further propose an adaptive guidance schedule for our layout guidance to mitigate the trade-off between the regional semantic alignment and the texture fidelity of generated objects. Moreover, LAW-Diffusion allows for instance reconfiguration while maintaining the other regions in a synthesized image by introducing a layout-aware latent grafting mechanism to recompose its local regional semantics. To better verify the plausibility of generated scenes, we propose a new evaluation metric for the L2I task, dubbed Scene Relation Score (SRS) to measure how the images preserve the rational and harmonious relations among contextual objects. Comprehensive experiments on COCO-Stuff and Visual-Genome demonstrate that our LAW-Diffusion yields the state-of-the-art generative performance, especially with coherent object relations.
## 1 Introduction
Recently, astounding advances have been achieved in generative modeling due to the emergence of diffusion models [34, 13, 28, 42, 1, 6]. Despite the stunning generative performance in simple cases, _e.g._, single object synthesis, how to generate a complex scene composed of multiple visual concepts with their diverse relationships remains a challenging problem. A straightforward solution is to translate the scene into a text description and then resort to the state-of-the-art text-to-image (T2I) generative models [28, 6, 7, 31, 26]. However, text-to-image diffusion models, _e.g._, Stable Diffusion and its variants [28, 6, 7, 31, 26], fall short when it comes to the spatial composition of multiple objects in a scene. An underlying reason is that properly specifying all the spatial properties in an abstract sentence is laborious and less accurate, usually resulting in unsatisfactory generated results. In addition, the linguistic model used in T2I models is incapable of accurately capturing
Figure 1: Illustration of complex scene generation by Stable Diffusion [28] (text-to-image model) and our LAW-Diffusion (layout-to-image model). Stable Diffusion relies on linguistic model and generates an unsatisfactory scene: the boat on the water is missed and the generated building and mountain are placed with undesired spatial relation according to the input description. By contrast, LAW-Diffusion introduces a spatial dependency parser to encode the spatial semantic coherence and produces the scene image with consistent contextual relations adhere to the layout configuration.
spatial relations, providing only a coarse-grained linguistic understanding of the text description. An example is shown in Fig. 1, in which we extract a sentence description from a scene layout configuration and compare the generated results of Stable Diffusion [28] and our model. From the result generated by Stable Diffusion in Fig. 1(a), we can observe that the spatial properties are not well preserved (_e.g._, the generated mountain is beside the building while it should be behind it) and some desired objects are missing (_e.g._, the boat and its reflection). By contrast, our method generates the scene image by directly parsing the spatial dependencies in the layout configuration.
Layout-to-image generation (L2I) is an important task in controllable image synthesis, which takes a configuration of visual concepts (_i.e._, objects' bounding boxes with their class labels in a certain spatial layout) as input. The scene layout precisely specifies each object's size, location and its association with other objects. The key challenge for L2I lies in encoding the spatial dependencies among co-existing objects at each position, _i.e._, the location-aware semantic composition, which is vital for eliminating the artifacts of spurious edges between spatially adjacent or overlapping objects [11]. Existing studies on L2I are usually developed on top of generative adversarial networks (GANs) [9, 37, 11, 38, 44]. These methods render the realism of image contents with instance-specific style noises and discriminators, and thus suffer from a lack of overall harmony and style consistency among the things and stuff in the generated scene. A few attempts have been made to capture class-aware relationships in the generator by adopting an LSTM [44] or an attention mechanism [11]. Another family of approaches is based on transformers [16, 41], which reformulate the scene generation task as a sequence prediction problem by converting the input layout and target image into a list of object tokens and patch tokens. The transformer [40] is then employed to sequentially predict the image patches, which actually captures sequential dependencies rather than scene coherence. Recently, it has been demonstrated that generic T2I diffusion models, _e.g._, LDM [28] and Frido [6], can be extended to L2I by tokenizing the layout into a sentence-like sequence of object tokens and encoding them with a linguistic model, following their standard T2I paradigm. Such brute-force solutions share the shortcomings inherent to T2I diffusion models, _e.g._, the aforementioned object leakage and unawareness of spatial dependencies in Fig. 1(a). In fact, prior methods mainly exploit location-insensitive relationships while overlooking fine-grained location-aware cross-object associations.
To address the above issues, we present a novel diffusion model-based framework for L2I, termed _LAW-Diffusion_, for synthesizing complex scene images with mutually harmonious object relations. Unlike traditional L2I methods that treat each object separately, our LAW-Diffusion carefully learns a layout embedding with rich regional composition semantics to better exploit the holistic spatial information of objects. Concretely, we first instantiate each object's regional semantics as an object region map that encodes the class semantic information in its bounding box. Then, we split those region maps into fragments and propose a location-aware cross-object attention module to perform per-fragment multi-head attention with a learnable aggregation token, exploiting the location-aware composition semantics. By regrouping those aggregated fragments according to their original spatial locations, we obtain a layout embedding encapsulating both class-aware and location-aware dependencies. In this way, when synthesizing a local fragment of the image, the composed semantics faithfully specify whether objects may overlap at a given location. Inspired by the effectiveness of text-to-image diffusion models [26, 31, 24], we employ the form of classifier-free guidance [14] to amplify the regional control from our layout embedding. To avoid losing objects' texture details when leveraging a large guidance scale, we further propose an adaptive guidance schedule for the sampling stage of LAW-Diffusion to maintain both layout semantic alignment and the objects' texture fidelity. Furthermore, LAW-Diffusion allows for instance reconfiguration, _e.g._, adding/removing/restyling an instance in a generated scene via layout-aware latent grafting. Specifically, we spatially graft the region outside a bounding box from the diffusion latent of the already generated image onto the target latent guided by a new layout at the same noise level. By alternately recomposing the local regional semantics and denoising these grafted latents, LAW-Diffusion can reconfigure an instance in a synthesized scene image while keeping the other objects unchanged.
The existing evaluation metrics for the L2I task basically focus on measuring the fidelity of generated objects while ignoring the coherence among objects' relations in the scene context. Thus, we propose a new evaluation metric called Scene Relation Score (SRS) to measure whether the generated scenes preserve the rational and harmonious relations among contextual objects, which would facilitate the development of L2I research. We conduct both quantitative and qualitative experiments on Visual Genome [17] and COCO-Stuff [2], and the experimental results demonstrate that our LAW-Diffusion outperforms other L2I methods and achieves the new state-of-the-art generative performance, particularly in preserving reasonable and coherent object relations.
## 2 Related Work
**Diffusion Models** Diffusion models [34, 13, 21, 20, 32] have recently emerged as powerful image generators due to their
impressive generative performance. By training a noise estimator, the generative process of a diffusion model is formulated as iteratively denoising from image-level noise [13, 4]. With the introduction of the techniques of classifier guidance [4] and classifier-free guidance [14], diffusion models are able to incorporate different types of conditional information during the sampling stage. Most recent progress [1, 42, 6, 30, 8, 7, 31, 26] has been made in the field of text-to-image (T2I) generation owing to the prevalence of Stable Diffusion [28]. However, those T2I diffusion models always fall short when it comes to the complex spatial semantic composition of multiple objects in a scene. In this paper, we present a layout-aware diffusion model for complex scene image generation, mining the spatial dependencies among co-existing objects in the scene layout.
**Layout-to-Image Generation** Image generation from a layout configuration (L2I) is a specific task of conditional image generation whose input is a set of bounding boxes and class labels of the objects in a scene. It spares users the effort of formulating an accurate but complicated language description of a complex scene and provides a more flexible human-computer interface for scene generation. Layout2Im [44] generated objects' features from noises and class labels and fused them with an LSTM [15]. LostGAN [37] further introduced mask prediction as an intermediate step and proposed an instance-specific normalization to transform the object features. OC-GAN [38], Context-L2I [11] and LAMA [19] followed their training schemes and further improved the object representations and the quality of mask generation. Transformer-based methods [16, 41] convert the layout and image into object tokens and patch tokens, reformulating L2I as a sequence prediction task. Recently, T2I diffusion models [28, 6] have been extended to L2I by encoding the list of object tokens with a linguistic model and then treating it as a T2I task. However, prior approaches merely mine the category-aware relationships while overlooking location-aware cross-object associations. In this work, we present LAW-Diffusion, which explicitly encodes the location-aware semantic compositions of the visual concepts in the scene.
## 3 LAW-Diffusion
### Preliminaries
**Diffusion Models** Diffusion models are a type of likelihood-based generative model, consisting of a forward diffusion process and a backward denoising process. Formally, given an image sample \(x_{0}\sim q(x_{0})\), the forward process is defined as a Markov chain with Gaussian transitions:
\[q(x_{t}|x_{t-1})=\mathcal{N}(x_{t};\sqrt{\alpha_{t}}x_{t-1},(1-\alpha_{t}) \mathbf{I}), \tag{1}\]
where \(\{\alpha_{t}\in(0,1)\}_{t=1}^{T}\) is a decreasing sequence of the noise magnitudes in each step. From the properties of Gaussian noise and Markov chains, we can directly derive the transition from \(x_{0}\) to any latent variable \(x_{t}\):
\[q(x_{t}|x_{0})=\mathcal{N}(x_{t};\sqrt{\bar{\alpha}_{t}}x_{0},(1-\bar{\alpha}_ {t})\mathbf{I}), \tag{2}\]
where \(\bar{\alpha}_{t}=\prod_{s=1}^{t}\alpha_{s}\). By re-parameterization, \(x_{t}\) can be written as the weighted sum of \(x_{0}\) and a noise \(\epsilon\sim\mathcal{N}(\mathbf{0},\mathbf{I})\):
\[x_{t}=\sqrt{\bar{\alpha}_{t}}x_{0}+\sqrt{1-\bar{\alpha}_{t}}\epsilon. \tag{3}\]
A simple conclusion is that if the length of the Markov chain \(T\) is large enough, \(\bar{\alpha}_{T}\approx 0\) and \(x_{T}\) will approximately follow a standard Gaussian distribution \(\mathcal{N}(\mathbf{0},\mathbf{I})\).
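As a quick illustration of Eq. (3), the sketch below draws \(x_{t}\) directly from \(q(x_{t}|x_{0})\); the array `alpha_bar` holding \(\bar{\alpha}_{1},...,\bar{\alpha}_{T}\) is an assumed input.

```python
import torch

def q_sample(x0, t, alpha_bar):
    """Eq. (3): x_t = sqrt(abar_t) * x0 + sqrt(1 - abar_t) * eps, eps ~ N(0, I)."""
    eps = torch.randn_like(x0)
    ab = alpha_bar[t].view(-1, *([1] * (x0.dim() - 1)))  # broadcast over image dims
    return ab.sqrt() * x0 + (1.0 - ab).sqrt() * eps, eps
```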
Figure 2: An overview of LAW-Diffusion. Given an input layout \(\Gamma\), each object's region map \(v_{i}\) is generated as its regional semantics by filling its class embedding into the region specified by its bounding box. The object region maps are split into patches of region fragments. For the region fragments at location \(j\), the location-aware cross-object attention module is used to aggregate them as \(\mathcal{L}_{j}\) via multi-head attention. In this way, \(\mathcal{L}_{j}\) encodes the spatial dependencies among objects at this location. Furthermore, the layout embedding \(\mathcal{L}\) is obtained by collecting all aggregated fragments and used to control the generation of LAW-Diffusion with an adaptive guidance schedule: the guidance magnitude \(\omega_{t}\) gradually anneals from \(\omega_{\max}\) to \(\omega_{\min}\) during the denoising process. Best viewed in color.
The generative process of a diffusion model is defined as iterative denoising from the Gaussian prior, _i.e._, \(x_{T}\sim\mathcal{N}(\mathbf{0},\mathbf{I})\). Due to the intractability of the reverse transition \(q(x_{t-1}|x_{t})\), another Markov process parameterized by \(\theta\), _i.e._, \(p_{\theta}(x_{t-1}|x_{t})\), is learned to serve as its approximation and generate the denoised results \(\{x_{T},x_{T-1},...,x_{0}\}\):
\[p_{\theta}(x_{t-1}|x_{t})=\mathcal{N}(x_{t-1};\mu_{\theta}(x_{t},t),\Sigma_{ \theta}(x_{t},t)). \tag{4}\]
Denoising diffusion probabilistic models (DDPM) [13] reveal that \(\mu_{\theta}(x_{t},t)\) derives from a noise estimator \(\epsilon_{\theta}(x_{t},t)\):
\[\mu_{\theta}(x_{t},t)=\frac{1}{\sqrt{\alpha_{t}}}\left(x_{t}-\frac{1-\alpha_{ t}}{1-\tilde{\alpha}_{t}}\epsilon_{\theta}(x_{t},t)\right). \tag{5}\]
By optimizing the re-weighted variational lower bound (VLB) on \(\log p_{\theta}(x_{0})\)[13], the noise estimator \(\epsilon_{\theta}(x_{t},t)\) is trained to predict the noise \(\epsilon\) in Eq. (3), enabling diffusion models to produce image samples:
\[L_{\text{VLB}}(\theta)=\mathbb{E}_{t\sim[1,T],x_{0}\sim q(x_{0}),\epsilon\sim \mathcal{N}(\mathbf{0},\mathbf{I})}\left[\left\|\epsilon_{\theta}(x_{t},t)- \epsilon\right\|^{2}\right]. \tag{6}\]
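A compact sketch of this training objective, combining Eq. (3) and Eq. (6); `eps_model` is an assumed stand-in for the noise estimator \(\epsilon_{\theta}\), and `q_sample` is the forward step sketched above.

```python
import torch
import torch.nn.functional as F

def ddpm_loss(eps_model, x0, alpha_bar):
    """Eq. (6): sample a step t and a noise eps, form x_t via Eq. (3),
    and regress the predicted noise onto eps."""
    t = torch.randint(0, len(alpha_bar), (x0.shape[0],), device=x0.device)
    x_t, eps = q_sample(x0, t, alpha_bar)
    return F.mse_loss(eps_model(x_t, t), eps)
```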
**Conditional Diffusion Models** Classifier guidance [4] provides a way for diffusion models to achieve conditional generation by using the gradient of a separately trained classifier \(p(y|x_{t})\) during sampling. As a more efficient technique, classifier-free guidance [14, 24] replaces the noise estimator by a combination of a conditional and an unconditional model, without requiring \(p(y|x_{t})\):
\[\tilde{\epsilon}_{\theta}(x_{t},t|y)=\omega\epsilon_{\theta}(x_{t},t|y)+(1- \omega)\epsilon_{\theta}(x_{t},t|\emptyset), \tag{7}\]
where \(y\) is the class label or text embedding from a language model [24], \(\omega\geq 1\) denotes the guidance scale, and increasing \(\omega\) amplifies the effect of the conditional input.
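Eq. (7) in code form; `null_cond` stands for the learned null embedding \(\emptyset\), and all names are illustrative:

```python
def guided_eps(eps_model, x_t, t, cond, null_cond, w):
    """Classifier-free guidance, Eq. (7): extrapolate the conditional
    prediction away from the unconditional one by the scale w >= 1."""
    return w * eps_model(x_t, t, cond) + (1.0 - w) * eps_model(x_t, t, null_cond)
```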
With the help of the large-scale pre-trained CLIP [25] and other language models [31], diffusion models produce impressive results on text-to-image generation. However, their performance on complex scene generation is often unsatisfactory, because the text embeddings from the linguistic models cannot accurately capture spatial properties, _e.g._, objects' locations, sizes and their implicit spatial associations. Distinct from text prompts, we focus on the task of generating complex scene images from structured layout configurations (L2I) and propose a diffusion model-based method with flexibility and compositionality.
### Layout-aware Diffusion Model
In this section, we propose a Layout-AWare diffusion model (LAW-Diffusion) to parse the spatial dependencies among co-existing objects and generate photo-realistic scene images with regional semantic alignment. An overview of LAW-Diffusion is illustrated in Fig. 2, and we elaborate on the details below.
**Layout-to-Image Generation** Complex scene image synthesis from a layout configuration, also known as layout-to-image generation, aims to synthesize an image \(x\in\mathbb{R}^{H\times W\times 3}\) satisfying a layout configuration \(\Gamma\) consisting of \(N\) objects \(\mathcal{O}=\{o_{1},o_{2},...,o_{N}\}\). Each object \(o_{i}\) is equipped with its bounding box \(b_{i}=[r_{x}^{i},r_{y}^{i},h_{i},w_{i}]\) and category \(c_{i}\), where \((r_{x}^{i},r_{y}^{i})\) is the top-left coordinate and \((h_{i},w_{i})\) represents the object size.
**Spatial Dependency Parser** Unlike existing diffusion-based L2I solutions that depend on linguistic models [28, 6], LAW-Diffusion explores a distinctive way to explicitly harvest both location-aware and category-aware object dependencies in the compositional configurations via a spatial dependency parser. The parsing process is detailed below.
Aiming to condense each object's spatial localization and class information, we first instantiate the regional semantics of object \(o_{i}\) as an object region map \(v_{i}\in\mathbb{R}^{H\times W\times d_{c}}\), which shares the same spatial resolution as the image \(x\) for spatial location alignment. Concretely, the rectangular region in \(v_{i}\) specified by the bounding box \(b_{i}\) is filled with a learnable class embedding \(c_{i}\in\mathbb{R}^{d_{c}}\) (for brevity, the symbol \(c_{i}\) is reused here), while the area outside it is filled with a learnable background embedding \(c_{bg}\in\mathbb{R}^{d_{c}}\). Since the number of objects \(N\) varies across layout configurations, the set of region maps \(\{v_{i}\}_{i=1}^{N}\) is padded to \(\{v_{i}\}_{i=1}^{N_{\text{max}}}\) using a learnable null region map \(v_{\emptyset}\in\mathbb{R}^{H\times W\times d_{c}}\), where \(N_{\text{max}}\) denotes the maximum number of objects.
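A minimal sketch of this construction; tensor shapes and names are illustrative:

```python
import torch

def object_region_maps(boxes, labels, class_emb, bg_emb, H, W):
    """Build one region map v_i per object: the class embedding fills the
    bounding-box region, the background embedding fills everything else."""
    maps = []
    for (rx, ry, h, w), c in zip(boxes, labels):
        v = bg_emb.view(1, 1, -1).expand(H, W, -1).clone()  # (H, W, d_c)
        v[ry:ry + h, rx:rx + w] = class_emb[c]              # fill the box region
        maps.append(v)
    return torch.stack(maps)                                # (N, H, W, d_c)
```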
In order to fully exploit the spatial dependencies among objects at each position, we propose a location-aware cross-object attention module to aggregate those disentangled object region maps \(\{v_{i}\}_{i=1}^{N_{\text{max}}}\) by their location-aware semantic composition. We split each object region map \(v_{i}\) into \(N_{p}\) patches of region fragments \(\{v_{i}^{j}\}_{j=1}^{N_{p}}\), \(v_{i}^{j}\in\mathbb{R}^{P\times P\times d_{c}}\) and perform multi-head self-attention (MHSA) for the set of region fragments at the same location. Formally, for the position of the \(j^{th}\) patch, we formulate \(\{v_{i}^{j}\}_{i=1}^{N_{\text{max}}}\) as an unordered set of \(N_{\text{max}}\) objects' \(j^{th}\) region fragments and feed them into the stacked \(L\) multi-head attention [40] layers with a learnable aggregation token \(v_{\text{[Agg]}}\in\mathbb{R}^{P\times P\times d_{c}}\):
\[z_{j}^{0}=\text{concat}([v_{\text{[Agg]}},v_{1}^{j},v_{2}^{j},...,v_{N_{\text{max}}}^{j}]); \tag{8}\]
\[\tilde{z}_{j}^{l}=\text{MHSA}(\text{LN}(z_{j}^{l-1}))+z_{j}^{l-1},l=1,...,L; \tag{9}\]
\[z_{j}^{l}=\text{MLP}(\text{LN}(\tilde{z}_{j}^{l}))+\tilde{z}_{j}^{l},l=1,...,L; \tag{10}\]
\[\mathcal{L}_{j}=\text{LN}(z_{j}^{L})[0], \tag{11}\]
where \(\mathcal{L}_{j}\) is the composed regional semantics for the \(j^{th}\) patch. In this way, the per-fragment multi-head self-attention in Eq. (9) serves as a location-specific, permutation-equivariant interaction between different objects' representations. Furthermore, by regrouping \(\{\mathcal{L}_{j}\}_{j=1}^{N_{p}}\) according to their original spatial locations, we obtain the layout embedding \(\mathcal{L}\) with abundant spatial dependencies among objects.
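A sketch of the per-location aggregation in Eqs. (8)-(11); the transformer hyper-parameters and the flattening of each \(P\times P\times d_{c}\) fragment into a single vector are illustrative choices, not the paper's exact configuration:

```python
import torch
import torch.nn as nn

class FragmentAggregator(nn.Module):
    """Multi-head attention over the N_max region fragments at one location,
    read out at a learnable [Agg] token (Eqs. (8)-(11))."""
    def __init__(self, dim, heads=8, layers=2):
        super().__init__()
        self.agg = nn.Parameter(torch.zeros(1, 1, dim))        # v_[Agg]
        layer = nn.TransformerEncoderLayer(dim, heads, batch_first=True,
                                           norm_first=True)    # pre-LN, as in Eq. (9)
        self.encoder = nn.TransformerEncoder(layer, layers)

    def forward(self, frags):                  # frags: (B * N_p, N_max, dim)
        agg = self.agg.expand(frags.size(0), -1, -1)
        z = torch.cat([agg, frags], dim=1)     # Eq. (8): prepend the [Agg] token
        return self.encoder(z)[:, 0]           # Eq. (11): aggregated fragment L_j
```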
**Layout Guidance** To develop a diffusion model with flexible control, we train LAW-Diffusion with the classifier-free guidance [14, 24] from the learned layout embedding \(\mathcal{L}\), which contains regional composition semantics. Similar to Eq. (7), LAW-Diffusion learns a noise estimator \(\tilde{\epsilon}_{\theta}(x_{t},t|\mathcal{L})\) conditioned on the layout embedding \(\mathcal{L}\):
\[\tilde{\epsilon}_{\theta}(x_{t},t|\mathcal{L})=\omega\epsilon_{\theta}(x_{t},t |\mathcal{L})+(1-\omega)\epsilon_{\theta}(x_{t},t|\emptyset), \tag{12}\]
where \(\omega\geq 1\) denotes the magnitude of the layout guidance.
To exploit the spatial inductive bias of the image-level noise \(x_{T}\) introduced by diffusion models, we concatenate the noised latent code \(x_{t}\) and the layout embedding \(\mathcal{L}\) to align their spatial information, _i.e._, \(\text{concat}([x_{t},\mathcal{L}])\in\mathbb{R}^{H\times W\times(D+3)}\), and use it as the input of the conditional noise estimator in Eq. (12):
\[\epsilon_{\theta}(x_{t},t|\mathcal{L})=\epsilon_{\theta}(\text{concat}([x_{t},\mathcal{L}]),t), \tag{13}\]
where \(\epsilon_{\theta}\) is implemented as a U-Net [29] and \(t\) is encoded as a sinusoidal time embedding following [13].
To this end, the layout embedding \(\mathcal{L}\) encapsulates the location-aware semantic composition of the multiple visual concepts in the scene. By drawing on \(\mathcal{L}\) through classifier-free guidance, LAW-Diffusion is able to generate a scene image with accurate regional semantics adhering to the input layout and coherent object relations.
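Eq. (13) amounts to channel-wise concatenation before the U-Net; a sketch with `unet` as an assumed callable (channels-first layout):

```python
import torch

def layout_conditioned_eps(unet, x_t, t, L):
    """Eq. (13): concatenate x_t (B, 3, H, W) with the spatially aligned
    layout embedding L (B, D, H, W), then predict the noise."""
    return unet(torch.cat([x_t, L], dim=1), t)
```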
### Adaptive Guidance Schedule
As previously discussed, classifier-free guidance [24] provides an effective way to improve semantic control during the sampling stage. The vanilla classifier-free guidance uses a fixed guidance scale \(\omega\) in Eq. (12) for every denoising step \(t\) and has shown its effectiveness in a variety of application scenarios [24, 31, 26]. However, we empirically find that a fixed \(\omega\) results in a trade-off dilemma between layout semantic alignment and the photo-realism of the generated objects. As shown in Fig. 3, a fixed small guidance scale (\(\omega=1\)) offers insufficient semantic control, _e.g._, the cloud is missed, while a strong guidance (\(\omega=5\)) leads to an over-saturated image where the cloud and car have over-smooth textures. Based on these observations, we conclude that a large \(\omega\) provides precise semantic compliance with the layout \(\Gamma\), while a small \(\omega\) encourages photo-realistic object textures. Inspired by the human instinct of first conceiving the holistic semantics and then refining the details when drawing a picture, we propose an adaptive guidance schedule to mitigate this trade-off.
Specifically, our proposed adaptive guidance schedule is to gradually anneal the guidance magnitude \(\omega_{t}\) during the sampling process of LAW-Diffusion: the generation starts with an initially large guidance scale \(\omega_{T}=\omega_{\max}\) and it gradually anneals to a small magnitude \(\omega_{1}=\omega_{min}\) with the annealing function \(\phi(t)\) (\(t\) is decreasing from \(T\) to \(1\) in the sampling stage):
\[\omega_{t}=\omega_{\min}+\phi(t)(\omega_{\max}-\omega_{\min}). \tag{14}\]
For simplicity, here we specify \(\phi(t)\) as the cosine-form annealing, due to its concave property in the early denoising steps:
\[\omega_{t}=\omega_{\min}+\frac{1}{2}\left(1+\cos(\frac{T-t}{T}\pi)\right)( \omega_{\max}-\omega_{\min}). \tag{15}\]
In Fig. 3, it is evident that the adaptive guidance scale \(\omega_{t}\) annealing from \(\omega_{T}=5\) to \(\omega_{1}=1\) (denoted as \(\omega_{t}:5\diagdown 1\)) combines the benefits of the fixed guidance with \(\omega=5\) and with \(\omega=1\), thus enabling both accurate layout semantic alignment and the preservation of photo-realistic textures.
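The cosine-annealed schedule of Eq. (15) in code, with the paper's final choice \(\omega_{\max}=3\), \(\omega_{\min}=1\) as defaults:

```python
import math

def guidance_scale(t, T, w_min=1.0, w_max=3.0):
    """Eq. (15): w_T = w_max at the first denoising step, annealing towards
    w_1 ~= w_min as t decreases from T to 1."""
    return w_min + 0.5 * (1.0 + math.cos((T - t) / T * math.pi)) * (w_max - w_min)
```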
### Layout-aware Latent Grafting
To further explore the semantic controllability, we will showcase that LAW-Diffusion is capable of instance-level reconfiguration. Although LAW-Diffusion does not explicitly model each instance's style by an individual noise
Figure 3: Illustration of the generation processes from the same input layout \(\Gamma\) using different guidance scales. A fixed small scale \(\omega=1\) for each denoising step provides insufficient semantic control, and the cloud is missed in the first row. In the second row, using a fixed large scale \(\omega=5\) leads to over-saturation and distortion of object textures. In the third row, using the adaptive guidance scale \(\omega_{t}:5\diagdown 1\), which anneals from \(\omega_{T}=5\) to \(\omega_{1}=1\), maintains both semantic alignment and photo-realism. Best viewed in color.
like previous works [9, 37, 11, 38, 44], it allows for adding/removing/restyling an instance in the generated scene image by introducing a training-free layout-aware latent grafting mechanism. Fig. 4 illustrates the process.
Formally, suppose a scene image \(x_{0}\) has been synthesized from the layout configuration \(\Gamma\) through its layout embedding \(\mathcal{L}\); the process of instance reconfiguration can then be formulated as generating an image \(x_{0}^{*}\) from another configuration \(\Gamma^{*}\) with layout embedding \(\mathcal{L}^{*}\), in which an object \(o^{*}\) within a bounding box \(b^{*}\) is added/removed/restyled while the other objects in \(x_{0}\) are preserved. Inspired by the grafting technique used in horticulture [23, 22], which connects the tissue of one plant to another and makes them grow together, we aim to spatially graft the region outside \(b^{*}\) from the latents \(\{x_{t}\}_{t=1}^{T}\) guided by \(\mathcal{L}\) onto the target latents \(\{x_{t}^{*}\}_{t=1}^{T}\) guided by \(\mathcal{L}^{*}\) at the same noise level. The reconfiguration process alternates between grafting from \(x_{t}\) to \(x_{t}^{*}\) and denoising \(\hat{x}_{t}^{*}\) to \(x_{t-1}^{*}\):
\[\left\{\begin{array}{l}\hat{x}_{t}^{*}=x_{t}^{*}\odot M\oplus x_{t}\odot(1 -M),\\ x_{t-1}^{*}\sim p_{\theta}(x_{t-1}^{*}|\hat{x}_{t}^{*},\mathcal{L}^{*}),\end{array}\right. \tag{16}\]
where \(\odot\) and \(\oplus\) denote element-wise multiplication and addition, \(M\) denotes a rectangular mask indicating the region within the bounding box \(b^{*}\), \(\hat{x}_{t}^{*}\) is the grafted latent, \(p_{\theta}(x_{t-1}^{*}|\hat{x}_{t}^{*},\mathcal{L}^{*})\) denotes the layout-aware denoising process guided by \(\mathcal{L}^{*}\), and \(x_{T}^{*}\) is initialized as a Gaussian noise distinct from \(x_{T}\). Since \(x_{t}^{*}\) is guided by the holistic semantics of \(\mathcal{L}^{*}\) rather than only local control within \(b^{*}\), LAW-Diffusion is able to yield a reconfigured scene with coherent relations.
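A sketch of the grafting loop in Eq. (16); `denoise(x, t, L)` is an assumed callable returning a sample of \(p_{\theta}(x_{t-1}|x_{t},\mathcal{L})\), and `latents[t]` stores the \(x_{t}\) from the original generation:

```python
import torch

@torch.no_grad()
def regraft(denoise, latents, M, L_star, T, x_T_star=None):
    """Eq. (16): alternately graft the region outside the box from x_t onto
    x_t^*, then denoise the grafted latent under the new layout embedding L*."""
    x = torch.randn_like(latents[T]) if x_T_star is None else x_T_star
    for t in range(T, 0, -1):
        x = x * M + latents[t] * (1 - M)  # keep the box region, copy the rest
        x = denoise(x, t, L_star)         # x_{t-1}^* ~ p_theta(. | x_hat_t^*, L*)
    return x
```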
## 4 Experiments
### Experimental Settings
**Datasets** Following existing works on layout-to-image generation, our experiments are conducted on two benchmarks: COCO-Stuff [2] and Visual Genome (VG) [17]. COCO-Stuff is an extension of the well-known MS-COCO dataset with 80 _thing_ classes and 91 _stuff_ classes. Following [36, 44, 11], objects covering less than 2% of the image are disregarded and images with 3 to 8 objects are used (\(N_{\max}=8\)), leaving 74,777 training and 3,097 validation images for COCO-Stuff. Different from COCO-Stuff, Visual Genome is a dataset specifically designed for complex scene understanding and provides object bounding boxes, object attributes, and relationships. Each image in VG contains 3 to 30 objects from 178 categories. Consistent with prior studies [19, 36], small and infrequent objects are removed, resulting in 62,565 training and 5,062 validation images for VG.
**Implementation Details** Following [13, 4], we use \(T=1000\) and the noise magnitudes \(\{\alpha_{t}\}_{t=1}^{T}\) of the diffusion process are set to linearly decrease from \(\alpha_{1}=1-10^{-4}\) to \(\alpha_{T}=0.98\). Our LAW-Diffusion is trained by jointly optimizing the spatial dependency parser that generates the layout embedding \(\mathcal{L}\), and the noise estimator \(\tilde{\epsilon}_{\theta}(x_{t},t|\mathcal{L})\) using the VLB loss defined in Eq. (6). We use the same diffusion training strategies and U-Net architectures as ADM [4]. Regarding the generation of layout embedding \(\mathcal{L}\), we set the dimension of class embedding to \(d_{c}=32\) and the patch size of region fragments to \(P=8\). Then a two-layer MHSA with 8 attention heads is implemented as the fragment aggregation function in Eq. (9). Following [14, 31], we implement the conditional model \(\epsilon_{\theta}(x_{t},t|\mathcal{L})\) and unconditional model \(\epsilon_{\theta}(x_{t},t|\emptyset)\) in Eq. (12) as a single conditional model with \(10\%\) probability of replacing the conditional input \(\mathcal{L}\) by a learnable null embedding \(\emptyset\). Due to the quadratic increase in computational overhead with the size of input images, directly generating \(256\times 256\) images can be prohibitively expensive. Hence, following [5, 28], we utilize a VQ-VAE to downsample \(256\times 256\) images to \(64\times 64\), and perform our LAW-Diffusion in the compressed latent space. For the \(64\times 64\) and \(128\times 128\) images, we maintain the diffusion training on image pixels. Regarding the hyper-parameters of our adaptive guidance in Eq. (15), we choose \(\omega_{\max}=3\) and \(\omega_{\min}=1\). Please refer to our supplementary materials for more implementation details.
**Evaluation Metrics** To comprehensively evaluate the performance of LAW-Diffusion, we adopt six metrics for quantitative comparison: Inception Score (IS) [33], Frechet Inception Distance (FID) [12], Classification Accuracy Score (CAS) [27], Diversity Score (DS) [43], YOLO Score [19] and our proposed Scene Relation Score (SRS). IS assesses the overall quality of images based on the Inception model [39] pre-trained on ImageNet [3]. FID measures the distribution distance between the synthesized images and the real ones. CAS measures the discriminative ability of generated objects and whether they can be used to train a good classifier. A ResNet [10] is trained on the objects cropped from generated images (5 image samples are generated for each layout following [37])
Figure 4: Illustration of our layout-aware latent grafting mechanism for instance reconfiguration (adding an object is taken as an example). Given an image \(x_{0}\) generated from the layout \(\Gamma\), reconfigured \(x_{0}^{*}\) is obtained by alternately grafting the region outside the object bounding box from \(x_{t}\) to \(x_{t}^{*}\) (\(\hat{x}_{t}^{*}\) is produced), and denoising the grafted latent \(\hat{x}_{t}^{*}\) to \(x_{t-1}^{*}\) with the guidance of a reconfigured layout \(\Gamma^{*}\). Mask \(M\) indicates the region within the bounding box.
and the classification accuracy on the real objects is reported as the CAS. DS reflects the diversity of the generated samples. The YOLO score evaluates the localization alignment between the generated objects and the input bounding boxes.
**Scene Relation Score** Here we propose the Scene Relation Score (**SRS**) as a new metric for L2I that evaluates the rationality and plausibility of the object relations in the generated image. A competent scene generator should implicitly capture the relationships among objects, and the correct relations should be recoverable from the synthesized images. Since objects' bounding boxes and labels are available, we use the predicate classification (PredCls) results predicted by a state-of-the-art scene graph generator to measure whether the correct relationships are captured by the image generator. Specifically, we resort to a publicly available scene graph generator, _i.e._, VCTree-EB [35] pre-trained on Visual Genome, and report the mean Recall@K (mR@K) as our Scene Relation Score (SRS).
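For concreteness, a hedged sketch of the macro-averaged mR@K computation underlying SRS, independent of any particular scene graph model (the triple format is illustrative):

```python
from collections import defaultdict

def mean_recall_at_k(scored_triples, gt_triples, k):
    """mR@K: per-predicate recall of ground-truth (subject, predicate, object)
    triples among the top-K score-ranked predictions, macro-averaged over
    predicate classes."""
    topk = set(scored_triples[:k])            # triples assumed sorted by score
    hits, totals = defaultdict(int), defaultdict(int)
    for s, p, o in gt_triples:
        totals[p] += 1
        hits[p] += (s, p, o) in topk
    recalls = [hits[p] / totals[p] for p in totals]
    return sum(recalls) / len(recalls) if recalls else 0.0
```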
### Quantitative and Qualitative Comparisons
We compare our LAW-Diffusion with the state-of-the-art L2I methods, _i.e._, Layout2Im [44], OC-GAN [38], Context-L2I [11], LostGAN-V2 [37], LAMA [19], TwFA [41], LDM [28] and Frido [6]. Tab. 1 reports the quantitative comparisons for different sizes of images, in terms of FID, Inception Score (IS), Diversity Score (DS) and Classification Accuracy Score (CAS).
Figure 5: Examples of the \(256\times 256\) images generated by different layout-to-image methods on COCO-Stuff [2] and Visual Genome [17]. The first row shows the visualizations of layout configurations and the sampled images in the same column share a common input layout.
Besides, Tab. 2 provides the YOLO score and the Scene Relation Score (SRS) of the different methods. For fairness, we report the performance of the compared methods from their original papers.
With regard to image fidelity, LAW-Diffusion significantly outperforms the existing L2I methods, achieving new state-of-the-art performance. In particular, we observe large improvements in the FID and IS scores on both COCO and VG. The noticeable improvement on the challenging CAS further verifies the photo-realism of the objects generated by LAW-Diffusion, such that they can be used to train a discriminative model. The comparison of SRS in Tab. 2 shows that LAW-Diffusion is capable of synthesizing plausible scene images by capturing the relationships among objects.
Qualitative comparisons on COCO-Stuff and Visual Genome can be observed in Fig. 5, where samples synthesized by different models from identical layouts are presented. LAW-Diffusion produces perceptually appealing images with clear texture details and coherent scene relationships. Moreover, the images generated by our method faithfully comply with the spatial configurations, even in the case of a large number of objects.
### Instance-level Reconfiguration
As presented in Sec. 3.4, a trained LAW-Diffusion offers flexible instance-level controllability, including the ability to add/remove/restyle an instance in the generated scene while preserving the other contents. An example of these three types of reconfiguration is given in Fig. 6. The
\begin{table}
\begin{tabular}{c|c|c c|c c|c c|c c} \hline \multirow{2}{*}{Resolutions} & \multirow{2}{*}{Methods} & \multicolumn{2}{c|}{FID \(\downarrow\)} & \multicolumn{2}{c|}{Inception Score \(\uparrow\)} & \multicolumn{2}{c|}{Diversity Score \(\uparrow\)} & \multicolumn{2}{c}{CAS \(\uparrow\)} \\ & & COCO & VG & COCO & VG & COCO & VG & COCO & VG \\ \hline \multirow{6}{*}{\(64\times 64\)} & Real Images & - & - & 16.30\(\pm\)0.40 & 13.90\(\pm\)0.50 & - & - & - & - \\ & Layout2Im [44] & 38.14 & 31.25 & - & - & 0.15\(\pm\)0.06 & 0.17\(\pm\)0.09 & 27.32 & 23.25 \\ & OC-GAN [38] & 29.57 & 20.27 & 10.80\(\pm\)0.50 & 9.3\(\pm\)0.20 & - & - & - & - \\ & Context-L2I [11] & 31.32 & 33.91 & 10.27\(\pm\)0.25 & 8.53\(\pm\)0.13 & 0.39\(\pm\)0.09 & 0.40\(\pm\)0.09 & - & - \\ & LAMA [19] & 19.76 & 18.11 & - & - & 0.37\(\pm\)0.10 & 0.37\(\pm\)0.09 & 33.23 & 30.70 \\ & LAW-Diffusion & **17.14** & **16.44** & **14.81\(\pm\)0.23** & **12.64\(\pm\)0.32** & **0.45\(\pm\)0.10** & **0.46\(\pm\)0.10** & **35.29** & **33.46** \\ \hline \multirow{6}{*}{\(128\times 128\)} & Real Images & - & - & 22.30\(\pm\)0.50 & 20.50\(\pm\)1.50 & - & - & - & - \\ & LostGAN-V2 [37] & 24.76 & 29.00 & 14.20\(\pm\)0.40 & 10.71\(\pm\)0.27 & 0.45\(\pm\)0.09 & 0.42\(\pm\)0.09 & 31.98 & 29.35 \\ & OC-GAN [38] & 36.31 & 28.26 & 14.60\(\pm\)0.40 & 12.30\(\pm\)0.40 & - & - & - & - \\ & Context-L2I [11] & 22.32 & 21.78 & 15.62\(\pm\)0.05 & 12.69\(\pm\)0.45 & 0.55\(\pm\)0.09 & 0.54\(\pm\)0.09 & - & - \\ & LAMA [19] & 23.85 & 23.02 & - & - & 0.46\(\pm\)0.09 & 0.47\(\pm\)0.09 & 34.15 & 32.81 \\ & LAW-Diffusion & **20.36** & **15.44** & **19.89\(\pm\)0.48** & **18.13\(\pm\)0.44** & **0.58\(\pm\)0.09** & **0.55\(\pm\)0.08** & **36.80** & **35.22** \\ \hline \multirow{8}{*}{\(256\times 256\)} & Real Images & - & - & 28.10\(\pm\)1.60 & 28.60\(\pm\)1.20 & - & - & - & - \\ & LostGAN-V2 [37] & 42.55 & 47.62 & 18.01\(\pm\)0.50 & 14.10\(\pm\)0.38 & 0.55\(\pm\)0.09 & 0.53\(\pm\)0.09 & 30.33 & 28.81 \\ & OC-GAN [38] & 41.65 & 40.85 & 17.80\(\pm\)0.20 & 14.70\(\pm\)0.20 & - & - & - & - \\ & LAMA [19] & 31.12 & 31.63 & - & - & 0.48\(\pm\)0.11 & 0.54\(\pm\)0.09 & 30.52 & 31.75 \\ & LDM\({}^{\dagger}\) [28] & 40.91 & - & - & - & - & - & - & - \\ & Frido\({}^{\dagger}\) [6] & 21.67 & 17.24 & - & - & - & - & - & - \\ & TwFA [41] & 22.15 & 17.74 & 24.25\(\pm\)1.04 & 25.13\(\pm\)0.66 & **0.67\(\pm\)0.00** & 0.64\(\pm\)0.00 & - & - \\ & LAW-Diffusion & **19.02** & **15.23** & **26.41\(\pm\)0.96** & **27.62\(\pm\)0.67** & 0.63\(\pm\)0.09 & **0.64\(\pm\)0.09** & **37.79** & **36.82** \\ \hline \end{tabular}
\end{table}
Table 1: Quantitative results on COCO-stuff [2] and Visual Genome (VG) [17]. The models denoted by \(\dagger^{\uparrow}\) are fine-tuned from the ones trained on a significantly larger dataset, Open-Image [18]. \(\cdot^{*}\) indicates the results are not provided in their papers.
\begin{table}
\begin{tabular}{c|c|c c} \hline \multirow{2}{*}{Resolutions} & \multirow{2}{*}{Methods} & \multicolumn{2}{c|}{YOLO score \(\uparrow\)} & \multicolumn{2}{c}{Scene Relation Score (SRS) \(\uparrow\)} \\ & & \multicolumn{2}{c|}{AP / AP\({}_{50}\) / AP\({}_{75}\)} & \multicolumn{2}{c}{mR@20 / mR@50 / mR@100} \\ \hline \multirow{6}{*}{\(128\times 128\)} & Real Images & 33.11\(\pm\)0.77 & 36.95 & 0.162\(\pm\)0.1820 /1.0821 \\ & LostGAN-V2 [37] & 5.59\(\pm\)9.2 & 5.8 & 0.124\(\pm\)0.1307/0.1295 \\ & LAMA [19] & 7.9 / 12.0 / 8.9 & 0.1294\(\pm\)0.1482 /0.1489 \\ & LAW-Diffusion & **14.12\(\pm\)0.76** & **0.143\(\pm\)0.163** & **0.1631** \\ \hline \multirow{6}{*}{\(256\times 256\)} & Real Images & 42.9\(\pm\)0.62 & 48.2 & 0.170\(\pm\)0.1977 /0.1932 \\ & LostGAN-V2 [37] & 9.11\(\pm\)15.9 & 9.1241\(\pm\)0.1307/0.1295 \\ & LAMA [19] & 13.47\(\pm\)17.149 & 0.1260\(\pm\)0.1321 /0.1313 \\ & Frido\({}^{\dagger}\) [6] & -30.4 / - & 0.1375\(\pm\)0.1535 /0.1578 \\ & TwFA [41] & -28.2 / 20.1 & 0.1407\(\pm\)0.1474 /0.1487 \\ & LAW-Diffusion & **21.5\(\pm\)342** / **234** & **0.1485\(\pm\)0.1742** / **0.1750** \\ \hline \end{tabular}
\end{table}
Table 2: Comparisons of YOLO score and SRS.
\begin{table}
Table 3: Ablation study on VG \(128\times 128\): IS and SRS for LAW-Diffusion and LAW-Diffusion (w/o loc) under different guidance strategies.
\end{table}
reconfigured images look plausible and preserve the coherence of the scene well, verifying the effectiveness of our proposed layout-aware latent grafting mechanism.
### Ablation Study
To verify the effectiveness of the proposed location-aware cross-object attention and adaptive guidance schedule, we conduct ablation experiments on VG \(128\times 128\) in Tab. 3. We first introduce a baseline variant of LAW-Diffusion, dubbed LAW-Diffusion (w/o loc), which replaces the location-aware attention with the location-agnostic but class-aware attention used in prior works [11, 41]. Specifically, we use MHSA layers similar to those in Sec. 3.2 to augment each object's class embedding \(c_{i}\) with only contextual class-aware information. The transformed object representations are then filled into their bounding boxes and aggregated into the layout embedding using average pooling. In this way, the baseline captures only class-aware relationships, and this embedding is likewise used to guide the generation of the diffusion model. Moreover, Tab. 3 shows the results of LAW-Diffusion and LAW-Diffusion (w/o loc) with different guidance strategies. For example, \(\omega_{t}:3\diagdown 1\) means LAW-Diffusion with the guidance scale cosine-annealed from \(\omega_{\max}=3\) to \(\omega_{\min}=1\), and \(\omega=3\) (w/o loc) denotes LAW-Diffusion (w/o loc) with a fixed guidance scale \(\omega=3\). Similarly, \(\omega_{t}:1\diagup 3\) and \(\omega_{t}:1\diagup 5\) denote increasing guidance scales.
Comparing the IS and SRS of the LAW-Diffusion and LAW-Diffusion (w/o loc) variants in Tab. 3, we conclude that our location-aware cross-object attention improves both the generation fidelity and the ability to capture reasonable relations among objects. Besides, it is clear that our adaptive guidance schedule improves the IS scores of the generated images. Considering both image fidelity and the rationality of object relations, we select LAW-Diffusion with the cosine-annealed guidance \(\omega_{t}:3\diagdown 1\) as our final model. Please refer to our supplementary materials for more ablation studies and human evaluations.
## 5 Conclusion
In this paper, we present a semantically controllable Layout-AWare diffusion model, termed LAW-Diffusion, to generate complex scenes from compositional layout configurations. Specifically, we propose a location-aware cross-object attention module to learn a layout embedding encoding the spatial dependencies among objects. Further, an adaptive guidance schedule is introduced for the layout guidance to maintain both layout semantic alignment and the objects' texture fidelity. Moreover, we propose a layout-aware latent grafting mechanism for instance reconfiguration in the generated scene. Furthermore, a new evaluation metric for L2I, dubbed Scene Relation Score (SRS), is proposed to measure how well the images preserve rational relations. Extensive experiments show that our method yields state-of-the-art generative performance, especially with coherent object relations.
**Limitation and future work** Regarding the limitations of our work, we focus only on the Layout-to-Image generation task, whose object categories are pre-defined, fixed, and closed-world. Additionally, the current version cannot specify scene-level style and semantics with a global scene description. In the future, we aim to combine our LAW-Diffusion with RegionCLIP [45] to achieve open-vocabulary L2I generation, where the objects generated in the scene can belong to arbitrary novel categories and both object-level and scene-level fine-grained semantic controls can be achieved.
## 6 Acknowledgement
This work was supported in part by National Key R&D Program of China under Grant No.2021ZD0111601, National Natural Science Foundation of China (NSFC) under Grant No.61836012, U1811463, U21A20470, 62006255, 61876224, 62206314, GuangDong Basic and Applied Basic Research Foundation under Grant No.2017A030312006, 2022A1515011835, 2023A1515011374 (GuangDong Province Key Laboratory of Information Security Technology)
Figure 6: An example of instance-level reconfiguration by LAW-Diffusion. Three types of reconfiguration are shown in this figure (adding/removing/restyling a person in the generated image). Plausible results are obtained using layout-aware latent grafting. |
2305.06937 | Distance sets bounds for polyhedral norms via effective dimension | We prove that, for every norm on $\mathbb{R}^d$ and every $E \subseteq
\mathbb{R}^d$, the Hausdorff dimension of the distance set of $E$ with respect
to that norm is at least $\dim_{\mathrm{H}} E - (d-1)$. An explicit
construction follows, demonstrating that this bound is sharp for every
polyhedral norm on $\mathbb{R}^d$. The techniques of algorithmic complexity
theory underlie both the computations and the construction. | Iqra Altaf, Ryan Bushling, Bobby Wilson | 2023-05-11T16:14:38Z | http://arxiv.org/abs/2305.06937v2 | # Distance sets bounds for polyhedral norms
###### Abstract.
We prove that, for every norm on \(\mathbb{R}^{d}\) and every \(E\subseteq\mathbb{R}^{d}\), the Hausdorff dimension of the distance set of \(E\) with respect to that norm is at least \(\dim_{\mathrm{H}}E-(d-1)\). An explicit construction follows, demonstrating that this bound is sharp for every polyhedral norm on \(\mathbb{R}^{d}\). The techniques of algorithmic complexity theory underlie both the computations and the construction.
2020 Mathematics Subject Classification: 28A80, 03D32 B. Wilson was supported by NSF grant, DMS 1856124, and NSF CAREER Fellowship, DMS 2142064.
## 1. Introduction
Given a set \(E\subseteq\mathbb{R}^{d}\), let
\[\Delta(E):=\{\|x-y\|\in[0,\infty):\ x,y\in E\}\]
be the _distance set_ of \(E\), where \(\|\cdot\|\) is the Euclidean norm. The _Falconer distance problem_ is to find a lower bound on \(\dim_{\mathrm{H}}\Delta(E)\), the Hausdorff dimension of the distance set, in terms of the Hausdorff dimension of \(E\) itself. A frequent simplification is to fix a point \(x\in\mathbb{R}^{d}\) and instead investigate the _pinned distance set_
\[\Delta_{x}(E):=\{\|x-y\|\in[0,\infty):\ y\in E\}.\]
Since \(\Delta_{x}(E)\subseteq\Delta(E)\) whenever \(x\in E\), any lower bound on the size of \(\Delta_{x}(E)\) immediately implies the same bound on the size of \(\Delta(E)\). An interesting variant of the Falconer distance problem is to seek dimension bounds on distance sets relative to a different norm \(\|\cdot\|_{*}\). In this case, the corresponding distance and pinned distance sets are denoted \(\Delta^{*}(E)\) and \(\Delta^{*}_{x}(E)\), respectively.
Falconer [4] proves a simple estimate in the Euclidean case:
**Theorem 1.1**: **(Falconer [4]).** _If \(E\subseteq\mathbb{R}^{d}\) is any set, then_
\[\dim_{\mathrm{H}}\Delta(E)\geq\dim_{\mathrm{H}}E-(d-1).\]
The elementary proof presented by Falconer generalizes to other norms with little modification. However, the techniques of algorithmic complexity provided in [8] are highly applicable here, and we use this theory to give an alternate proof of the general case.
**Theorem 1.2**.: _Let \(\|\cdot\|_{*}\) be any norm on \(\mathbb{R}^{d}\) and let \(E\subseteq\mathbb{R}^{d}\). Then_
\[\dim_{\mathrm{H}}\Delta_{x}^{*}(E)\geq\dim_{\mathrm{H}}E-(d-1)\]
_for all \(x\in\mathbb{R}^{d}\)._
Polyhedral norms provide a class of finite-dimensional normed spaces which can be considered the "worst-case scenario" for the distance set problem, as the curvature of the norm ball plays a major role in precluding the possibility of a large number of points with equal pairwise distances. The relevance of nonvanishing curvature manifests itself in [7]: the authors provide the best known bound in the planar Euclidean case, and it is demonstrated that the essential characteristic of the Euclidean ball required for their argument is that it is a smooth norm ball with strictly positive curvature. A construction of Falconer [5] illustrates the marked differences in the context of polyhedral norms.
**Proposition 1.3** (Falconer [5]).: _Let \(\|\cdot\|_{P}\) be a polyhedral norm on \(\mathbb{R}^{d}\). Then there exists a compact set \(F\subset\mathbb{R}^{d}\) with \(\dim_{\mathrm{H}}F=d\) such that \(\mathcal{L}(\Delta^{P}(F))=0\)._
The relevance of curvature to the distance set conjecture is further explored in [1], in which Falconer's construction is generalized to the case in which at most countably many points of the surface of the unit ball do not lie on any open line segment contained in the surface of the unit ball (all but finitely many points on the surface of a polyhedral ball satisfy this condition) or for certain convex sets for which the measure representing the curvature field is singular with respect to surface measure. For more state-of-the-art estimates and information on the history of the Euclidean distance set conjecture, see [3].
The aforementioned results all concern sets of full Hausdorff dimension. Our second result shows that polyhedral norms witness the sharpness of the dimension bound in Theorem 1.2 in a more quantitative way. This, too, will be proven using complexity theoretic tools.
**Theorem 1.4**.: _Let \(\|\cdot\|_{P}\) be a polyhedral norm on \(\mathbb{R}^{d}\). For any \(s\in[d-1,d]\), there exists a compact set \(E\subseteq\mathbb{R}^{d}\) with \(\dim_{\mathrm{H}}E=s\) such that_
\[\dim_{\mathrm{H}}\Delta_{x}^{P}(E)=s-(d-1)\]
_for some point \(x\in E\)._
Following a brief synopsis of the background material, we prove Theorem 1.2 in §3. In §4 we construct the examples described in Theorem 1.4.
## 2. Preliminaries
This section summarizes the requisite terminology and results from algorithmic complexity theory. See [8] for a more thorough exposition of this material. We denote by
\[\{0,1\}^{*}:=\bigcup_{n\in\mathbb{N}}\{0,1\}^{n}\]
the set of all (finite) binary strings, including the empty string \(\lambda\in\{0,1\}^{0}\).
**Definition 2.1**: **(Kolmogorov Complexity of a String).** _Let \(\sigma,\tau\in\{0,1\}^{*}\). The conditional Kolmogorov complexity of \(\sigma\) given \(\tau\) is the length of the shortest program \(\pi\) that will output \(\sigma\) given \(\tau\). More precisely,_
\[K(\sigma|\tau):=\min_{\pi\in\{0,1\}^{*}}\{\ell(\pi):\ U(\pi,\tau)=\sigma\},\]
_where \(U\) is a fixed universal prefix-free Turing machine and \(\ell(\pi)\) is the length of \(\pi\). The Kolmogorov complexity of \(\sigma\) is simply the conditional Kolmogorov complexity of \(\sigma\) given the empty string:_
\[K(\sigma):=K(\sigma|\lambda),\]
_where \(\lambda\) is the empty string._
Throughout, we work with a fixed _encoding_\(\{0,1\}^{*}\to\bigcup_{d\in\mathbb{N}}\mathbb{Q}^{d}\), under which the definitions above extend from strings to vectors over the rationals.
**Definition 2.2**: **(Kolmogorov Complexity of a Point).** _Let \(x\in\mathbb{R}^{d}\) and \(r\in\mathbb{Z}_{+}\). The Kolmogorov complexity of \(x\) at precision \(r\) is the length of the shortest program that outputs a point in \(\mathbb{Q}^{d}\) that approximates \(x\) to \(r\) bits of precision:_
\[K_{r}(x):=\min\{K(p):\ p\in B(x,2^{-r})\cap\mathbb{Q}^{d}\}.\]
_If also \(y\in\mathbb{R}^{d^{\prime}}\) and \(s\in\mathbb{Z}_{+}\), then the conditional Kolmogorov complexity of \(x\) at precision \(r\) given \(y\) at precision \(s\) is defined by_
\[K_{r,s}(x|y):=\max\big{\{}\min\{K(p|q):p\in B(x,2^{-r})\cap\mathbb{Q}^{d} \}:q\in B(y,2^{-s})\cap\mathbb{Q}^{d^{\prime}}\big{\}}.\]
_Note that, for \(x\in\mathbb{R}^{d}\), \(K_{r}(x)\) is always less than or equal to \(dr+O(\log r)\). To simplify notation, we also denote \(K_{r,r}(x|y)\) by \(K_{r}(x|y)\) and, when \(r\in(0,\infty)\), \(K_{\lceil r\rceil}(x)\) by \(K_{r}(x)\)._
**Lemma 2.1**: **(Symmetry of Information [9]).** _For all \(m,n\in\mathbb{N}\), \(x\in\mathbb{R}^{m}\), \(y\in\mathbb{R}^{n}\), and \(r,s\in\mathbb{N}\) with \(r\geq s\):_
* \(\big{|}K_{r}(x|y)+K_{r}(y)-K_{r}(x,y)\big{|}\leq O(\log r)+O(\log\log\|y\|)\)_._
* \(\big{|}K_{r,s}(x|x)+K_{s}(x)-K_{r}(x)\big{|}\leq O(\log r)+O(\log\log\|x\|)\)_._
Using the notion of Kolmogorov complexity, one defines the concept of effective dimension as the asymptotic Kolmogorov complexity of a given point.
**Definition 2.3**: _The effective Hausdorff dimension of a point \(x\in\mathbb{R}^{d}\) is given by_
\[\dim(x):=\liminf_{r\to\infty}\frac{K_{r}(x)}{r}.\]
_In computability theory, a set \(A\subseteq\mathbb{N}\) is called an oracle. Heuristically, a Turing machine \(T\) can be said to have access to \(A\) if, in addition to its usual operations, it can inquire whether the number currently printed on the work tape belongs to \(A\). This operation takes one step, and the answer to the question is recorded as a state. The machine derived from \(T\) by allowing it access to \(A\) is denoted \(T^{A}\)._
The concepts of complexity and dimension defined above can be _relativized to an oracle_\(A\) by replacing the fixed universal Turing machine \(U\) with \(U^{A}\). These relativized concepts are denoted \(K^{A}\), \(\dim^{A}\), etc. For us, the utility of oracles is that they allow us to compute the Hausdorff dimension of a set from the effective dimensions the points it contains.
**Theorem 2.2**: **(Point-to-Set Principle [8]).** _For every set \(E\subseteq\mathbb{R}^{d}\),_
\[\dim_{\mathrm{H}}E=\min_{A\subseteq\mathbb{N}}\sup_{x\in E}\dim^{A}(x).\]
One can infer from the proof of the point-to-set principle that, for any oracle \(A\) and \(s\in(0,d]\), the set of points \(x\) that satisfy \(\dim^{A}(x)<s\) has \(s\)-dimensional Hausdorff measure zero. In particular, \(\dim^{A}(x)=d\) for all \(x\) outside of a Lebesgue-null set.
As in [10], we describe the complexity of a point \(x\in\mathbb{R}^{d}\) relative to a point \(y\in\mathbb{R}^{d^{\prime}}\) as the complexity of \(x\) relative to an oracle set \(A_{y}\) that encodes the binary expansion of \(y\) in a standard way. We define
\[K_{r}^{y}(x):=K_{r}^{A_{y}}(x).\]
Throughout the paper, we will often invoke the notion of "randomness". A point \(x\in\mathbb{R}^{d}\) is _random_ with respect to an oracle \(A\) if
\[\left|K_{r}^{A}(x)-rd\right|\leq O(\log r).\]
This also implies that the coordinates of \(x\) are random with respect to each other, and that each has effective dimension \(1\) relative to the oracle \(A\). More quantitatively, for \(t<r\), the symmetry of information (Lemma 2.1) implies that
\[rd+O(\log r)=K_{r}(x) =K_{r,t}(x)+K_{t}(x)+O(\log r)\] \[\leq K_{r,t}(x)+td+O(\log r),\]
whence
\[\left|K_{r,t}(x)-(r-t)d\right|\leq O(\log r).\]
This means that the first \(t\) digits of the binary expansion of \(x\) "do not help much" in calculating the remaining \(r-t\) digits, and that we would need approximately \((r-t)d\) bits to calculate them. In the proof of Theorem 1.4, we will have occasion to choose collections of strings of digits that are random with respect to each other, and we can see from this discussion that such a choice is always possible.
## 3. Distance set bounds for arbitrary norms
_Proof of Theorem 1.2._ The unit sphere in the norm \(\|\cdot\|_{*}\) has Hausdorff dimension \(d-1\). Therefore, by the point-to-set principle, there exists an oracle \(A\subseteq\mathbb{N}\) such that
\[\sup_{z\in\partial B_{*}(0,1)}\dim^{A}(z)=d-1.\]
For any \(y\in E\), if we write \(y=x+\|x-y\|_{*}z\) for some \(z\in\partial B_{*}(0,1)\), then the following holds: For any oracle \(B\subseteq\mathbb{N}\),
\[\begin{split}K_{r}^{A,B,x}(y)&\leq K_{r}^{A,B,x}(y-x)+O(\log r)\\ &=K_{r}^{A,B,x}(\|x-y\|_{*}z)+O(\log r)\\ &\leq K_{r}^{A,B,x}(\|x-y\|_{*})+K_{r}^{A,B,x}(z)+O(\log r)\\ &\leq K_{r}^{A,B,x}(\|x-y\|_{*})+(d-1)r+O(\log r).\end{split}\]
In particular, for every \(\varepsilon>0\), the point-to-set principle gives a point \(y\in E\) such that
\[\dim_{\mathrm{H}}E\leq\dim^{A,B,x}(y)+\varepsilon.\]
Dividing the complexity bound above by \(r\) and taking the limit inferior as \(r\to\infty\) gives \(\dim^{A,B,x}(y)\leq\dim^{A,B,x}(\|x-y\|_{*})+(d-1)\). Combining this with the previous inequality, for every \(\varepsilon>0\) there exists a point \(t=\|x-y\|_{*}\in\Delta_{x}^{*}(E)\) such that
\[\dim_{\mathrm{H}}E-(d-1)-\varepsilon\leq\dim^{A,B,x}(t).\]
Taking the minimum over all oracles and letting \(\varepsilon\to 0\) then gives the desired inequality. \(\square\)
## 4. Sharp examples
In this section we construct the set \(E\subset\mathbb{R}^{d}\) proving Theorem 1.4. Fundamentally, the construction adapts Example 7.8 of [6], which in turn works on the same principles as the "Venetian blinds" constructions that have been studied widely in geometric measure theory for the extreme features they exhibit, e.g., in the context of orthogonal projections; see [2]. As such, the use of complexity in _building_ our set \(E\) is secondary to its use in actually _proving_ that it has the asserted properties.
_Proof of Theorem 1.4._ Let \(s\in[d-1,d]\) and \(\alpha:=s-(d-1)\). Since \(\|\cdot\|_{P}\) is a polyhedral norm, there exists a finite set \(\{v^{1},...,v^{N}\}\subset\mathbb{R}^{d}\) of vectors--one for each face of the \(\|\cdot\|_{P}\)-ball--such that
\[\|x\|_{P}=\max\{|x\cdot v^{\ell}|\}_{\ell=1}^{N} \tag{4.1}\]
for all \(x\in\mathbb{R}^{d}\). Let \(c\) be a large constant to be specified later, and let \(\{m_{k}\}_{k=1}^{\infty}\) be a sequence of positive integers satisfying \(m_{1}=1\), \(2c<m_{2}\), and, for all \(k\in\mathbb{Z}_{+}\),
\[km_{k}\leq m_{k+1}. \tag{4.2}\]
We also define a sequence \(\{n_{k}\}_{k=1}^{\infty}\) by
\[n_{k}:=m_{k}+\alpha(m_{k+1}-m_{k}).\]
Note that, since \(\alpha\in[0,1]\) and \(m_{2}>2c\), we have \(n_{k}+c<m_{k+1}-c\) for all \(k\).
We construct \(E\subset[0,1]^{d}\) in blocks of \(N\) steps. Define
\[\begin{array}{rl}F_{k}:=&\Big{\{}\ x\in[0,1]^{d}:\ \lfloor 2^{j}(x\cdot v^{ \ell})\rfloor\equiv 0\ (\text{mod}\ 2)\ \text{for all}\\ &j\in(n_{k}+c,m_{k+1}-c],\ \text{where}\ \ \ell\equiv k\ (\text{mod}\ N)\ \Big{\}}\end{array} \tag{4.3}\]
and
\[E:=\bigcap_{k=1}^{\infty}F_{k}.\]
These definitions prescribe that the binary expansions of the points in \(E\) alternate between long strings of free digits and long strings of zeros, with the lengths of the strings rapidly approaching infinity. When a real number has two different binary expansions, we associate to it the expansion terminating in an infinite string of ones. This makes the sets \(F_{k}\), and hence \(E\), closed.
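To make the digit-block structure concrete, the following toy sketch (ours, not part of the proof) generates the free-versus-forced digit pattern of one coordinate in the simplest case of the \(\ell^{\infty}\) norm, where the vectors \(v^{\ell}\) can be taken to be standard basis vectors so that condition (4.3) acts on individual coordinates. It omits the careful carry bookkeeping of step (ii) below, and the parameters are arbitrary illustrative choices.

```python
import random

def sample_digits(alpha: float, K: int = 4, c: int = 3) -> str:
    """Toy illustration of the digit-block structure of one coordinate:
    free (random) digits outside the forced block, zeros in the places
    (n_k + c, m_{k+1} - c]. Omits step (ii) of the construction."""
    m = [1]
    for k in range(1, K + 1):
        # enforce rapid growth, cf. Eq. (4.2), with room for the forced block
        m.append(max(k * m[-1], m[-1] + 4 * c + 4))
    digits = []
    for k in range(K):
        n_k = m[k] + int(alpha * (m[k + 1] - m[k]))
        for j in range(m[k], m[k + 1]):
            if n_k + c < j <= m[k + 1] - c:
                digits.append("0")                   # forced-zero block
            else:
                digits.append(random.choice("01"))   # free (random) block
    return "".join(digits)

print(sample_digits(alpha=0.5))
```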
**Claim:** For every oracle \(A\subseteq\mathbb{N}\), there exists \(x\in E\) such that
\[\liminf_{r\to\infty}\frac{K_{r}^{A}(x)}{r}\geq s \tag{4.4}\]
and, for all \(t\in\Delta_{x}^{P}(E)\),
\[\liminf_{r\to\infty}\frac{K_{r}^{A}(t)}{r}\leq\alpha. \tag{4.5}\]
By the definition of \(\alpha\), this will imply that \(\dim_{\rm H}\Delta_{x}^{P}(E)\leq s-(d-1)\leq\dim_{\rm H}E-(d-1)\), and an application of Theorem 1.2 will entail that these hold with equality.
To obtain this point \(x\) and ensure that it belongs to \(E\), we iteratively select the digits in the binary expansions of its coordinates. In particular, assuming we have already chosen the digits in the places before the \(m_{k}\)th place, the digits in the places \([m_{k},m_{k+1})\) will be chosen in two steps. For each \(\ell=1,...,N\), fix any index \(1\leq i_{\ell}\leq d\) such that the \(i_{\ell}\)th coordinate of \(v^{\ell}\) is nonzero: \(v_{i_{\ell}}^{\ell}\neq 0\). Let \(\ell\equiv k\ (\text{mod}\ N)\), \(1\leq\ell\leq N\).
1. We define the digits in the binary expansion of \(x_{i}\) in the places \([m_{k},m_{k+1})\), for \(i\neq i_{\ell}\) and in the places \([m_{k},n_{k})\), for \(i=i_{\ell}\) such that the corresponding \(d\) strings (of which \(d-1\) have length \(m_{k+1}-m_{k}\) and one has length \(n_{k}-m_{k}\)) are random with respect to each other, the oracle \(A\) and the first \(m_{k}\) digits of \(x\).
2. We want to define the digits in the binary expansion of \(x_{i_{\ell}}\) in the places \([n_{k},m_{k+1})\) such that \[\left\lfloor 2^{j}\left(\sum_{i=1}^{d}2^{-m_{k+1}}\lfloor 2^{m_{k+1}}x_{i}\rfloor v_{i}^{\ell}\right)\right\rfloor\equiv\begin{cases}0&\text{if }j\in(n_{k}+c,m_{k+1}-c],\\ 1&\text{if }j=m_{k+1}-c+1,\\ 0&\text{if }j=m_{k+1}-c+2,\end{cases}\] (4.6) where the equivalence holds modulo \(2\). Before showing that this is possible, we prove that Equation (4.6) implies \[\left\lfloor 2^{j}(x\cdot v^{\ell})\right\rfloor\equiv 0\ (\text{mod}\ 2)\quad\text{for every}\quad j\in(n_{k}+c,m_{k+1}-c),\] (4.7) provided \(c\) is chosen large enough. It will then follow from (4.3) that the point \(x\) satisfying (4.6) belongs to \(F_{k}\). In order to derive Equation (4.7) from Equation (4.6), we first observe that \[x\cdot v^{\ell}=\sum_{i=1}^{d}2^{-m_{k+1}}\lfloor 2^{m_{k+1}}x_{i}\rfloor v_{i}^{\ell}+\sum_{i=1}^{d}\big{(}x_{i}-2^{-m_{k+1}}\lfloor 2^{m_{k+1}}x_{i}\rfloor\big{)}v_{i}^{\ell}.\] (4.8)
Notice that, if (4.6) holds, then for some \(M\in\mathbb{Z}\) and \(|t|\leq\frac{1}{4}\), we have
\[\sum_{i=1}^{d}2^{-m_{k+1}}\lfloor 2^{m_{k+1}}x_{i}\rfloor v_{i}^{\ell}=2^{-n_{k}-c }M+2^{-m_{k+1}+c}\big{(}\tfrac{1}{2}+t\big{)}. \tag{4.9}\]
Choose \(c\) large enough such that
\[\max\left(\sum_{i=1}^{d}|v_{i}^{\ell}|,\,(v_{i_{\ell}}^{\ell})^{-1}\right)\leq 2 ^{c-3}\quad\text{for all}\quad\ell, \tag{4.10}\]
from which it follows that
\[\begin{split}\left|\sum_{i=1}^{d}(x_{i}-2^{-m_{k+1}}\lfloor 2^{m_{ k+1}}x_{i}\rfloor)v_{i}^{\ell}\right|&\leq\sum_{i=1}^{d}\big{|}x_{ i}-2^{-m_{k+1}}\lfloor 2^{m_{k+1}}x_{i}\rfloor\big{|}|v_{i}^{\ell}|\\ &\leq 2^{-m_{k+1}}\sum_{i=1}^{d}|v_{i}^{\ell}|\\ &\leq 2^{-m_{k+1}+c-3}.\end{split} \tag{4.11}\]
Combining (4.8), (4.9), and (4.11), we see that
\[2^{j}(x\cdot v^{\ell})=2^{j-n_{k}-c}M+2^{j-(m_{k+1}-c)}\big{(}\tfrac{1}{2}+t+t ^{\prime}\big{)},\]
where \(|t^{\prime}|\leq\frac{1}{8}\). Hence, for \(j\in(n_{k}+c,m_{k+1}-c]\), the second term above lies in \((0,1)\):
\[0<2^{j-(m_{k+1}-c)}\big{(}\tfrac{1}{2}+t+t^{\prime}\big{)}<1.\]
Therefore, we have shown that Equation (4.6) implies
\[\left\lfloor 2^{j}\left(\sum_{i=1}^{d}2^{-m_{k+1}}\lfloor 2^{m_{k+1}}x_{i} \rfloor v_{i}^{\ell}\right)\right\rfloor=\big{\lfloor}2^{j}(x\cdot v^{\ell}) \big{\rfloor} \tag{4.12}\]
for \(j\in(n_{k}+c,m_{k+1}-c]\), which immediately entails Equation (4.7).
It remains to see that we can choose the digits of \(x_{i_{\ell}}\) in the places \([n_{k},m_{k+1})\) such that (4.6) holds. The condition (4.6) determines the digits of \(x_{i_{\ell}}\cdot v_{i_{\ell}}^{\ell}\) in places \([n_{k},m_{k+1}-c+2)\), and we see from (4.10) that
\[2^{-c+3}\leq|v_{i_{\ell}}^{\ell}|\leq 2^{c-3}.\]
Therefore, we can indeed make such a choice for the digits of \(x_{i_{\ell}}\).
Repeating this process for all \(k\in\mathbb{Z}_{+}\) produces the desired point \(x\in E\).
Now we prove that \(\dim^{A}(x)=s\). Let \(r\in[m_{k},m_{k+1}]\) and \(k\equiv\ell\ (\text{mod}\ N)\). We estimate \(K_{r}^{A}(x)\) in two cases.
**Case 1: \(r\in(m_{k},n_{k}]\)**
By symmetry of information, we have
\[\begin{split} K_{r}^{A}(x)&=K_{r,m_{k}}^{A}(x)+K_{m _{k}}^{A}(x)+o(r)\\ &\geq(r-m_{k})d+K_{m_{k}}^{A}(x)-o(r).\end{split} \tag{4.13}\]
The inequality holds because the digits of \(x\) in the \(m_{k}\)th through \(r\)th places are chosen randomly with respect to \(A\) and the digits in the places \(0\) through \(m_{k}\).
**Case 2: \(r\in(n_{k},m_{k+1}]\)**
By the symmetry of information, we have
\[K^{A}_{r}(x)=K^{A}_{r,n_{k}}(x)+K^{A}_{n_{k}}(x)-o(r).\]
It follows from (4.13) of Case 1 that
\[K^{A}_{n_{k}}(x) \geq(n_{k}-m_{k})d+K^{A}_{m_{k}}(x)-o(r)\] \[=\alpha(m_{k+1}-m_{k})d+K^{A}_{m_{k}}(x)-o(r).\]
Thus we have
\[\begin{split} K^{A}_{r}(x)&\geq K^{A}_{r,n_{k}}(x) +\alpha(m_{k+1}-m_{k})d+K^{A}_{m_{k}}(x)-o(r)\\ &\geq K^{A}_{r,n_{k}}(x_{1},...,x_{i_{\ell}-1},x_{i_{\ell}+1},...,x_{d} )+\alpha(m_{k+1}-m_{k})d+K^{A}_{m_{k}}(x)-o(r)\\ &\geq(r-n_{k})(d-1)+\alpha(m_{k+1}-m_{k})d+K^{A}_{m_{k}}(x)-o(r) \\ &=(r-m_{k})(d-1)+\alpha(m_{k+1}-m_{k})+K^{A}_{m_{k}}(x)-o(r).\end{split} \tag{4.14}\]
The third inequality above also follows from randomness. We put \(r=m_{k+1}\) in (4.14) to get
\[\begin{split} K^{A}_{m_{k+1}}(x)&\geq(m_{k+1}-m_{k})s +K^{A}_{m_{k}}(x)-o(r)\\ &\geq m_{k+1}\left(1-\frac{1}{k}\right)s+K^{A}_{m_{k}}(x)-o(r),\end{split}\]
where the second inequality follows from (4.2). Thus, for any arbitrarily small \(t>0\), we have
\[\frac{K^{A}_{m_{k+1}}(x)}{m_{k+1}}\geq s-t\quad\text{for large enough $k$}.\]
We will now justify that the expression \(K^{A}_{r}(x)/r\) is minimized when \(r=m_{k}\) for some \(k\). By (4.13) of Case 1,
\[\begin{split}\frac{K^{A}_{r}(x)}{r}&\geq\left(1- \frac{m_{k}}{r}\right)d+\frac{K^{A}_{m_{k}}(x)}{r}-\frac{o(r)}{r}\\ &=\left(1-\frac{m_{k}}{r}\right)d+\frac{K^{A}_{m_{k}}(x)}{m_{k}} \left(\frac{m_{k}}{r}\right)-\frac{o(r)}{r}\\ &\geq\left(1-\frac{m_{k}}{r}\right)d+(s-t)\frac{m_{k}}{r}-\frac{ o(r)}{r}\\ &=d-(1-\alpha+t)\frac{m_{k}}{r}-\frac{o(r)}{r}\end{split} \tag{4.15}\]
and similarly, by (4.14), we have in Case 2
\[\begin{split}\frac{K^{A}_{r}(x)}{r}&\geq\left(1- \frac{m_{k}}{r}\right)(d-1)+\frac{\alpha(m_{k+1}-m_{k})}{r}+\frac{K^{A}_{m_{k}}(x)}{r}-\frac{o(r)}{r}\\ &\geq d-1+\alpha\frac{m_{k+1}}{r}-t\frac{m_{k}}{r}-\frac{o(r)}{r}.\end{split} \tag{4.16}\]
Recalling that \(\frac{m_{k}}{r}\leq 1\) and \(\frac{m_{k+1}}{r}\geq 1\), we conclude (4.4) from (4.15) and (4.16); since this holds for every oracle \(A\), the point-to-set principle yields \(\dim_{\rm H}E\geq s\).
Now let \(z=\|x-y\|_{P}\in\Delta_{x}^{P}(E)\), where \(x,y\in E\). Then \(z=|(x-y)\cdot v^{\ell}|\) for some \(\ell\), and for a given \(k\equiv\ell\ (\mathrm{mod}\ N)\), we have
\[\lfloor 2^{j}z\rfloor\equiv 0\ (\mathrm{mod}\ 2)\quad\text{for all}\quad j\in[n_{k}+c,m_{k+1}-c]\]
or
\[\lfloor 2^{j}z\rfloor\equiv 1\ (\mathrm{mod}\ 2)\quad\text{for all}\quad j\in[n_{k}+c,m_{k+1}-c].\]
Therefore,
\[K^{A}_{m_{k+1}-c,n_{k}+c}(z)\leq o(m_{k+1}).\]
This is because specifying a constant string of \(0\)s or \(1\)s having length less than \(m_{k+1}\) requires a program of length at most \(O(\log m_{k+1})\). We consequently have
\[\begin{split}K^{A}_{m_{k+1}-c}(z)&\leq K^{A}_{m_{k+1}-c,n_{k}+c}(z)+K^{A}_{n_{k}}(z)+o(m_{k+1})\\ &\leq n_{k}+o(m_{k+1})\\ &\leq m_{k+1}\left(\alpha+\frac{1}{k}\right)+o(m_{k+1}),\end{split}\]
where the last inequality follows from (4.2). Taking \(k\to\infty\) gives (4.5), and, since this holds for all \(z\in\Delta_{x}^{P}(E)\), the point-to-set principle yields \(\dim_{\rm H}\Delta_{x}^{P}(E)\leq\alpha\). The desired upper bound on \(\dim_{\mathrm{H}}E\) then follows immediately from Theorem 1.2:
\[s\leq\dim_{\mathrm{H}}(E)\leq\dim_{\mathrm{H}}\Delta_{x}^{P}(E)+(d-1)\leq\alpha+(d-1)=s.\]
As such, all the inequalities above are actually equalities, so the theorem is proved.
|
2302.03630 | Data-based optimal estimation of frequency bias: The case of Southwest
Power Pool | In this paper, we introduce a method to optimally estimate time-varying
frequency bias. Current industry practice is to assume that frequency bias is
changing only on annual basis. We suggest that this improved time-dependent
bias estimate can be used to reduce the cost of frequency regulation needed to
meet industry standards requested by the North American Electric Reliability
Corporation (NERC). Optimization of time-varying frequency bias is posed as a
parameter estimation (calibration) problem whose implementation utilizes online
system measurements. It is further shown how this result can be used to
estimate intra-dispatch load deviations. This knowledge is needed to estimate
more accurately regulation reserve needed, and to therefore reduce overall
regulation cost. Methods can be introduced to give incentives to demand
response to participate in frequency regulation. Overall, we show the
importance of incorporating knowledge of physics-based models for data-enabled
parameter estimation of physical systems. | Miroslav Kosanic, Marija Ilic, Daniel Baker, Harvey Scribner, Casey Cathey | 2023-02-07T17:34:25Z | http://arxiv.org/abs/2302.03630v1 | # Data-based optimal estimation of frequency bias: The case of Southwest Power Pool
###### Abstract
In this paper, we introduce a method to optimally estimate time-varying frequency bias \(\beta\). Current industry practice is to assume that \(\beta\) is changing only on annual basis. We suggest that this improved time-dependent bias estimate can be used to reduce the cost of frequency regulation needed to meet industry standards requested by the North American Electric Reliability Corporation (NERC). Optimization of time-varying frequency bias is posed as a parameter estimation (calibration) problem whose implementation utilizes online system measurements. It is further shown how this result can be used to estimate intra-dispatch load deviations. This knowledge is needed to estimate more accurately regulation reserve needed, and to therefore reduce overall regulation cost. Methods can be introduced to give incentives to demand response to participate in frequency regulation. Overall, we show the importance of incorporating knowledge of physics-based models for data-enabled parameter estimation of physical systems.
frequency bias estimation, automatic generation control (AGC), frequency regulation, regulation reserve, demand response.
## I Introduction
An AC interconnected electric power system is a complex system whose main function is to provide uninterrupted electricity service. Therefore, the system must be operated within pre-specified frequency deviations around nominal frequency by maintaining an online balance of net generation and load. This is accomplished by ensuring that adequate resources are available to respond to power imbalances as they happen. Fast-responding power plants are usually used to do this in a feedback manner. Historically, the cost of the fast power plants that regulate frequency in response to system load deviations has been high, driven by the missed opportunity cost of scheduling more expensive, polluting power plants to supply the predictable system load [9].
### _Frequency stabilization and regulation in today's industry_
Frequency stabilization and regulation are inherent functions of today's hierarchical control [8]. Primary control of the generator-turbine-governor (GTG) is a fast local control responding to the deviations of frequency \(\omega(t)\) from the frequency set point \(\omega^{ref}[kT_{s}]\) of the governor. The set points of power plants participating in secondary control are adjusted by the Automatic Generation Control (AGC) function, which in an automated manner re-sets the set points \(\omega^{ref}[kT_{s}]\) of their governors so that the Balancing Authority (BA) regulates frequency within the pre-specified limits. Each area \(I\) comprises several generators \(G_{j}^{I}\) participating in the BA frequency regulation. The AGC function operates in a feedback manner by responding to the total area imbalance, known as the Area Control Error (ACE), on a minute-by-minute basis, and it does not require the system operator to make decisions in near real-time. However, as the generation mix is rapidly changing, the natural response of each area \(I\) varies over time since, as reviewed later in the paper, the sensitivity of frequency deviations with respect to power imbalances depends on the GTG parameters of all power plants participating in AGC. Also, the ACE variations are becoming more dynamic and higher in amplitude as they reflect the net load deviations from the predicted system load, which routinely include the effects of intermittent power generation and BA demand deviations from historic patterns. All these changes have made it much more challenging to estimate the amount of regulation reserve needed, and this affects the overall cost of meeting frequency regulation standards.
### _Pricing regulation reserve in electricity markets_
Furthermore, in areas of the US interconnection in which generation is provided competitively through the evolving electricity markets, the Federal Energy Regulatory Commission (FERC), under Order 2000, has defined ancillary services as a means to balance power during both normal and abnormal conditions, the latter caused by large equipment failures. By their definition, ancillary services maintain reliable operations of the interconnected transmission system. As stated on the FERC website [4], they encompass load following, reactive power-voltage regulation, system protective services, loss compensation service, system control, load dispatch services, and energy imbalance services. The cost of balancing power during normal conditions, well understood in the regulated industry as the AGC cost, is included as part of the ancillary service cost. In the past, AGC costs were not considered to have a major effect on market electricity prices. This is rapidly changing as this cost is increasing, and, therefore, it is also very important to provide better estimates of the regulation reserves typically purchased in day-ahead markets. As a result, market mechanisms for frequency regulation have taken on a new importance and the problem is undergoing its renaissance [5].

Fig. 1: Map of North America independent system operators (ISOs)
### _Growing concerns_
System operators in all BAs in the US, regulated or market-supported, are concerned with the growing influx of intermittent power and its effects on the regulation reserves. Shown in Figure 1 is the SPP BA, whose measurement data we use in this paper. SPP has had huge growth in both wind and utility-scale power generation. Estimating net load deviations in the SPP area is important for planning and utilizing regulation reserves as these types of energy resources are deployed in larger amounts. To reduce the overall cost of electricity service, it is becoming increasingly important to estimate net system load deviations around their predicted patterns much more accurately. It is also becoming more important to provide incentives to demand response as a means of balancing these minute-by-minute fast deviations [1, 2].
Notably, SPP net load volatility has increased since 2016 and is expected to increase further. Penetration of renewables, primarily wind generation, has increased not only these changes in net load but also transmission congestion (Oklahoma has one of the single largest wind farms in North America), as can be seen from the quarterly presentation in 2022 [11]. SPP attempted to address the first issue, and thereby maintain reliability, by offering ramp products to systematically pre-position resources with ramp capability to manage net load variations and uncertainties [3], and to provide transparent price signals that incentivize resource flexibility and future economic investment. Since then, energy prices have increased [11].
### _Paper organization_
This paper is motivated by these overall growing concerns about the ability to regulate frequency at a reasonable cost, as described above. The basic premise is that, by systematic data mining, it is possible to estimate the time-varying natural response of a BA, SPP in particular, and consequently to estimate minute-by-minute load deviations \(\Delta P_{L}[kT_{s}]\) more accurately and assist system operators and markets in purchasing adequate regulation reserve. Section II gives an overview of AGC, and Section III develops the physics-based model structure essential for effective estimation of the natural BA response. The problem is posed as an inverse optimization problem of the deviation of the BA droop characteristic from its physics-based model. Data is used to compute these deviations and to estimate the BA droop characteristic, in particular the BA frequency bias. In Section IV, SPP data is used to demonstrate the time-varying frequency bias estimates around the constant SPP value given to us. Estimates of the potential savings in the amount of regulation reserve needed and the resulting regulation reserve cost are briefly described. Finally, in Section V we discuss and conclude with open questions for future research.
## II Automatic generation control
Automatic Generation Control (AGC) in the US interconnected power system, or Load Frequency Control (LFC) in Europe, is one of the most ingenious examples of large-scale feedback control of complex man-made dynamical systems. It has worked amazingly well, despite its simplicity, and has been the key to regulating interconnection frequency. The main objective of each Control Area (CA), comprising a subsystem within a large-scale interconnected system, has been to implement frequency regulation by re-setting the set points of the governor controllers \(\omega^{ref}[kT_{s}]\) of power plants participating in AGC so that the ACE is compensated by their supplemental generation \(P^{reg}[kT_{s}]\). This is done automatically in a feedback manner on a minute-by-minute basis \([kT_{s}]\), around feed-forward tertiary level generation scheduling in between dispatch times \([kT_{t}]\). Historically, AGC was implemented mainly using fast power plants best suited to produce power quickly and contribute their share of regulating power to cancel the ACE of a CA. More recently, the US Control Areas (CAs) have merged into larger Balancing Authorities (BAs), shown in Figure 1. The NERC BAL-003-1.1 standard requires that AGC for the interconnected system regulate frequency within the pre-specified deviation limits of \(\pm 0.036Hz\) around the nominal \(60Hz\) frequency. This is done by each BA in a distributed manner according to its natural response published by NERC [10]. Electricity markets require that regulation reserves are price-based and, as such, the frequency regulation process has become more complex.
### _Physics-based AGC model_
For completeness, we briefly review the physics-based model used for implementing today's AGC. Tertiary level feed-forward scheduling of generation \(\hat{P}_{G}[kT_{t}]\) is done hourly at times \([kT_{t}]\) to supply the predictable component of system load \(\hat{P}_{L}[kT_{t}]\) and the pre-agreed Net Scheduled Interchange (NSI) with the neighboring BAs, so that
\[\hat{P}_{G}[kT_{t}]=NSI[kT_{t}]+\hat{P}_{L}[kT_{t}] \tag{1}\]
Closer to real-time, power plants participating in frequency regulation adjust their governor set points \(\omega^{ref}[kT_{s}]\) to provide regulation power \(P_{G}^{reg}[kT_{s}]\) that compensates the actual load \(P_{L}[kT_{s}]\) and the Net Actual Interchange \(NAI[kT_{s}]\), so that the sum of the generation scheduled in a feed-forward manner and the regulation power balances the actual NAI and actual load in each \([kT_{s}]\) time interval, typically minute-by-minute.
\[\hat{P}_{G}[kT_{t}]+P_{G}^{reg}[kT_{s}]=NAI[kT_{s}]+P_{L}[kT_{s}] \tag{2}\]
Shown in Fig. 2 is the mismatch between demand and generation, both predicted and actual, for the SPP system. This mismatch is generally caused by:
1. not knowing the actual load \(P_{L}[kT_{s}]\) (SPP's incorrect prediction/measurement of the load, as can be seen in Fig. 2), violating the power balance stated in Eqn. (2)
2. and/or by the deviations of the net tie-line flow interchange \(\Delta F[kT_{s}]\), defined as \[\Delta F[kT_{s}]=NAI[kT_{s}]-NSI[kT_{t}] \tag{3}\]
Frequency deviation from nominal is generally affected by the mismatch between production and generation, as well as by the deviations of the net actual interchange \(NAI[kT_{s}]\) from the net scheduled interchange \(NSI[kT_{t}]\). Combining Eqn. (1) and Eqn. (2) and decomposing the power imbalance into:
1. imbalance created by the deviations of the actual load \(P_{L}[kT_{s}]\) from the predicted load \(\hat{P}_{L}[kT_{t}]\), denoted as \(ACE_{f}[kT_{s}]\)
2. power imbalance \(\Delta F[kT_{s}]\) between \(NSI[kT_{t}]\) and \(NAI[kT_{s}]\), denoted as \(ACE_{interchange}[kT_{s}]\), we obtain: \[ACE[kT_{s}]=ACE_{f}[kT_{s}]+\Delta F[kT_{s}] \tag{4}\] where \(ACE_{f}[kT_{s}]\) is the frequency part of the ACE and \(\Delta F[kT_{s}]\) is the interchange part of the ACE.
To compensate for internal load deviations, nonzero generation by the power plants participating in regulation is provided in a feedback manner by adjusting the set points of their governors \(\omega^{ref}[kT_{s}]\) as
\[P_{G}^{reg}[kT_{s}]=-10b\Delta f[kT_{s}]=\beta\Delta f[kT_{s}] \tag{5}\]
Determining by how much to change the governor set points so that \(P_{G}^{reg}[kT_{s}]\) is produced for frequency regulation purposes requires knowing the \(\beta\) of the BA. In what follows we briefly summarize the physical interpretation of this coefficient, known as the BA frequency bias. The actual amount of \(P_{G}^{reg}[kT_{s}]\) that must be compensated by the BA is determined by the very hard to know \(\Delta P_{L}[kT_{s}]\). The amount of regulation reserve needed to produce this power cannot be known in a feed-forward way, say day-ahead or hour-ahead. However, by measuring frequency deviations \(\Delta f[kT_{s}]\) and using the frequency bias \(\beta\), one can estimate at least its hourly bounds. This is one of the main reasons we seek a method for estimating a time-varying \(\beta[kT_{t}]\), so that the regulation reserve is scheduled ahead of time and used in a feedback manner in between dispatch time intervals \([kT_{t}]\).
BA loads are much more dynamic and harder to predict, which makes accurate estimation of the frequency bias \(\beta\) difficult in systems with a variety of intermittent distributed energy resources (DERs), such as wind and solar power plants, as well as price-responsive demand. The \(ACE_{frequency}\) portion of \(ACE\) is complex to obtain and control under the presently made assumptions. As the AGC input is the ACE, there is a need for an accurate frequency bias \(\beta\), which reflects the CA droop, namely the sensitivity of the frequency deviations to deviations in power imbalances; \(\beta=10b\), with units of \(\beta\) in \(\frac{MW}{Hz}\) and units of \(b\) in \(\frac{MW}{0.1Hz}\).
To compute the \(ACE_{f}\) portion of \(ACE\), the knowledge of \(\beta\) is quite critical. Notably, the need for supplemental control is significantly smaller than it would be if it were not for the natural self-regulation by most of the loads. Load power consumption is greatly dependent on both voltage and frequency deviations, but these dependencies are not typically modeled [1]. In the changing industry with price-responsive demand, it becomes quite important to account for this effect, a major open problem. The intermittent power outputs also require much more dynamic dispatch of units than in the past, and this directly affects the frequency bias of the CA.
## III Optimal Frequency Bias Estimation Method
In this section we first establish the basic structure of the BA model needed to perform data-enabled parameter estimation, and then propose an optimization method which draws on this structure.
### _Basic structure of the BA model_
The basic structure of the BA model is obtained by utilizing the droop characteristics of individual regulating power plants and then aggregating them without taking the transmission grid into account. This is the most common assumption made for modeling bulk power systems; for more detailed models, see [6, 7]. To explain, we first summarize the structure of a single generator droop and then derive an aggregate droop comprising all regulating power plants.
#### Iii-A1 Droop characteristic of a single regulating power plant
We derive governor-turbine-generator (GTG) power plant droop by combining the dynamical equations of a generator and turbine, with the governor dynamical equation [6, 8]:
\[\dot{\theta}_{G} =\omega_{0}\omega_{G} \tag{6}\] \[J\dot{\omega}_{G}+D\omega_{G} =P_{T}+e_{T}a-P_{G}\] (7) \[T_{u}\dot{P}_{T} =-P_{T}+K_{t}a\] (8) \[T_{a}\dot{a} =-ra-\omega_{G}+\omega_{G}^{ref} \tag{9}\]
The mechanical power \(P_{T}\) applied to the turbine-rotor shaft is controlled by the valve position \(a\) according to Eqn. (8).
Fig. 2: SPP Generation and demand for 2017-10-14
The constant \(T_{u}\) is the turbine time constant, while \(K_{t}\) is the control gain. The valve position \(a\) changes in a feedback manner by responding to the deviation of the measured frequency at time \(t\) from the set point of the governor at time \([kT_{s}]\), \(\Delta\omega=(\omega(t)-\omega^{ref}[kT_{s}])\), according to Eqn. (9). The dynamics of the frequency deviation \(\omega_{G}\) from nominal frequency \(\omega_{0}\) is determined by the imbalance between the mechanical power \(P_{T}\) and the electrical power generated \(P_{G}\), and is further damped because of the damping \(D\). Different power plant inertias \(J\) result in different frequency dynamics. The governor set point is changed in response to tertiary level economic dispatch commands each \([kT_{t}]\), and it is re-adjusted on power plants participating in AGC each \([kT_{s}]\). The primary control of the governor is tuned so that the valve dynamics settles in response to fast small fluctuations around \(\omega_{G}^{ref}[kT_{s}]\). The parameters in this GTG dynamical model are derived by linearizing more complex nonlinear power plant dynamics [8]. Important for understanding droops is the definition of the parameter \(r\) in Eqn. (9), which determines how well the valve control is tuned. If tuned properly, the fast continuous-time dynamics settle and, at times \([kT_{s}]\), result in the well-known droop of the GTG as follows [8]
\[\omega_{G}[k]=(1-\sigma D)\omega_{G}^{ref}[k]-\sigma P_{G}[k] \tag{10}\]
where GTG droop constant is defined as
\[\sigma=\frac{\delta\omega_{G}[k]}{\delta P_{G}[k]} \tag{11}\]
Understanding this three-way relationship given in Eqn. (10) is critical for understanding how frequency is controlled in today's bulk power systems by the conventional AGC.
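As a quick sanity check on this relationship, one can set the derivatives in Eqns. (7)-(9) to zero and verify numerically that the steady state has the droop form of Eqn. (10). Under this reading of the model, the droop constant works out to \(\sigma=r/(rD+K_{t}+e_{T})\); the parameter values in the sketch below are purely illustrative assumptions, not values from the paper.

```python
import numpy as np

# Assumed, purely illustrative GTG parameters -- not values from the paper.
D, K_t, e_T, r = 1.0, 95.0, 5.0, 0.05

def steady_state_freq(w_ref: float, P_G: float) -> float:
    """Solve Eqns. (7)-(9) with all derivatives set to zero:
    a = (w_ref - w)/r,  P_T = K_t * a,  D*w = P_T + e_T * a - P_G."""
    return ((K_t + e_T) * w_ref / r - P_G) / (D + (K_t + e_T) / r)

sigma = r / (r * D + K_t + e_T)                  # droop constant, cf. Eq. (11)
w_ref, P_G = 1.0, 10.0
w_direct = steady_state_freq(w_ref, P_G)
w_droop = (1 - sigma * D) * w_ref - sigma * P_G  # droop form of Eq. (10)
assert np.isclose(w_direct, w_droop)
print(sigma, w_direct)
```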
#### Frequency bias and regulation of a single BA
It can be seen from Eqn. (10) that the steady-state frequency deviation of generator \(j\) in area \(I\) is \(\omega_{G,j}^{I}=-\sigma_{G,j}^{I}P_{G,j}^{I}\) when the set point \(\omega_{G,j}^{I,ref}\) is not changed. Since the steady-state frequency \(\omega^{I}\) is the same in the entire area, a single aggregate area droop characteristic can be derived by summing the droop characteristics of all regulating units \(j\in I\); it takes the form
\[\omega^{I}[kTs]=\alpha^{I}[kTs]\omega^{I,ref}[kTs]-\sigma^{I}[kTs]P^{I}[kTs] \tag{12}\]
where \(\alpha^{I}[kT_{s}]=(1-\sigma^{I}D^{I})[kT_{s}]\) and the aggregate natural response is the sum of the natural responses of all units,
\[\beta^{I}[kT_{s}]=\Sigma_{j\in I}\frac{1}{\sigma_{j}} \tag{13}\]
The aggregate reference frequency \(\omega^{I,ref}\) satisfies
\[(1-\sigma^{I}D^{I})\omega^{I,ref}=\Sigma_{j\in I}(1-\sigma_{j}D_{j})\omega_{j} ^{ref} \tag{14}\]
#### AGC in two interconnected BAs
Consider now an interconnected system comprising multiple BAs, as shown in Figure 1. Equation (12) can be written for any area \(S\) as
\[\omega_{S}[kTs]=\alpha[kTs]\omega_{S}^{ref}[kTs]-\sigma[kTs]P_{S}[kTs] \tag{15}\]
However, in a multi-area interconnected system an additional complexity arises because the net power generation \(P^{S}[kT_{s}]\) of any area \(S\) must balance both its own load \(P_{L}^{S}[kT_{s}]\) and the actual net interchange with the neighbouring BAs, \(NAI^{S}[kT_{s}]\). This leads to the \(ACE^{S}[kT_{s}]\), which can be decomposed into two components, one caused by the internal load deviations and the other caused by the deviations of net tie-line flows from schedules. As a result, for the case of two control areas one obtains
\[ACE^{S}[kT_{s}]=ACE^{S}_{f}[kT_{s}]+ACE^{S}_{interchange}[kT_{s}] \tag{16}\]
where \(ACE^{S}_{interchange}[kT_{s}]=\Delta F^{S}[kT_{s}]\), for any area \(S\) of the multi-area interconnected system.
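As a concrete illustration of this decomposition, the following minimal sketch computes the two ACE components from measurements. The sign convention for the frequency part (\(-\beta\,\Delta f\), with \(\beta\) in MW/Hz) is one consistent reading of Eqns. (4)-(5); the paper's \(b\)/\(\beta\) sign and unit conventions vary between sections, so this should be checked against the BA's own convention. All numbers are hypothetical.

```python
def ace_components(f_meas, f_sched, nai, nsi, beta_mw_per_hz):
    """Split the ACE of Eq. (16) into its frequency and interchange parts.
    The frequency part is taken as -beta * delta_f, one consistent reading
    of Eqs. (4)-(5); verify the sign against the BA convention in use."""
    delta_f = f_meas - f_sched           # frequency deviation [Hz]
    ace_f = -beta_mw_per_hz * delta_f    # frequency part of ACE [MW]
    delta_F = nai - nsi                  # interchange part, Eq. (3) [MW]
    return ace_f, delta_F

# Example: 0.01 Hz under-frequency and 20 MW above-schedule export,
# with beta = 4090 MW/Hz (i.e., 409 MW/0.1 Hz as quoted in Sec. IV).
print(ace_components(f_meas=59.99, f_sched=60.0, nai=120.0, nsi=100.0,
                     beta_mw_per_hz=4090.0))
```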
### _Area frequency bias estimation method_
Eqn. (15) represents an aggregate model of a single BA with a very distinct structure. We next formulate an inverse optimization problem. Using the provided SPP data, the natural response of the area, \(\beta^{I}[kT_{t}]\), is estimated by viewing it as a slowly varying parameter at times \(kT_{t}\), given the much faster frequency state measurements \(\omega^{I}[kT_{s}]\) and area generation \(P_{G}^{I}[kT_{s}]\). Frequency is related to angular frequency as \(\omega=2\pi f\). Eqn. (15) can thus be posed as a parameter calibration problem solved through multiple linear regression. The coefficients of interest, \(\alpha_{S}[kT_{s}]\) and \(\sigma_{S}[kT_{s}]\), are obtained through minimization of the cost function
\[J(\alpha_{S}[k],\sigma_{S}[k])=\sum_{k}{(f_{S}[k]-\alpha_{S}[k]f_{S}^{ref}[k]+ \sigma_{S}[k]P_{S}[k])^{2}} \tag{17}\]
This formulation of the regression objective function, where optimization is performed with respect to the L2 norm, is called ordinary least squares (OLS) multiple regression. OLS has a closed-form solution
\[\begin{bmatrix}\alpha\\ \sigma\end{bmatrix}=\left(\sum_{k}\begin{bmatrix}f_{S}^{ref}[k]\\ -P_{S}[k]\end{bmatrix}\begin{bmatrix}f_{S}^{ref}[k]\\ -P_{S}[k]\end{bmatrix}^{T}\right)^{-1}\sum_{k}\begin{bmatrix}f_{S}^{ref}[k]\\ -P_{S}[k]\end{bmatrix}f_{S}[k] \tag{18}\]
and by solving this closed form we obtain the coefficients \(\sigma\) and \(\alpha\).

The decision variables \(\alpha\) and \(\sigma\) change over time, so when minimizing over the whole day (24 h, i.e., 1440 minutes) as in Eq. (17), we use Eq. (18) and observe the minute-by-minute changes of the variables of interest in the next section.
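A minimal sketch of this calibration on synthetic minute-resolution data is given below. The variable names and all numerical values are illustrative assumptions, not SPP data; the recovery of \(\beta=1/\sigma\) follows Eq. (13).

```python
import numpy as np

def estimate_droop(f, f_ref, P):
    """OLS fit of f[k] = alpha * f_ref[k] - sigma * P[k] (Eq. (15)) over a
    window of samples, i.e. the normal equations of Eq. (18). Returns
    (alpha, sigma); the frequency bias follows as beta = 1/sigma, cf. Eq. (13)."""
    X = np.column_stack([f_ref, -np.asarray(P)])  # regressors [f_ref[k], -P[k]]
    coef, *_ = np.linalg.lstsq(X, f, rcond=None)
    return coef[0], coef[1]

# Synthetic minute-by-minute example (hypothetical numbers, not SPP data):
rng = np.random.default_rng(0)
P = 400.0 + 50.0 * rng.standard_normal(1440)      # regulating generation [MW]
f_ref = np.full(1440, 60.0)                       # frequency set point [Hz]
alpha_true, beta_true = 0.999, 4090.0             # beta = 409 MW/0.1Hz = 4090 MW/Hz
f = alpha_true * f_ref - P / beta_true + 1e-4 * rng.standard_normal(1440)
alpha_hat, sigma_hat = estimate_droop(f, f_ref, P)
print(alpha_hat, 1.0 / sigma_hat)                 # recovered alpha and beta [MW/Hz]
```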
## IV Numerical results using SPP system
In the data provided to us, we were given the value of the frequency bias used for the adjustment of the frequency part of the ACE. This value is constant, computed annually and provided by NERC. Thus, this crude and constant value (for the year 2017, it was set to \(\beta=409MW/0.1Hz\)) cannot be used for estimation of the load \(P_{L}[kT_{s}]\) at the timescale of secondary frequency control. The results in this section show how the frequency bias value changes through the day. We show the incentive for adopting a time-dependent frequency bias and its use for estimation of load deviations. Lastly, we show the influence of the interchange portion of the ACE on the scheduling of reserves. Cooperative control and the right information exchange between BAs would thus, as we argue, help in reducing the costs of regulation reserves and give us a better understanding of how much storage will be needed.
### _Data provided by the system operator_
SPP provided us with the data summarized in Table I. This data is sampled on a minute basis.
### _Optimal frequency bias daily values_
The droop (Eq. 10) has a structure that lends itself naturally to calibration of its unknown parameters through different data methods. For our purposes, we used OLS multiple linear regression. The optimal value of the frequency bias indicates that this value changes through the day, as can be seen in Figs. 3, 4, and 5. In contrast to the constant value of \(\beta\) we were provided by SPP, we observe that during the night, between 0-300 minutes (midnight-5am), the frequency bias has a dip. From 5am onward there is a rise, which settles around noon and lasts until 8pm, when the trend of \(\beta\) goes down. Across Figs. 3, 4, and 5 we see slight differences (e.g., in the morning rising slope), likely related to Fig. 3 corresponding to a Saturday and the latter two to a Tuesday and a Thursday.
## V Impact on secondary regulation reserves annual cost
Using the estimate of the frequency bias, learned from past generation and frequency curves, one can obtain an estimate of the load deviations. To our knowledge, this is the only way to obtain such an estimate.
\[\Delta P_{L}[kTs]=-10\beta[kTs]\Delta f[kTs] \tag{19}\]
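As a hedged illustration of Eq. (19), taking \(\beta\) in MW/0.1 Hz as quoted earlier in this section (the paper's unit and sign conventions for \(b\) and \(\beta\) differ between sections, so this is one consistent reading; the function name is ours):

```python
def load_deviation_mw(delta_f_hz: float, beta_mw_per_0p1hz: float) -> float:
    """Eq. (19): Delta P_L = -10 * beta * Delta f, taking beta in MW/0.1 Hz
    as quoted in this section (the paper's b/beta conventions vary)."""
    return -10.0 * beta_mw_per_0p1hz * delta_f_hz

# A 0.02 Hz frequency sag with the 2017 value beta = 409 MW/0.1 Hz
# corresponds to roughly 82 MW of load deviation.
print(load_deviation_mw(-0.02, 409.0))
```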
in regulation up reserves. Tighter bounds, possible through information exchange between BAs and decentralized control, could potentially yield annual savings of $\(259.2\cdot 10^{6}\) in the previously mentioned months.
## VI Conclusion
Dynamic changes of the frequency bias happen throughout the day. Using the three-way droop relationship, we formulate an optimization problem which provides us with the optimal frequency bias value, along with the damping coefficient of the system. Both of these quantities change with time. The current practice, in which the droop is constant, should be revised, as there are clear economic incentives for change and for the participation of demand response. Understanding the influence of wind penetration on regulation reserve prices during the year incentivizes cooperation between BAs, so that tighter bounds on regulation reserve schedules can be imposed. In other words, depending on what one monitors and estimates, how information exchange is performed, and whether BAs help each other, one can integrate demand response and in the process gain value.
Future research would go in the direction of better understanding cooperation and information exchange between BAs, and of how taking electrical distances inside BAs into account influences the estimation of the load.
|
2303.03519 | Balancing Cooperativeness and Adaptiveness in the (Noisy) Iterated
Prisoner's Dilemma | Ever since Axelrod's seminal work, tournaments served as the main benchmark
for evaluating strategies in the Iterated Prisoner's Dilemma (IPD). In this
work, we first introduce a strategy for the IPD which outperforms previous
tournament champions when evaluated against the 239 strategies in the Axelrod
library, at noise levels in the IPD ranging from 0% to 10%. The basic idea
behind our strategy is to start playing a version of tit-for-tat which forgives
unprovoked defections if their rate is not significantly above the noise level,
while building a (memory-1) model of the opponent; then switch to a strategy
which is optimally adapted to the model of the opponent. We then argue that the
above strategy (like other prominent strategies) lacks a couple of desirable
properties which are not well tested for by tournaments, but which will be
relevant in other contexts: we want our strategy to be self-cooperating, i.e.,
cooperate with a clone with high probability, even at high noise levels; and we
want it to be cooperation-inducing, i.e., optimal play against it should entail
cooperating with high probability. We show that we can guarantee these
properties, at a modest cost in tournament performance, by reverting from the
strategy adapted to the opponent to the forgiving tit-for-tat strategy under
suitable conditions | Adrian Hutter | 2023-03-06T22:04:51Z | http://arxiv.org/abs/2303.03519v1 | # Balancing Cooperativeness and Adaptiveness in the (Noisy) Iterated Prisoner's Dilemma
###### Abstract
Ever since Axelrod's seminal work, tournaments served as the main benchmark for evaluating strategies in the Iterated Prisoner's Dilemma (IPD). In this work, we first introduce a strategy for the IPD which outperforms previous tournament champions when evaluated against the 239 strategies in the Axelrod library, at noise levels in the IPD ranging from 0% to 10%. The basic idea behind our strategy is to start playing a version of tit-for-tat which forgives unprovoked defections if their rate is not significantly above the noise level, while building a (memory-1) model of the opponent; then switch to a strategy which is optimally adapted to the model of the opponent. We then argue that the above strategy (like other prominent strategies) lacks a couple of desirable properties which are not well tested for by tournaments, but which will be relevant in other contexts: we want our strategy to be _self-cooperating_, i.e., cooperate with a clone with high probability, even at high noise levels; and we want it to be _cooperation-inducing_, i.e., optimal play against it should entail cooperating with high probability. We show that we can guarantee these properties, at a modest cost in tournament performance, by reverting from the strategy adapted to the opponent to the forgiving tit-for-tat strategy under suitable conditions.
## 1 Introduction
Direct reciprocity is one of the main mechanisms explaining the emergence of cooperation between self-interested individuals (Nowak, 2006), and the Iterated Prisoner's Dilemma (IPD) is the primary model used for the study of direct reciprocity. Famously, tit-for-tat (TFT) won both of Robert Axelrod's seminal IPD tournaments (Axelrod, 1984). In any realistic situation resembling an IPD, intended actions can be executed imperfectly or otherwise have unintended consequences. The simplest way of modelling this in the IPD is to assume that each action has the opposite of the intended effect with some probability \(p_{\text{noise}}\)
It was soon recognized that the noisy IPD introduces fundamentally new challenges. In particular, for TFT a single noise event ends mutual cooperation (at least till the next noise event occurs), leading to alternating rounds of C/D, D/C instead (Axelrod and Hamilton, 1981). This motivated the development of alternatives to TFT which are able to cooperate more robustly in the presence of noise, including tit-for-two-tats (Axelrod, 1984), Generous TFT (Molander, 1985), Pavlov (_aka._ Win-Stay Lose-Shift) (Nowak and Sigmund, 1993), and Contrite TFT (Wu and Axelrod, 1995). Wu and Axelrod (1995) evaluated the latter three strategies against all 63 strategies submitted to Axelrod's second tournament (Axelrod, 1984) (using \(p_{\rm noise}=1\%\)), finding Generous and Contrite TFT to perform strongly, and Pavlov to perform poorly.
In celebration of the 20th anniversary of Axelrod's seminal work, Kendall et al. (2007) organized two IPD tournaments (both noise-free and noisy, with \(p_{\rm noise}=10\%\)) in 2004 and 2005. We briefly describe two of the best-performing strategies, both of which are designed specifically to account for noise. Omega TFT (Slany et al., 2007) modifies TFT in two ways: if it detects a deadlock loop (alternating rounds of C/D and D/C, which are characteristic for TFT) it attempts to break the loop by playing C twice; and it reverts to playing D for the remainder of the game if a measure of randomness of the opponent's play exceeds a threshold. DBS (Au and Nau, 2007) models the opponent as a memory-1 strategy.1 It attempts to describe the opponent using deterministic rules, and ignores occasional violations of these rules (which might be due to noise). It then optimizes its move against the model of the opponent using tree search to depth 5.
Footnote 1: I.e., the opponent is described by the probabilities of playing C after the 4 possible states \(\{\rm CC,CD,DC,DD\}\).
More recently, Harper et al. (2017) trained various model types (lookup tables, artificial neural networks, finite state machines, hidden Markov models), in both noisy and noise-free environments, against a large zoo of strategies from previous literature. They then ran a noisy (\(p_{\rm noise}=5\%\)) and a noise-free tournament of 176 strategies, including the trained strategies and all strategies mentioned so far. Remarkably, DBS was the best-performing human-designed strategy in both tournaments, ranking 1st in the noisy tournament and 12th in the noise-free tournament. Omega TFT ranked not far behind DBS (8th in the noisy tournament and 15th in the noisy one). The first ranked strategy in the noise-free tournament was EvolvedLookerUp2\(2\)2, a deterministic lookup table, which bases its decision on the first 2 actions of the opponent and the past 2 actions of itself and the opponent. All strategies participating in these tournaments (and more) are available in the Axelrod library (Knight et al., 2016), which we will use for evaluation purposes in the present work.
The goal of the present work is two-fold. Firstly, we want to create a strategy which is both able to robustly cooperate (like the forgiving variants of TFT) and highly adaptive (by building a simple model of the opponent and responding optimally to that, like DBS). These goals cannot be pursued at the same time, so we need rules for switching between the cooperative and the adaptive
state. Sec. 2 introduces such a strategy and evaluates it against all strategies in the Axelrod library. Secondly, we discuss properties which are desirable for a strategy to have, beyond doing well in tournaments. In Sec. 3, we argue that a strategy should ideally be _self-cooperating_ (cooperate with a clone) and _cooperation-inducing_ (incentivize the opponent to cooperate). We then show in Sec. 4 how to make the previously introduced strategy self-cooperating and cooperation-inducing by introducing additional rules for switching between the cooperative and the adaptive state.
## 2 Cooperate, then adapt to the opponent
The strategy introduced in this section, which we call _CooperateISO_, comprises two sub-strategies: a highly cooperative one, and a purely adaptive one, together with a rule for switching from the former to the latter. The cooperative strategy, _Longterm TFT_ focuses on allowing robust cooperation in the presence of noise without being exploitable. The adaptive strategy, which we call _ISO_ (Infinite Sum Optimizer), builds a memory-1 model of the opponent and responds optimally to that. The basic idea is illustrated in Fig. 1, and motivated and explained in more detail in the following subsections.
In the following, we use the standard notation for referring to the possible payoffs in the PD, \(T>R>P>S\), and use the conventional values \(T=5\), \(R=3\), \(P=1\), and \(S=0\).
Figure 1: The strategy introduced in this work comprises two sub-strategies, Longterm TFT (focused on allowing robust cooperation in the presence of noise) and ISO (focused on playing optimally against simple opponents), together with rules for switching between these. Sec. 2 introduces the two sub-strategies and the rule for switching from Longterm TFT to ISO; Sec. 4 introduces two rules for switching back, which are motivated in Sec. 3.
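A runnable skeleton of this two-state architecture might look as follows; the sub-strategy objects and switch predicates are stand-ins (assumptions) for the concrete rules developed in Secs. 2 and 4.

```python
class CooperateISO:
    """Skeleton of the two-state architecture sketched in Fig. 1. The
    sub-strategies and switch predicates here are stand-ins; the real
    rules are those of Secs. 2 and 4."""

    def __init__(self, longterm_tft, iso, switch_to_iso, revert_to_tft):
        self.longterm_tft, self.iso = longterm_tft, iso
        self.switch_to_iso = switch_to_iso    # predicate: history -> bool
        self.revert_to_tft = revert_to_tft    # predicate: history -> bool
        self.in_iso = False

    def next_action(self, history):
        if not self.in_iso and self.switch_to_iso(history):
            self.in_iso = True
        elif self.in_iso and self.revert_to_tft(history):    # Sec. 4 rules
            self.in_iso = False
        return (self.iso if self.in_iso else self.longterm_tft)(history)

# Toy usage with stand-in callables (history = list of (our, their) actions):
bot = CooperateISO(
    longterm_tft=lambda h: "C",
    iso=lambda h: "C" if not h or h[-1][1] == "C" else "D",
    switch_to_iso=lambda h: len(h) >= 10,
    revert_to_tft=lambda h: False,
)
print(bot.next_action([("C", "C")] * 12))   # -> "C" (now in the ISO state)
```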
### Longterm TFT: a maximally forgiving version of TFT
Multiple existing strategies feature mechanisms for dealing with the effects of noise (e.g. Molander (1985); Nowak and Sigmund (1993); Wu and Axelrod (1995); Slany et al. (2007); Au and Nau (2007)). In this subsection, we introduce a strategy which, like Generous TFT (Molander, 1985), is designed to be maximally forgiving (of defections which can be due to noise) while still incentivizing the opponent to cooperate. However, while Generous TFT is a memory-1 strategy, our strategy takes (aggregate statistics of) the entire history of play into account; we thus call this strategy _Longterm TFT_.
Longterm TFT starts by playing standard TFT. It switches to always cooperating if the history of play is compatible (given the noise model) with the opponent always rewarding cooperation with cooperation. More precisely, let \(N_{C}\) be the number of times our agent has cooperated so far, not taking our agent's most recent action into account (since the opponent has not yet reacted to that). (Note that we care about the _actual_ action of our agent here, taking the effects of noise into account, not the _intended_ action.) Let \(N_{C\to D}\) be the number of times the opponent defected after our agent cooperated. Define the statistic
\[z=\frac{N_{C\to D}-p_{\rm noise}N_{C}}{\max\{1,\sqrt{p_{\rm noise}(1-p_{\rm noise })N_{C}}\}}. \tag{1}\]
Under the null hypothesis that the opponent always rewards cooperation with cooperation, \(N_{C\to D}\) is binomially distributed with mean \(p_{\rm noise}N_{C}\) and variance \(p_{\rm noise}(1-p_{\rm noise})N_{C}\). As a consequence of the Central Limit Theorem, \(z\) will thus approximately follow a standard normal distribution under the null hypothesis, if \(p_{\rm noise}(1-p_{\rm noise})N_{C}\geq 1\) and \(N_{C}\) is "large enough". The statistic \(z\) thus quantifies how confidently we can reject the null hypothesis that the opponent always rewards cooperation with cooperation.
Longterm TFT always cooperates if \(N_{C}\geq 5\) and \(z<2\).2 For \(p_{\rm noise}N_{C}<1\), i.e., if we expect less than one unprovoked defection under the null hypothesis, the denominator in Eq. (1) becomes 1, and so Longterm TFT forgives at most 2 unprovoked defections.
Footnote 2: No attempt was made to optimize this precise threshold value, or the similar thresholds in the rest of this work.
During evaluation, we will assume that \(p_{\rm noise}\) is known at the beginning of the IPD. If it is not known, it has to be estimated from agreement between intended and actual actions. Since Longterm TFT always plays identically to TFT while \(N_{C}<5\), it does not need an estimate of \(p_{\rm noise}\) during the first few rounds of the IPD.
By forgiving defections which are not significantly more frequent than one would expect based on the noise model, Longterm TFT prevents unnecessary chains of retaliation when playing against a cooperative but provocable strategy like TFT and its variations. It is thus able to robustly cooperate with such strategies (including itself) even in the presence of strong noise.
By being so forgiving, Longterm TFT invites occasional deliberate defections from the opponent. However, the number of defections which are forgiven only grows like \(O(\sqrt{N_{C}})\), and so the _rate_ of defections which are forgiven goes to zero like \(O(1/\sqrt{N_{C}})\). The "free" defections which Longterm TFT forgives are thus irrelevant in the steady state. In the steady state, Longterm TFT always cooperates against all strategies which are at least as cooperative as TFT: they may retaliate against Longterm TFT's (accidental) defections, but have to reward cooperation with cooperation.
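To make the rule concrete, the following is a minimal Python sketch of Longterm TFT's forgiveness test; the function name and bookkeeping conventions are ours, not part of the original specification.

```python
import numpy as np

def longterm_tft_cooperates(N_C, N_CtoD, p_noise):
    """Return True if Longterm TFT overrides TFT and cooperates unconditionally.

    N_C:     number of times our agent has (actually) cooperated so far,
             not counting the most recent action
    N_CtoD:  number of times the opponent defected right after we cooperated
    """
    if N_C < 5:
        return False  # still in the initial plain-TFT phase
    # z-statistic of Eq. (1): unprovoked defections in excess of the noise model
    denom = max(1.0, np.sqrt(p_noise * (1 - p_noise) * N_C))
    z = (N_CtoD - p_noise * N_C) / denom
    return z < 2  # null hypothesis "opponent rewards cooperation" not rejected
```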
### ISO: playing optimally against memory-1 opponents
Many of the most prominent strategies in the IPD are memory-1, i.e., their probability of cooperating on the next move only depends on the actions played by both players in the previous round. Such strategies are thus described by four probabilities.3 DBS (Au and Nau, 2007) builds a memory-1 model of the opponent and then optimizes its own actions against that model. The strategy described in this subsection, _ISO_ (Infinite Sum Optimizer), takes inspiration from DBS' remarkably strong performance in previous tournaments. ISO is both a simplification and a refinement of DBS.
Footnote 3: Plus a fifth probability describing the probability of cooperating in the very first round of the IPD, which is irrelevant for our purposes.
DBS follows a complex algorithm for building its opponent model, trying to establish deterministic rules (such as "the opponent always cooperates after mutual cooperation") while taking possible noise events into account. By contrast, ISO's opponent model is simply given by the (discounted) average rate of cooperation of the opponent in the four different states. We use a discount factor \(\gamma_{\rm past}=0.99\) in order to react more quickly to changes in the opponent's strategy than if a simple average were taken. In order to make the averages well-defined from the beginning, we assume that we have seen the opponent play the action dictated by TFT once for each of the four states. We also clamp all empirical rates of cooperation to the interval \([p_{\rm noise},1-p_{\rm noise}]\), since values outside of that interval are not feasible under the noise model. Let \(\vec{p}_{\rm opp}^{(n)}\) denote our model of the opponent obtained this way, i.e., the 4-vector of discounted average rates of cooperation in the four different states \((CC,CD,DC,DD)\). We use the superscript \((n)\) to denote that these rates of cooperation already include the effects of noise, and so might be different from the _intended_ rates of cooperation.
While DBS uses tree search to depth 5 in order to optimize its action against the opponent model, ISO optimizes the exact discounted sum of future payoffs. Let \(\vec{p}_{\rm self}\) denote the 4-vector of probabilities of cooperation of our agent, which we hope to optimize against \(\vec{p}_{\rm opp}^{(n)}\). In order to calculate how well a candidate \(\vec{p}_{\rm self}\) performs against a given \(\vec{p}_{\rm opp}^{(n)}\), we first calculate \(\vec{p}_{\rm self}^{(n)}\) by taking the effects of noise into account, i.e., applying \(p\mapsto(1-p_{\rm noise})p+p_{\rm noise}(1-p)\) to each entry in \(\vec{p}_{\rm self}\). Iterated play between two memory-1 strategies (ISO and the model of the opponent) induces a Markov process of order 1. We can then calculate the
4x4 transition matrix of this Markov process,
\[T=\left(\vec{p}_{\rm self}^{(n)}\odot\vec{p}_{\rm opp}^{(n)},\vec{p}_{\rm self}^{ (n)}\odot(1-\vec{p}_{\rm opp}^{(n)}),(1-\vec{p}_{\rm self}^{(n)})\odot\vec{p}_{ \rm opp}^{(n)},(1-\vec{p}_{\rm self}^{(n)})\odot(1-\vec{p}_{\rm opp}^{(n)})\right) \tag{2}\]
where \(\odot\) denotes point-wise multiplication. Let \(\vec{u}=(R,S,T,P)\) denote the vector of possible payoffs, \(\vec{s}_{0}\) a one-hot vector indicating the current state, and \(\gamma_{\rm future}<1\) a discount factor. The average expected discounted payoff per round for \(\vec{p}_{\rm self}\) is then given by
\[U(\vec{p}_{\rm self}) =\vec{s}_{0}^{T}\left(\sum_{k=1}^{\infty}\gamma_{\rm future}^{k}T ^{k}\right)\vec{u}/\left(\sum_{k=1}^{\infty}\gamma_{\rm future}^{k}\right)\] \[=(1-\gamma_{\rm future})\vec{s}_{0}^{T}T\left(1-\gamma_{\rm future }T\right)^{-1}\vec{u}. \tag{3}\]
If the IPD has a fixed ending probability \(p_{\rm end}\) per step, we can choose \(\gamma_{\rm future}=1-p_{\rm end}\). However, in the rest of this work, we will consider IPDs of fixed length, while assuming that the length was not known to the participants beforehand. We will use \(\gamma_{\rm future}=0.99\) in the rest of this work.
In order to find \(\vec{p}_{\rm self}\) which optimizes \(U(\vec{p}_{\rm self})\), we use the Adam optimizer (Kingma and Ba, 2014) with a learning rate of 0.1 for 50 steps, using \((\frac{1}{2},\frac{1}{2},\frac{1}{2},\frac{1}{2})\) as the starting point of the optimization.
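To make the computation concrete, here is a minimal numpy/scipy sketch of the discounted payoff in Eq. (3) and the resulting optimization; all names are ours, and for brevity it uses scipy's bounded L-BFGS-B optimizer in place of Adam. The argument `p_opp_noisy` is assumed to be the clamped, noise-inclusive 4-vector \(\vec{p}_{\rm opp}^{(n)}\) described above, and `s0` the one-hot current-state indicator.

```python
import numpy as np
from scipy.optimize import minimize

GAMMA = 0.99                            # gamma_future
U_VEC = np.array([3.0, 0.0, 5.0, 1.0])  # payoffs (R, S, T, P) for states (CC, CD, DC, DD)

def add_noise(p, p_noise):
    """Map intended cooperation probabilities to actual ones under the noise model."""
    return (1 - p_noise) * p + p_noise * (1 - p)

def expected_payoff(p_self, p_opp_noisy, s0, p_noise):
    """Average expected discounted payoff per round, Eq. (3)."""
    ps = add_noise(np.asarray(p_self), p_noise)
    po = np.asarray(p_opp_noisy)
    # Transition matrix of Eq. (2): row = current state, column = next state.
    T = np.column_stack([ps * po, ps * (1 - po), (1 - ps) * po, (1 - ps) * (1 - po)])
    return (1 - GAMMA) * s0 @ T @ np.linalg.inv(np.eye(4) - GAMMA * T) @ U_VEC

def iso_response(p_opp_noisy, s0, p_noise):
    """Optimize our memory-1 strategy against the opponent model.
    The paper uses Adam; L-BFGS-B is used here for brevity."""
    res = minimize(lambda p: -expected_payoff(p, p_opp_noisy, s0, p_noise),
                   x0=np.full(4, 0.5), bounds=[(0.0, 1.0)] * 4, method="L-BFGS-B")
    return res.x
```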
A conceptually similar strategy, \(\mathrm{IP}_{0}\), was introduced by Lee et al. (2015). \(\mathrm{IP}_{0}\) optimizes payoffs in the stationary state against the memory-1 model of the opponent, which is equivalent to Eq. (3) in the limit \(\gamma_{\rm future}\to 1^{-}\).
### CooperateISO: start cooperatively, then adapt
Finally, _CooperateISO_ combines the two sub-strategies introduced so far. It starts by playing Longterm TFT, and then switches to playing ISO if it has collected sufficient data about the opponent's behavior (which gets expressed in the memory-1 model) and responding optimally to the opponent model promises higher payoffs than Longterm TFT has empirically achieved.
Note the trade-off involved in deciding when to start playing ISO. On the one hand, playing Longterm TFT too long can be wasteful if a clearly better response to the opponent exists. This includes opponents who do not retaliate against defections, but also highly defective strategies, against which Longterm TFT sub-optimally rewards each noise-induced cooperation. On the other hand, switching to ISO is risky because it can destroy a cooperative relationship without achieving anything better if our model of the opponent is inaccurate. This can happen because we have not seen the opponent react in all 4 possible states sufficiently many times, because the opponent's empirically observed behavior is actually untypical for them because of noise events, or because the opponent is simply not well described by a memory-1 model.
Note that in the presence of significant noise, maintaining cooperation with unforgiving strategies like Grim Trigger (Banks and Sundaram, 1990) is hopeless
irrespective of our behavior. This suggests that the threshold for switching to ISO should be lower in the presence of high noise. However, for simplicity we will use the same simple criterion for switching to ISO irrespective of the noise level.
Formally, let \(N_{c}\) be the number of rounds played so far (while playing Longterm TFT), \(U_{c}\) the average payoff per round achieved so far, \(\sigma_{c}\) the corresponding standard deviation, and \(U_{a}\) the expected average discounted payoff per round (from Eq. (3)) when playing ISO. We switch from playing Longterm TFT to playing ISO if all of the following conditions are met: \(N_{c}\geq 10\) (require a minimum of data about the opponent), \(U_{a}-U_{c}>2\sigma_{c}/\sqrt{N_{c}}\) (the expected gain from adapting is significant relative to the noise in the historical payoffs), and \(U_{a}-U_{c}>0.05(R-P)\) (the expected gain from adapting is non-negligible compared to possible payoff differences).
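As a sketch of this decision rule (names ours; `payoffs` is the per-round payoff history while playing Longterm TFT, and `U_a` is the estimate from Eq. (3)):

```python
import numpy as np

def should_switch_to_iso(payoffs, U_a, R=3.0, P=1.0):
    """The three switching conditions from this subsection."""
    N_c = len(payoffs)
    if N_c < 10:                      # require a minimum of data about the opponent
        return False
    U_c, sigma_c = np.mean(payoffs), np.std(payoffs)
    gain = U_a - U_c
    return gain > 2 * sigma_c / np.sqrt(N_c) and gain > 0.05 * (R - P)
```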
### Evaluation
The Axelrod library (Knight et al., 2016) contains (as of version 4.12.0) 239 strategies for the IPD from previous literature, including all strategies mentioned so far and a zoo of different model types trained through reinforcement learning (Harper et al., 2017). We use these strategies in order to evaluate CooperateISO and its two sub-strategies, as well as compare their performance with strategies which won previous tournaments or were otherwise prominently discussed in previous literature. See Fig. 2 for this evaluation.
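For readers who wish to reproduce a comparable evaluation, a minimal sketch using the Axelrod library follows; since CooperateISO is not part of the library, Omega TFT stands in as the reference player here, and the 400-step, 5%-noise setup mirrors Fig. 2.

```python
import axelrod as axl

reference = axl.OmegaTFT()  # stand-in: CooperateISO itself is not in the library
scores = []
for strategy_class in axl.strategies:
    # One noisy 400-step IPD against each library strategy
    match = axl.Match((reference, strategy_class()), turns=400, noise=0.05)
    match.play()
    scores.append(match.final_score_per_turn()[0])

print(f"mean payoff per step across the library: {sum(scores) / len(scores):.3f}")
```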
Among the strategies _not_ introduced in the present work, we find EvolvedLookerUp2_2_2 to be the strongest strategy in the noise-free case, and DBS to be the strongest strategy in the presence of noise (with \(p_{\text{noise}}\) ranging from 1% to 10%). This matches the findings of Harper et al. (2017) (who used an older and smaller version of the Axelrod library).
Longterm TFT by itself is not competitive with the best-performing strategies, but outperforms the comparable strategies Generous TFT and Contrite TFT, in particular in the presence of strong noise.
CooperateISO outperforms all previous strategies at all noise levels. ISO by itself is clearly inferior to CooperateISO, in particular for zero or low noise rates, where CooperateISO is much better able to maintain cooperation. However, ISO outperforms its most similar strategy, DBS, despite doing simpler opponent modelling. ISO's performance also approaches that of CooperateISO for high noise levels, where even Longterm TFT has difficulties maintaining cooperation, and so the difference between ISO and CooperateISO becomes smaller.
Figure 2: Evaluation of the strategies introduced in Sec. 2 (CooperateISO and its sub-strategies Longterm TFT and ISO) against the 239 strategies in the Axelrod library, evaluated at noise levels of 0%, 1%, 5%, and 10%. We use the conventional payoff values \(T=5\), \(R=3\), \(P=1\), and \(S=0\). Each IPD lasts 400 steps, and each evaluated strategy plays 5 IPDs against each opponent. Shaded regions show one standard error calculated as \(\sigma/\sqrt{5\cdot 400}\), where \(\sigma\) is the sample standard deviation of the average payoff per step.

## 3 Desiderata for IPD strategies: beyond tournaments

Tournaments provide valuable lessons about a strategy's performance against a wide range of opponents and have served as the primary benchmark for evaluating strategies in the IPD since Axelrod's work in the 1980s (Axelrod, 1984). However, to the extent that we hope to draw wider-reaching lessons from studying strategies in the IPD, tournament performance is an imperfect measure of a strategy's benefits. Firstly, a strategy's performance in a tournament always depends on the pool of strategies it competes against. Secondly, it ignores aspects of the strategy that do not affect tournament performance but will become relevant in other contexts.4 In particular, performance in a single tournament does not take into account the effects of other parties analyzing our strategy and adjusting their strategy to ours.
Footnote 4: As a third reason, we note that the ultimate goal in a tournament – achieving a good rank – is not a perfect proxy for the goal in the IPD – achieving high payoff. For instance, the former incentivizes the submission of groups of strategies, in which all but one strategy seek to push the lead strategy to a high rank (by always cooperating with it and always defecting against all other participants). Such group strategies indeed featured prominently in the 2004/2005 IPD tournaments (Kendall et al., 2007).
These considerations lead us to the following three (informal) desiderata, which we motivate in more detail below. We stress that we are considering scenarios in which we exclusively care about maximizing our strategy's payoff. In particular, we don't care whether we achieve higher or lower payoffs than the current opponent.5
Footnote 5: Achieving higher payoffs than the opponent can be crucial in population games, e.g. when seeking to resist an invading strategy (Lee et al., 2015).
1. The strategy should be _self-cooperating_, i.e., achieve high rates of cooperation when playing against a clone, including in the presence of significant noise. In particular, when playing against a clone our strategy should achieve an expected average payoff per step of \(R-O(p_{\mathrm{noise}})\) in the steady state. So a single noise event can only lead to \(O(1)\) defections in self-play, and must not lead to a longer-lasting breakdown of mutual cooperation.
2. The strategy should be _cooperation-inducing_. That is, optimal (payoff-maximizing) play against our strategy should lead to an expected average payoff per step of \(R-O(p_{\mathrm{noise}})\) in the steady state for our strategy.
3. The strategy should be _adaptive_ (w.r.t. some set \(\Omega\) of opponent strategies). That is, our strategy should (be able to adapt to and thus) achieve close-to-optimal expected payoffs against all strategies from some set \(\Omega\). We are interested in sets \(\Omega\) which consist of all "sufficiently simple" strategies.
Let us briefly elaborate on why we chose these particular desiderata.
* Not being self-cooperating limits our strategy's payoff in any context in which it has a high chance of facing a clone. This can happen, for example, if a principal deploys multiple instances of the same strategy, or if other players imitate our strategy.
* If a strategy is not cooperation-inducing, it will do poorly as soon as an opponent is able to create a decent model of it and react to that. TFT is the simplest possible cooperation-inducing strategy, and the historically
strong performance of TFT and its generous variants lends pragmatic support to this desideratum.
* Note that there are strategies which satisfy a strictly stronger criterion than being cooperation-inducing: there are "extortionate" zero-determinant (ZD) strategies (Press and Dyson, 2012) which have the property that optimal (payoff-maximizing) play against them gives a payoff per step to them which is higher than \(R\). However, by construction, in the IPD achieving a payoff higher than \(R\) per step implies that the opponent's reward per step is lower than \(R\), so such extortionate strategies cannot be self-cooperating.
* If a strategy is not adaptive w.r.t. some simple opponent, it is leaving "easy money" on the table. For example, a weakness of TFT is that it always cooperates against Cooperator, while a weakness of Pavlov is that it cooperates on every second step against Defector. At the very least, we want our strategy to be adaptive w.r.t. all memory-0 strategies. That is, our strategy should always defect against all strategies which cooperate with constant probability irrespective of past actions and thus provide no incentive to cooperate. DBS is adaptive w.r.t. all memory-1 strategies6, and its strong performance in tournaments lends pragmatic support to this desideratum. Footnote 6: It _tries_ to learn a memory-1 model of the opponent and react optimally to that. We leave aside the more difficult question of how reliably it achieves that in practice and in the presence of noise.
* Note that there are no evolutionarily stable strategies in the IPD (Selten and Hammerstein, 1984; Boyd and Lorberbaum, 1987; Farrell and Ware, 1989; Lorberbaum, 1994). Instead, if the set of strategies is not restricted, evolutionary dynamics will move populations between a variety of Nash equilibria with different levels of cooperation (Garcia and van Veelen, 2018).
Similar desiderata have been proposed by Neill (2001). They search for strategies which are "self-cooperating" (able to achieve mutual cooperation with their clone), "C-exploiting" (able to exploit unconditional cooperators), and "D-unexploitable" (able to resist exploitation by defectors). Using our terminology, the latter two correspond to being adaptive w.r.t. \(\Omega=\{\text{Cooperator},\text{Defector}\}\), the weakest non-trivial form of adaptiveness. We also note that being "D-unexploitable" is a corollary of being cooperation-inducing.
Similar desiderata have also been discussed in the context of population dynamics. IP\({}_{0}\)(Lee et al., 2015) is adaptive w.r.t. all memory-1 strategies (as discussed in Sec. 2.2), and self-cooperating by virtue of a noise-tolerant handshake mechanism. These properties make IP\({}_{0}\) both uninvadable and a strong invader when evaluated against memory-1 strategies. Knight et al. (2018) found that strategies which are best at invading populations of other strategies tend to be strategies trained to do well in tournaments; while the best resistors
of invasion invoke handshake mechanisms to cooperate with each other, but not with invaders.
Table 1 assesses fulfillment of the proposed desiderata by prominent strategies from previous literature as well as the strategies introduced in the present work. Fig. 3 empirically evaluates self-cooperativeness at \(p_{\text{noise}}=5\%\). CooperateISORevert1 and CooperateISORevert2 are introduced in the following section.
If we accept these desiderata, a natural next question is whether it is possible for a strategy to fulfill all of them; or, more precisely, what the "maximal" set \(\Omega\) is such that it is possible for a strategy to fulfill all of them. We note that it is not possible for a strategy to be both cooperation-inducing and adaptive w.r.t. all memory-1 strategies. This follows from the existence of extortionate ZD strategies (Press and Dyson, 2012), which are memory-1 strategies and have the property that optimally responding to them (in the sense of maximizing payoffs) yields an average payoff higher than \(R\) per step for them. So if a strategy is adaptive w.r.t. all memory-1 strategies, optimal play against that strategy can achieve an average payoff higher than \(R\) per step, implying less than \(R\) per step for the strategy. Let us define a strategy as _extortionate_ if the payoff-maximizing response to the strategy gives average payoffs higher than \(R\) to the strategy (which implies less than \(R\) for the opponent). The best we can hope for is thus a strategy which is self-cooperating, cooperation-inducing, and adaptive to all non-extortionate memory-1 strategies. The following section introduces such a strategy (to the best of our knowledge, the first such strategy).
| | Self-cooperating | Cooperation-inducing | Adaptive |
| --- | :---: | :---: | :---: |
| Pavlov | ✓ | ✗\({}^{7}\) | ✗ |
| TFT, Omega TFT | ✗ | ✓ | ✗ |
| Generous/Contrite TFT | ✓ | ✓ | ✗ |
| DBS | ✗ | ✗\({}^{8}\) | ✓\({}^{(1)}\) |
| IP\({}_{0}\) | ✓ | ✗ | ✓\({}^{(1)}\) |
| Longterm TFT | ✓ | ✓ | ✗ |
| ISO, CooperateISO | ✗ | ✗ | ✓\({}^{(1)}\) |
| CooperateISORevert1 | ✓ | ✗ | ✓\({}^{(1)}\) |
| CooperateISORevert2 | ✓ | ✓ | ✓\({}^{(2)}\) |

Table 1: Fulfillment of the three proposed desiderata by prominent or newly introduced strategies. The second half of the table describes strategies introduced in the present work. For adaptiveness, \({}^{(1)}\) denotes being adaptive w.r.t. all memory-1 strategies, and \({}^{(2)}\) denotes being adaptive w.r.t. all _non-extortionate_ memory-1 strategies.
Figure 3: Average payoff per step achieved by various strategies in self-play (playing against a clone) in the IPD, using \(p_{\text{noise}}=5\%\). The top figure shows strategies from previous literature, the bottom figure shows strategies introduced in the present work. Each payoff is averaged over 100 self-play games. The numbers in parentheses after each strategy name show the average payoff per step achieved by that strategy (averaged over 1000 steps per game and 100 games). Dashed lines show three simple benchmarks: always cooperate, cooperate/defect randomly with 50% probability, always defect, while including the effects of noise.
## 4 Cooperate, adapt, and revert: fulfilling all desiderata
In order to create a strategy which is self-cooperating, cooperation-inducing, and adaptive to all non-extortionate memory-1 strategies, we start with CooperateISO and add two rules to it for reverting from the state in which it plays ISO to the state in which it plays Longterm TFT. The basic idea is illustrated in Fig. 1.
### Becoming self-cooperating: revert after poor performance
CooperateISO is adaptive, but neither self-cooperating nor cooperation-inducing. When playing against itself, it will initially be highly forgiving as long as it is in the Longterm TFT state. Optimal play against a forgiving opponent is to defect. Both clones will thus switch to playing ISO and always defecting. They will then update their model of the opponent to being highly defective, against which the optimal response is to defect further. (DBS goes through a similar sequence of updates when playing against a clone.)
In order to break this cycle, we add a rule to CooperateISO to revert from the ISO state to the Longterm TFT state if the former empirically performs worse than the latter. This ensures self-cooperativeness, and might also seem like a prudent choice when playing against other opponents. We call the resulting strategy _CooperateISORevert1_. In order to decide whether ISO's payoffs are lower than the ones of Longterm TFT, we use a standard significance test. Formally, let \(N_{c}\) be the number of rounds for which we played Longterm TFT, \(U_{c}\) the average payoff per step, and \(\sigma_{c}\) the corresponding standard deviation; and let \(N_{a}\), \(U_{a}\), and \(\sigma_{a}\) be the analogous values while playing ISO. We revert from playing ISO to playing Longterm TFT if \(N_{a}\geq 10\) and
\[\frac{U_{c}-U_{a}}{\sqrt{\frac{\sigma_{c}^{2}}{N_{c}}+\frac{\sigma_{a}^{2}}{N_{a}}}}>2. \tag{4}\]
The effect of this additional rule can be seen in the bottom plot in Fig. 3. When CooperateISO plays against a clone, payoffs degrade to far below the level of mutual cooperation. For CooperateISORevert1 (and CooperateISORevert2, which will be introduced in the following subsection), payoffs degrade similarly for the first few dozen steps. They subsequently recover when the strategy is likely to have reverted to playing Longterm TFT.
### Becoming cooperation-inducing: revert if extorted
CooperateISORevert1 is still not cooperation-inducing; an extortionate ZD strategy can still get rewards per step higher than \(R\), if ISO responds optimally to
it.9 In order to create a strategy which is cooperation-inducing, we add a second rule for reverting from ISO to Longterm TFT: revert to Longterm TFT if ISO is being extorted, which we define as the opponent achieving payoffs per step which are higher than \(R\). We call the resulting strategy _CooperateISORevert2_. Formally, let \(N_{o}\), \(U_{o}\), and \(\sigma_{o}\) as above describe the payoffs which the opponent achieves while playing against ISO. We revert to playing Longterm TFT if \(N_{o}\geq 10\) and \(U_{o}-2\sigma_{o}/\sqrt{N_{o}}>R\).
Footnote 9: Extortionate strategies face difficulties in the presence of noise, however; see Hao et al. (2015) for a discussion.
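A minimal sketch of the two revert tests follows (names ours; each `*_payoffs` argument is the per-round payoff history of the indicated phase):

```python
import numpy as np

def revert_poor_performance(coop_payoffs, adapt_payoffs):
    """Revert from ISO to Longterm TFT if ISO performs significantly worse, Eq. (4)."""
    N_c, N_a = len(coop_payoffs), len(adapt_payoffs)
    if N_a < 10:
        return False
    U_c, U_a = np.mean(coop_payoffs), np.mean(adapt_payoffs)
    se = np.sqrt(np.var(coop_payoffs) / N_c + np.var(adapt_payoffs) / N_a)
    return (U_c - U_a) / max(se, 1e-12) > 2

def revert_if_extorted(opp_payoffs, R=3.0):
    """Revert if the opponent significantly exceeds R per step against ISO."""
    N_o = len(opp_payoffs)
    if N_o < 10:
        return False
    U_o, sigma_o = np.mean(opp_payoffs), np.std(opp_payoffs)
    return U_o - 2 * sigma_o / np.sqrt(N_o) > R
```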
By construction, it is not possible to achieve payoffs larger than \(R\) in the steady state against CooperateISORevert2 while CooperateISORevert2 is in the ISO state. TFT is the least cooperative strategy which ensures Longterm TFT's cooperation in the steady state, and so is the best response to Longterm TFT. The average payoff per step which TFT achieves against Longterm TFT is \(R+(S+2T-3R)p_{\text{noise}}+O(p_{\text{noise}}^{2})\), since the two leading-order effects of noise are Longterm TFT accidentally defecting and TFT retaliating, and TFT accidentally defecting with Longterm TFT _not_ retaliating. With the conventional payoffs, \(S+2T-3R>0\). TFT thus also achieves payoffs of \(R+(S+2T-3R)p_{\text{noise}}+O(p_{\text{noise}}^{2})\geq R\) against CooperateISORevert2, making it the best response to CooperateISORevert2. This makes CooperateISORevert2 cooperation-inducing.
Fig. 4 evaluates all strategies developed in the present work against the strategies in the Axelrod library. We also compare all of them to the previously existing strategies which, to the best of our knowledge, show the strongest performance: EvolvedLookerUp2_2_2 for \(p_{\text{noise}}=0\) and DBS for \(p_{\text{noise}}>0\). Ensuring that the strategy is both self-cooperating and cooperation-inducing, i.e., playing CooperateISORevert2 instead of CooperateISO, has a small cost in performance against this pool of opponents. However, CooperateISORevert2 still outperforms the best-performing previously existing strategies at all noise levels.
## 5 Discussion
While even some bacteria show TFT-like behavior (Smith et al., 2020), the strategies introduced in the present work are arguably too complex to evolve in biological systems. The lessons from this work are thus more relevant in the context of humans, human organizations, and human-designed systems which interact in iterated social dilemmas. In such contexts, it is plausible that opponents imitate or analyze our strategy and adapt theirs to it. This makes it desirable that our strategy be both self-cooperating and cooperation-inducing.
When facing a diverse set of opponents, CooperateISO shows the value of enabling robust mutual cooperation, while also being ready to adapt oneself to the opponent if it does not incentivize cooperative behavior. We have shown that such a strategy can be updated to one which is both self-cooperating and cooperation-inducing, without incurring a large loss in performance even in the more narrow context of a tournament against a fixed set of opponents.
Figure 4: Evaluation of all strategies introduced in the present work against the 239 strategies in the Axelrod library, evaluated at noise levels of 0%, 1%, 5%, and 10%. “Best previous” shows the performance of EvolvedLookerUp2_2_2 for \(p_{\text{noise}}=0\) and DBS for \(p_{\text{noise}}>0\). Each IPD lasts 400 steps, and each evaluated strategy plays 5 IPDs against each opponent. Shaded regions show one standard error calculated as \(\sigma/\sqrt{5\cdot 400}\), where \(\sigma\) is the sample standard deviation of the average payoff per step.
## Acknowledgements
The author thanks Marc Harper for valuable comments on this manuscript.
|
2304.02878 | Online Stabilization of Unknown Linear Time-Varying Systems | This paper studies the problem of online stabilization of an unknown
discrete-time linear time-varying (LTV) system under bounded non-stochastic
(potentially adversarial) disturbances. We propose a novel control algorithm
based on convex body chasing (CBC). Under the assumption of infrequently
changing or slowly drifting dynamics, the algorithm guarantees
bounded-input-bounded-output stability in the closed loop. Our approach avoids
system identification and applies, with minimal disturbance assumptions, to a
variety of LTV systems of practical importance. We demonstrate the algorithm
numerically on examples of LTV systems including Markov linear jump systems
with finitely many jumps. | Jing Yu, Varun Gupta, Adam Wierman | 2023-04-06T05:51:15Z | http://arxiv.org/abs/2304.02878v2 | # Online Stabilization of Unknown Linear Time-Varying Systems
###### Abstract
This paper studies the problem of online stabilization of an unknown discrete-time linear time-varying (LTV) system under bounded non-stochastic (potentially adversarial) disturbances. We propose a novel algorithm based on convex body chasing (CBC). Under the assumption of infrequently changing or slowly drifting dynamics, the algorithm guarantees bounded-input-bounded-output stability in the closed loop. Our approach avoids system identification and applies, with minimal disturbance assumptions, to a variety of LTV systems of practical importance. We demonstrate the algorithm numerically on examples of LTV systems including Markov linear jump systems with finitely many jumps.
## I Introduction
Learning-based control of linear time-invariant (LTI) systems in the context of linear quadratic regulators (LQR) has seen considerable progress. However, many real-world systems are time-varying in nature. For example, the grid topology in power systems can change over time due to manual operations or unpredictable line failures [1]. Therefore, there is increasing recent interest in extending learning-based control of LTI systems to the linear time-varying (LTV) setting [2, 3, 4, 5, 6].
LTV systems are widely used to approximate and model real-world dynamical systems such as robotics [7] and autonomous vehicles [8]. In this paper, we consider LTV systems with dynamics of the following form:
\[x_{t+1}=A_{t}x_{t}+B_{t}u_{t}+w_{t}, \tag{1}\]
where \(x_{t}\in\mathbb{R}^{n}\), \(u_{t}\in\mathbb{R}^{m}\), and \(w_{t}\) denote the state, the control input, and the bounded and potentially adversarial disturbance, respectively. We use \(\theta_{t}=[A_{t}\ B_{t}]\) to succinctly denote the system matrices at time step \(t\).
On the one hand, offline controller design for LTV systems is well-established in the setting where the underlying LTV model is _known_[9, 10, 11, 12]. Additionally, recent work has started focusing on regret analysis and non-stochastic disturbances for known LTV systems [2, 13].
On the other hand, online control design for LTV systems where the model is _unknown_ is more challenging. Historically, there is a rich body of work on adaptive control design for LTV systems [14, 15, 16]. Also related is the system identification literature for LTV systems [17, 18, 19], which estimates the (generally assumed to be stable) system to allow the application of the offline techniques.
In recent years, the potential to leverage modern data-driven techniques for controller design of unknown linear systems has led to a resurgence of work in both the LTI and LTV settings. There is a growing literature on "learning to control" unknown LTI systems under stochastic or no noise [20, 21, 22]. Learning under bounded and potentially adversarial noises poses additional challenges, but online stabilization [23] and regret [24] results have been obtained.
In comparison, there is much less work on learning-based control design for unknown LTV systems. One typical approach, exemplified by [3, 25, 26], derives stabilizing controllers under the assumption that _offline_ data representing the input-output behavior of (1) is available and therefore an _offline_ stabilizing controller can be pre-computed. Similar _finite-horizon_ settings where the algorithm has access to offline data [27], or can _iteratively_ collect data [28] were also considered. In the context of _online_ stabilization, i.e., when offline data is not available, work has derived stabilizing controllers for LTV systems through the use of predictions of \(\theta_{t}\), e.g., [29]. Finally, another line of work focuses on designing regret-optimal controllers for LTV systems [30, 6, 4, 5, 31]. However, with the exception of [29], existing work on _online_ control of unknown LTV systems shares the common assumption of either open-loop stability or knowledge of an offline stabilizing controller. Moreover, the disturbances are generally assumed to be zero or stochastic noise independent of the states and inputs.
In this paper, we propose a model-based approach for stabilizing an unknown LTV system under arbitrary non-stochastic disturbances. Our approach uses ideas from convex body chasing (CBC), which is an online learning problem where an agent must choose a sequence of points within sequentially presented convex sets with the aim of minimizing the sum of distances between the chosen points [32, 33]. CBC has emerged as a promising tool in controller design, with most work making connections to a special case of CBC called _nested_ convex body chasing (NCBC), where the convex sets are sequentially nested within the previous set [34, 35]. In particular, [36] first explored the use of NCBC in the context of learning-based control of time-invariant nonlinear systems. NCBC was also used in combination with System Level Synthesis to design a distributed controller for networked systems [23] and in combination with model predictive control [37] for LTI system control as a promising alternative to system identification based methods. However, this line of work depends fundamentally on the time invariance of the system, which results in nested convex sets. LTV systems do not yield nested convex bodies and therefore represent a significant challenge.
Our work addresses this challenge and presents a novel online control algorithm (Algorithm 1) based on CBC (non-nested) techniques that guarantees bounded-input-bounded-output (BIBO) stability as a function of the total model variation \(\sum_{t=1}^{\infty}\|\theta_{t}-\theta_{t-1}\|\), without predictions or offline data, under bounded arbitrary disturbances for unknown LTV systems (Theorem 1). This result implies that when the total model variation is finite or growing sublinearly, BIBO stability of the closed loop is guaranteed (Corollaries 1 and 2). In particular, our result depends on a refined analysis of the CBC technique (Lemma 1) and is based on the perturbation analysis of the Lyapunov equation. This contrasts with previous NCBC-based works for time-invariant systems, where the competitive ratio guarantee of NCBC directly applies and the main technical tool is the robustness of the model-based controller, which is proven using a Lipschitz bound of a quadratic program in [23] and is directly assumed to exist in [36, 37].
We illustrate the proposed algorithm via numerical examples in Section IV to corroborate the stability guarantees. We demonstrate how the proposed algorithm can be used for data collection and complement data-driven methods like [26, 3, 27]. Further, the numerics highlight that the proposed algorithm can be efficiently implemented by leveraging the linearity of (1) despite the computational complexity of CBC algorithms in general (see Section III-B for details).
**Notation.** We denote \(\mathbb{S}^{n-1}\) as the unit sphere in \(\mathbb{R}^{n}\) and \(\mathbb{N}\left(\mathbb{N}_{+}\right)\) for (strictly) positive integers. For \(t,s\in\mathbb{N}\), we use \([t:s]\) as shorthand for the set of integers \(\{t,t+1,\ldots,s\}\) and \([t]\) for \(\{1,2,\ldots,t\}\). Unless otherwise specified, \(\|\cdot\|\) is the operator norm. We use \(\rho(\cdot)\) and \(\lambda_{\text{max}}(\cdot)(\lambda_{\text{min}}(\cdot))\) for the spectral radius and the maximum (minimum) eigenvalue of a matrix.
## II Preliminaries
In this section, we state the model assumptions underlying our work and review key results for convex body chasing, which we leverage in our algorithm design and analysis.
### _Stability and model assumptions_
We make the following standard assumptions about the dynamics in (1).
**Assumption 1**: _The disturbances are norm-bounded: \(\|w_{t}\|_{\infty}\leq W\) for all \(t\in\mathbb{N}\)._
**Assumption 2**: _The unknown time-varying system matrices \(\{\theta_{t}\}_{t=0}^{\infty}\) belong to a known (potentially large) polytope \(\Theta\) such that \(\theta_{t}\in\Theta\) for all \(t\). Moreover, there exists \(\kappa>0\) such that \(\|\theta\|\leq\kappa\) and \(\theta\) is stabilizable for all \(\theta\in\Theta\)._
Bounded and potentially adversarial disturbances are a common model in both online learning and control problems [38, 39]. Since we make no assumptions on how large the bound \(W\) is, Assumption 1 models a variety of scenarios, such as bounded and/or correlated stochastic noise, as well as state-dependent disturbances such as the linearization and discretization error for nonlinear continuous-time dynamics. Assumption 2 is standard in the adaptive control and the learning-based control literature, e.g. [40, 41, 42].
**Remark 1**: _Representing model uncertainty as convex compact parameter sets where every model is stabilizable is not always possible. In particular, if a parameter set \(\Theta\) has a few singular points where \((A,B)\) loses stabilizability such as when \(B=0\), a simple heuristic is to ignore these points in the algorithm since we assume the underlying true system matrices \(\theta_{t}\) must be stabilizable._
A classical result in the theory of LTI systems is that for the infinite-horizon quadratic cost minimization problem where the stage cost is \(x_{t}^{\top}Qx_{t}+u_{t}^{\top}Ru_{t}\) with \(Q,R>0\), the optimal controller is a linear feedback gain \(K:=LQR(\theta;Q,R)\), which is parameterized by dynamics \(\theta\) and cost matrices \(Q,R\). In the proposed algorithm, we will adopt the optimal linear feedback controller as the model-based control methods such that appropriate quadratic cost matrices \(Q\) and \(R\) can be chosen for performance tuning.
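As an illustration, here is a minimal scipy sketch of the gain computation \(K=LQR(\theta;Q,R)\) via the discrete algebraic Riccati equation (function name ours):

```python
import numpy as np
from scipy.linalg import solve_discrete_are

def lqr(A, B, Q, R):
    """Infinite-horizon discrete-time LQR gain K, for feedback u_t = K x_t."""
    R = np.atleast_2d(R)
    P = solve_discrete_are(A, B, Q, R)            # Riccati solution
    return -np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
```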
### _Convex body chasing_
Convex Body Chasing (CBC) is a well-studied online learning problem [34, 35]. At every round \(t\in\mathbb{N}_{+}\), the player is presented a convex body/set \(\mathcal{K}_{t}\subset\mathbb{R}^{n}\). The player selects a point \(q_{t}\in\mathcal{K}_{t}\) with the objective of minimizing the cost defined as the total path length of the selection for \(T\) rounds, e.g., \(\sum_{t=1}^{T}\|q_{t}-q_{t-1}\|\) for a given initial condition \(q_{0}\notin\mathcal{K}_{1}\). There are many known algorithms for the CBC problem with a _competitive ratio_ guarantee such that the cost incurred by the algorithm is at most a constant factor from the cost incurred by the offline optimal algorithm which has the knowledge of the entire sequence of the bodies.
#### II-B1 The nested case
A special case of CBC is the _nested_ convex body chasing (NCBC) problem, where \(\mathcal{K}_{t}\subseteq\mathcal{K}_{t-1}\). A known algorithm for NCBC is to select the _Steiner point_ of \(\mathcal{K}_{t}\) at \(t\) [35]. The Steiner point of a convex body \(\mathcal{K}\) can be interpreted as the average of the extreme points of \(\mathcal{K}\) and is defined as \(\mathsf{st}(\mathcal{K}):=\mathbb{E}_{\nu:\|\nu\|\leq 1}\left[g_{\mathcal{K}}(\nu)\right]\), where \(g_{\mathcal{K}}(\nu):=\operatorname*{argmax}_{x\in\mathcal{K}}\nu^{\top}x\) and the expectation is taken with respect to the uniform distribution over the unit ball. The intuition is that the Steiner point remains "deep" inside of the (nested) feasible region so that when this point becomes infeasible due to a new convex body, this convex body must shrink considerably, which indicates that the offline optimal must have moved a lot. Given the initial condition \(q_{0}\notin\mathcal{K}_{1}\), the Steiner point selector achieves a competitive ratio of \(O(n)\) against the offline optimal such that for all \(T\in\mathbb{N}_{+}\), \(\sum_{t=1}^{T}\|\mathsf{st}(\mathcal{K}_{t})-\mathsf{st}(\mathcal{K}_{t-1})\|\leq O(n)\cdot\text{OPT}\), where OPT is the offline optimal total path length. There are many works that combine the Steiner point algorithm for NCBC with existing control methods to perform learning-based online control for LTI systems, e.g., [36, 37, 23].
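For intuition, here is a minimal Monte Carlo sketch estimating the Steiner point of a bounded polytope \(\{x:Gx\leq h\}\): sample directions, solve the support linear program \(g_{\mathcal{K}}(\nu)\) for each, and average. All names are ours.

```python
import numpy as np
from scipy.optimize import linprog

def steiner_point(G, h, n_samples=1000, seed=0):
    """Monte Carlo estimate of st(K) for the bounded polytope K = {x : G x <= h}."""
    rng = np.random.default_rng(seed)
    n = G.shape[1]
    pts = []
    for _ in range(n_samples):
        nu = rng.normal(size=n)
        nu /= np.linalg.norm(nu)   # direction on the unit sphere (g_K is 0-homogeneous)
        # g_K(nu) = argmax_{x in K} nu^T x; linprog minimizes, so negate the objective
        res = linprog(-nu, A_ub=G, b_ub=h, bounds=[(None, None)] * n)
        pts.append(res.x)
    return np.mean(pts, axis=0)
```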
#### II-B2 General CBC
For general CBC problems, we can no longer take advantage of the nested property of the convex bodies. One may consider naively applying NCBC algorithms when the convex bodies happen to be nested and restarting the NCBC algorithm when they are not. However, due to the myopic nature of NCBC algorithms, which try to remain deep inside of each convex set, they no longer
guarantee a competitive ratio when used this way. Instead, [32] generalizes ideas from NCBC and proposes an algorithm that selects the _functional Steiner point_ of the _work function_.
**Definition 1** (Functional Steiner point): _For a convex function \(f:\mathbb{R}^{n}\rightarrow\mathbb{R}\), the functional Steiner point of \(f\) is,_
\[\mathfrak{st}(f)=-n\cdot\fint_{v:\|v\|=1}f^{*}(v)\,v\,dv, \tag{2}\]
_where \(\fint_{x\in\mathcal{S}}f(x)dx\) denotes the normalized value \(\frac{\int_{x\in\mathcal{S}}f(x)dx}{\int_{x\in\mathcal{S}}1dx}\) of \(f(x)\) on the set \(\mathcal{S}\), and_
\[f^{*}(v):=\inf_{x\in\mathbb{R}^{n}}f(x)-\langle x,v\rangle \tag{3}\]
_is the Fenchel conjugate of \(f\)._
The CBC algorithm selects the functional Steiner point of the _work function_, which records the smallest cost required to satisfy a sequence of requests while ending in a given state, thereby encapsulating information about the offline-optimal cost for the CBC problem.
**Definition 2** (Work function): _Given an initial point \(q_{0}\in\mathbb{R}^{n}\), and convex sets \(\mathcal{K}_{1},\ldots,\mathcal{K}_{t}\subset\mathbb{R}^{n}\), the work function at time step \(t\) evaluated at a point \(x\in\mathbb{R}^{n}\) is given by:_
\[\omega_{t}(x)=\min_{q_{s}\in\mathcal{K}_{s},\,s\in[t]}\left(\|x-q_{t}\|+\sum_{s=1}^{t}\|q_{s}-q_{s-1}\|\right). \tag{4}\]
Importantly, it is shown that the functional Steiner points of the work functions are valid, i.e., \(\mathfrak{st}(\omega_{t})\in\mathcal{K}_{t}\) for all \(t\)[32]. On a high level, selecting the functional Steiner point of the work function helps the algorithm stay competitive against the currently estimated offline optimal cost via the work function, resulting in a competitive ratio of \(n\) against the offline optimal cost (OPT) for general CBC problems,
\[\sum_{t=1}^{T}\|\mathfrak{st}(\omega_{t})-\mathfrak{st}(\omega_{t+1})\|\leq n \cdot\text{OPT}. \tag{5}\]
Given the non-convex nature of (2) and (4), we note that, in general, it is challenging to compute the functional Steiner point of the work function. However, in the proposed algorithm, we are able to leverage the linearity of the LTV systems and numerically approximate both objects with efficient computation in Section III-B.
## III Main Results
We present our proposed online control algorithm to stabilize the unknown LTV system (1) under bounded and potentially adversarial disturbances in Algorithm 1. After observing the latest transition from \(x_{t},u_{t}\) to \(x_{t+1}\) at \(t+1\) according to (1) (line 2), the algorithm constructs the set of all feasible models \(\widehat{\theta}_{t}\)'s (line 3) such that the model is _consistent_ with the observation, i.e., there exists an admissible disturbance \(\widehat{w}_{t}\) satisfying Assumption 1 such that the state transition from \(x_{t},u_{t}\) to \(x_{t+1}\) can be explained by the tuple \((\widehat{\theta}_{t},\,\widehat{w}_{t})\). We call this set the _consistent model set_\(\mathcal{P}_{t}\) and we note that the unknown true dynamics \(\theta_{t}=[A_{t}\,\,B_{t}]\) belongs to \(\mathcal{P}_{t}\). The algorithm then selects a _hypothesis_ model out of the consistent model set \(\mathcal{P}_{t}\) using the CBC algorithm by computing the functional Steiner point (2) of the work function (4) with respect to the history of the consistent parameter sets \(\mathcal{P}_{1},\ldots,\mathcal{P}_{t}\) (line 4). In particular, we present an efficient implementation of the functional Steiner point chasing algorithm in Section III-B by taking advantage of the fact that \(\mathcal{P}_{t}\)'s are polytopes that can be described by intersection of half-spaces. The implementation is summarized in Algorithm 2. Based on the selected hypothesis model \(\widehat{\theta}_{t}\), a certainty-equivalent LQR controller is synthesized (line 5) and the state-feedback control action is computed (line 6).
Note that, by construction, at time step \(t\in\mathbb{N}_{+}\) we perform certainty-equivalent control \(\widehat{K}_{t-1}\) based on a hypothesis model \(\widehat{\theta}_{t-1}\) computed using retrospective data, even though the control action (\(u_{t}=\widehat{K}_{t-1}x_{t}\)) is applied to the dynamics (\(\theta_{t}\)) that we do not yet have any information about. In order to guarantee stability, we would like \(\widehat{K}_{t-1}\) to stabilize the "future" dynamics (\(\theta_{t}\)). This is the main motivation behind our choice of the CBC technique instead of regression-based techniques for model selection. Thanks to the competitive ratio guarantee (5) of the functional Steiner point selector, when the true model variation is "small," our previously selected hypothesis model will stay "consistent" in the sense that \(\widehat{K}_{t-1}\) can be stabilizing for \(\theta_{t}\) despite the potentially adversarial or state-dependent disturbances. On the other hand, when the true model variation is "large," \(\widehat{K}_{t-1}\) does not stabilize \(\theta_{t}\), and we see growth in the state norm. Therefore, our final state bound is in terms of the total variation of the true model.
We show in the next section that, by drawing connections between the stability of the closed-loop system and the path length cost of the selected hypothesis model via CBC, we are able to stabilize the unknown LTV system without any identification requirements, e.g., the selected hypothesis models in Algorithm 1 need not be close to the true models. It is observed that even in the LTI setting, system identification can result in large-norm transient behaviors with numerical stability issues if the underlying unknown system is open-loop unstable or under non-stochastic disturbances; thus motivating the development of NCBC-based online control methods [24, 23, 36]. In the LTV setting, it is not sufficient to use NCBC ideas due to the time-variation of the model; however, the intuition for the use of CBC is similar. In fact, it can be additionally beneficial to bypass identification in settings where the true model is a moving target, thus making identification more challenging. We illustrate this numerically in Section IV.
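To fix ideas, below is a heavily simplified, runnable sketch of the control loop of Algorithm 1 for a scalar system (\(n=m=1\)). It is not the full algorithm: the hypothesis model is selected by projecting the previous hypothesis onto the latest consistent set (a least-movement heuristic standing in for the functional Steiner point selector of Algorithm 2), and a deadbeat gain stands in for certainty-equivalent LQR. All names and numerical values are ours.

```python
import numpy as np
import cvxpy as cp

W, THETA_BOX = 0.1, 3.0        # disturbance bound; assumed polytope Theta = [-3, 3]^2
rng = np.random.default_rng(1)

def select_consistent(theta_prev, x, u, x_next):
    """Stand-in for line 4: least-movement point in P_t (within Theta).
    The actual algorithm chases the functional Steiner point of the work function."""
    th = cp.Variable(2)
    cons = [cp.abs(th) <= THETA_BOX,
            cp.abs(x_next - th[0] * x - th[1] * u) <= W]   # consistency (line 3)
    cp.Problem(cp.Minimize(cp.sum_squares(th - theta_prev)), cons).solve()
    return th.value

x, u, theta_hat = 1.0, 0.0, np.array([0.0, 1.0])
for t in range(60):
    a_t = 1.2 + 0.1 * np.sin(0.05 * t)                 # slowly drifting, open-loop unstable
    x_next = a_t * x + 1.0 * u + rng.uniform(-W, W)    # observe transition (line 2)
    theta_hat = select_consistent(theta_hat, x, u, x_next)   # lines 3-4 (simplified)
    a_hat, b_hat = theta_hat
    K = -a_hat / b_hat if abs(b_hat) > 1e-3 else 0.0   # line 5: deadbeat stand-in for LQR
    x = x_next
    u = K * x                                          # line 6
print(f"final |x| = {abs(x):.3f}")
```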
### _Stability Analysis_
The main result of this paper is the BIBO stability guarantee for Algorithm 1 in terms of the true model variation and the disturbance bound. We sketch the proof in this section and refer Appendix C for the formal proof. This result depends on a refined analysis of the competitive ratio for the functional Steiner point chasing algorithm introduced in [32], which is stated in Lemma 1 and proven in Appendix A.
**Lemma 1** (Partial-path competitive ratio): _For \(t\in\mathbb{N}_{+}\), let \(s,e\in[t]\) and \(s<e\), and let \(\Theta\subset\mathbb{R}^{n}\) be a convex compact set. Denote \(\widehat{\Delta}_{[s,e]}:=\sum_{r=s+1}^{e}\|\mathfrak{st}(\omega_{r})-\mathfrak{st}(\omega_{r-1})\|_{F}\) as the partial-path cost of the functional Steiner point selector during interval \([s,e]\) and \(\{\operatorname{OPT}_{r}\}_{r=1}^{t}\) as the (overall) offline optimal selection for \(\mathcal{K}_{1},\ldots,\mathcal{K}_{t}\subset\Theta\). The functional Steiner point chasing algorithm has the following competitive ratio,_
\[\widehat{\Delta}_{[s,e]}\leq n\left(\mathsf{dia}(\Theta)+2\kappa+\sum_{r=s+1}^{e}\|\mathrm{OPT}_{r}-\mathrm{OPT}_{r-1}\|_{F}\right)\]
_on the interval \([s,e]\), where \(\mathsf{dia}(\Theta):=\max_{\theta_{1},\theta_{2}\in\Theta}\|\theta_{1}-\theta_{2}\|_{F}\) denotes the diameter of \(\Theta\) and \(\kappa:=\max_{\theta\in\Theta}\|\theta\|_{F}\)._
**Theorem 1** (BIBO Stability): _Under Assumption 1 and 2, the closed loop of (1) under Algorithm 1 is BIBO stable such that for all \(t\geq 0\),_
\[\|x_{t}\|\leq c_{1}\sum_{s=0}^{t-2}c_{2}^{\Delta_{[s,t-1]}}\rho_{L}^{t-s}\]
_where \(\Delta_{[s,t-1]}:=\sum_{r=s+1}^{t-1}\|\theta_{r}-\theta_{r-1}\|_{F}\) is the true model variation and \(c_{1},c_{2}>0,\rho_{L}\in(0,1)\) are constants that depend on the system-theoretical quantities of the worst-case model in the parameter set \(\Theta\)._
_Proof Sketch:_ At a high level, the structure of our proof is as follows. We first use the fact that our time-varying feedback gain \(\widehat{K}_{t}\) is computed according to a hypothesis model from the _consistent_ model set. Therefore, we can characterize the closed-loop dynamics in terms of the consistent models \(\widehat{\theta}_{t}\) and \(\widehat{K}_{t}\). Specifically, consider a time step \(t\) where we take the action \(u_{t}=\widehat{K}_{t-1}x_{t}\) after observing \(x_{t}\). Then, we observe \(x_{t+1}=A_{t}x_{t}+B_{t}u_{t}+w_{t}\) and select a new hypothesis model \(\widehat{\theta}_{t}=[\widehat{A}_{t}\ \widehat{B}_{t}]\) that is consistent with this new observation. Since we have selected a consistent hypothesis model, there is some admissible disturbance \(\widehat{w}_{t}\) satisfying Assumption 1 such that
\[x_{t+1}=\left(A_{t}+B_{t}\widehat{K}_{t-1}\right)x_{t}+w_{t}= \left(\widehat{A}_{t}+\widehat{B}_{t}\widehat{K}_{t-1}\right)x_{t}+\widehat{w} _{t}.\]
We therefore have
\[x_{t}=\widehat{w}_{t-1}+\sum_{s=0}^{t-2}\prod_{\tau\in[t-1:s+1]} \left(\widehat{A}_{\tau}+\widehat{B}_{\tau}\widehat{K}_{\tau-1}\right)\widehat{ w}_{s}. \tag{6}\]
We have two main challenges in bounding \(\|x_{t}\|\) in (6):
1. \(\widehat{K}_{t}\) is computed using \(\widehat{\theta}_{t}\) in Algorithm 1, but is applied during the next time step, to \(\widehat{\theta}_{t+1}\). While we know \(\rho(\widehat{A}_{t}+\widehat{B}_{t}\widehat{K}_{t})<1\), in (6) we have \(\widehat{K}_{t-1}\) instead of \(\widehat{K}_{t}\).
2. Naively applying submultiplicativity of the operator norm in (6) results in bounding \(\left\|\widehat{A}_{\tau}+\widehat{B}_{\tau}\widehat{K}_{\tau-1}\right\|\). However, even if \(\rho(\widehat{A}_{\tau}+\widehat{B}_{\tau}\widehat{K}_{\tau-1})<1\), in general the operator norm can be greater than \(1\).
To address the first challenge, our key insight is that by selecting hypothesis models via the CBC technique, in any interval where the true model variation is small, our selected hypothesis models also vary little. Specifically, by Lemma 1, we can bound the partial-path variation of the selected hypothesis models by the true model partial-path variation \(\Delta_{[s,e]}\) as follows.
\[\widehat{\Delta}_{[s,e]} \leq n\left(\mathsf{dia}(\Theta)+2\kappa+\sum_{r=s+1}^{e}\|\mathrm{OPT}_{r}-\mathrm{OPT}_{r-1}\|_{F}\right)\] \[\leq n\left(\mathsf{dia}(\Theta)+2\kappa+\Delta_{[s,e]}\right). \tag{7}\]
where \(\Theta\) and \(\kappa\) are from Assumption 2. A consequence of (7) is that, during intervals where the true model variation is small, we have \(\left(\widehat{A}_{t}+\widehat{B}_{t}\widehat{K}_{t-1}\right)\approx\left( \widehat{A}_{t}+\widehat{B}_{t}\widehat{K}_{t}\right)\).
For the second challenge, we leverage the concept of sequential strong stability [43], which allows bounding \(\left\|\prod_{\tau\in[t-1:s+1]}\left(\widehat{A}_{\tau}+\widehat{B}_{\tau}\widehat{K}_{\tau}\right)\right\|\) approximately by \(\prod_{\tau\in[t-1:s+1]}\rho\left(\widehat{A}_{\tau}+\widehat{B}_{\tau}\widehat{K}_{\tau}\right)\) times \(\mathcal{O}\left(\exp(\Delta_{[s,t-1]})\right)\).
We now sketch the proof. The helper lemmas are summarized in Appendix B and the formal proof can be found in Appendix C. Consider \(L_{t},H_{t}\in\mathbb{R}^{n\times n}\) with \(H_{t}>0\) such that
\[\widehat{A}_{t}+\widehat{B}_{t}\widehat{K}_{t-1}:=H_{t}^{1/2}L_{t}H_{t}^{-1/2}.\]
We use \(I_{s}\) as shorthand for the interval \([t-1:s+1]\). Then each summand in (6) can be bounded as
\[\left\|\prod_{\tau\in I_{s}}\left(\widehat{A}_{\tau}+\widehat{B}_{\tau}\widehat{K}_{\tau-1}\right)\right\|\leq\underbrace{\left\|H_{t-1}^{1/2}\right\|}_{(a)}\cdot\underbrace{\prod_{k\in I_{s+1}}\left\|H_{k}^{-1/2}H_{k-1}^{1/2}\right\|}_{(b)}\cdot\underbrace{\prod_{\tau\in I_{s}}\left\|L_{\tau}\right\|}_{(c)} \tag{8}\]
Therefore showing BIBO stability comes down to bounding individual terms in (8). In particular we will show that
by selecting appropriate \(H_{t}\) and \(L_{t}\), term (a) is bounded by a constant \(C_{H}\) that depends on system theoretical properties of the worst-case parameter in \(\Theta\). For (b) and (c), we isolate the instances when
\[\left\|\widehat{\theta}_{t}-\widehat{\theta}_{t-1}\right\|_{F}\leq\epsilon \tag{9}\]
for some chosen \(\epsilon>0\). For instances where (9) holds, we use the perturbation analysis of the Lyapunov equation involving the matrix \(\widehat{A}_{t}+\widehat{B}_{t}\widehat{K}_{t-1}\) (Lemma 6 for (b) and Lemma 4 for (c)) to bound (b) and (c) in terms of the partial-path movement of the selected parameters \(\widehat{\Delta}_{[s,e]}:=\sum_{r=s+1}^{e}\|\mathfrak{st}(\omega_{r})-\mathfrak{st}(\omega_{r-1})\|_{F}\). Specifically, Lemma 6 implies
\[\left\|H_{t}^{-1/2}H_{t-1}^{1/2}\right\|\leq\begin{cases}e^{\frac{\beta\|\widehat{\theta}_{t}-\widehat{\theta}_{t-1}\|_{F}}{2}},&\text{if (9) holds}\\ \bar{H}&\text{otherwise},\end{cases} \tag{10}\]
where \(\beta,\bar{H}>1\) are constants. From Lemma 4, we also show that
\[\|L_{t}\|\leq\begin{cases}\rho_{L}&\text{if (9) holds}\\ \bar{L}&\text{otherwise},\end{cases} \tag{11}\]
for \(\rho_{L}\in(0,1)\) and \(\bar{L}>1\) a constant.
We now plug (10) and (11) into (8). Denote by \(n_{[s,t]}\) the number of pairs \((\tau,\tau-1)\) with \(s+1\leq\tau\leq t-1\) where (9) fails to hold. Let \(\Delta_{[s,e]}:=\sum_{r=s+1}^{e}\|\theta_{r}-\theta_{r-1}\|_{F}\) be the true model partial-path variation. Then (8) can be bounded as
\[\left\|\prod_{\tau\in[t-1:s+1]}\left(\widehat{A}_{\tau}+\widehat{B}_{\tau}\widehat{K}_{\tau-1}\right)\right\| \leq C_{H}\cdot\bar{H}^{n_{[s,t]}}\cdot e^{\frac{\beta\widehat{\Delta}_{[s+1,t-1]}}{2}}\cdot\bar{L}^{n_{[s,t]}}\cdot\rho_{L}^{t-s-n_{[s,t]}-1}\] \[\leq C_{H}\left(\frac{\bar{L}\bar{H}}{\rho_{L}}\right)^{\frac{\widehat{\Delta}_{[s,t-1]}}{\epsilon}}e^{\frac{\beta\widehat{\Delta}_{[s,t-1]}}{2}}\cdot\rho_{L}^{t-s-1}\] \[\leq C_{H}\left(\frac{\bar{L}\bar{H}}{\rho_{L}}\right)^{\frac{\bar{n}\left(\mathsf{dia}(\Theta)+2\kappa+\Delta_{[s,t-1]}\right)}{\epsilon}}e^{\frac{\beta\bar{n}\left(\mathsf{dia}(\Theta)+2\kappa+\Delta_{[s,t-1]}\right)}{2}}\cdot\rho_{L}^{t-s-1}\] \[=:c\cdot c_{2}^{\Delta_{[s,t-1]}}\rho_{L}^{t-s},\]
for constants \(c,c_{2}\) and \(\bar{n}:=n(n+m)\) for the dimension of the parameter space for \(A_{t},B_{t}\). In the second inequality, we used the observation that \(n_{[s,t]}\leq\frac{\widehat{\Delta}_{[s,t-1]}}{\epsilon}\) and in the last inequality we used Lemma 1. Combined with (6) and Assumption 1, this proves the desired bound.
An immediate consequence of Theorem 1 is that when the model variation in (1) is bounded or sublinear, Algorithm 1 guarantees BIBO stability. This is summarized below.
**Corollary 1** (Bounded variation): _Suppose (1) has model variation \(\Delta_{[0,t]}\leq M\) for a constant \(M\). Then,_
\[\sup_{t}\|x_{t}\|\leq\frac{c_{1}\cdot c_{2}^{M}}{1-\rho_{L}}.\]
**Corollary 2** (Unbounded but sublinear variation): _Let \(\alpha\in(0,1)\) and \(t\in\mathbb{N}_{+}\). Suppose (1) is such that for each \(k\leq t\), \(\Delta_{[k,k+1]}\leq\delta_{t}:=1/t^{(1-\alpha)}\), implying a total model variation \(\Delta_{[0,t]}=\mathcal{O}(t^{\alpha})\). Then for large enough \(t\), \(\rho_{L}c_{2}^{\delta_{t}}\leq\frac{1+\rho_{L}}{2}\), and therefore_
\[\|x_{k}\|\leq c_{1}\sum_{i=0}^{k}\left(\rho_{L}c_{2}^{\delta_{t}}\right)^{i} \leq\frac{2c_{1}}{1-\rho_{L}}.\]
Corollary 1 can be useful for scenarios where the mode of operation of the system changes infrequently and for systems such that \(\theta(t)\to\theta^{*}\) as \(t\to\infty\)[44]. As an example, consider power systems where a prescribed set of lines can potentially become disconnected from the grid and thus change the grid topology. Corollary 2 applies to slowly drifting systems [45].
### _Efficient implementation of CBC_
In general, implementation of the functional Steiner point of the work function may be computationally inefficient. However, by taking advantage of the LTV structure, we are able to design an efficient implementation in our setting. The key observation here is that for each \(t\), \(\mathcal{P}_{t}\) (Algorithm 1, line 3) can be described by the intersection of half-spaces because the ambient parameter space \(\Theta\) is assumed to be a polytope and the observed online transition data from \(x_{t},u_{t}\) to \(x_{t+1}\) specifies two half-space constraints per state coordinate at each time step due to linearity of (1). Our approach to approximate the functional Steiner point for chasing the consistent model sets is inspired by [33], where second-order cone programs (SOCPs) are used to approximate the (nested set) Steiner point of the sublevel set of the work functions for chasing half-spaces.
Denote \(\{(a_{i},b_{i})\}_{i=1}^{p_{t}}\) as the collection of \(p_{t}\) half-space constraints describing \(\mathcal{P}_{t}\), i.e., \(a_{i}^{\top}\theta\leq b_{i}\). To approximate the integral for the functional Steiner point (2) of \(\omega_{t}\), we sample \(N\) number of random directions \(v\in\mathbb{S}^{n-1}\), evaluate the Fenchel conjugate of the work function \(\omega_{t}^{*}\) at each \(v\) with an SOCP, and take the empirical average. Finally we project the estimated functional Steiner point back to the set of consistent model \(\mathcal{P}_{t}\cap\Theta\). Even though the analytical functional Steiner point (2) is guaranteed to be a member of the consistent model set, the projection step is necessary because we are integrating numerically, which may result in an approximation that ends up outside of the set. We summarize this procedure in Algorithm 2. Specifically, given a direction \(v\in\mathbb{S}^{n-1}\), the Fenchel conjugate of the work function at time step \(t\) is
\[\omega_{t}^{*}(v) =\inf_{x\in\mathbb{R}^{n}}\omega_{t}(x)-\langle x,v\rangle =\min_{\begin{subarray}{c}x\in\mathbb{R}^{n}\\ q_{s}\in\mathcal{K}_{s},\,s\in[t]\end{subarray}}\sum_{s=1}^{t}\|q_{s}-q_{s-1}\|+\|x-q_{t}\|-\langle x,v\rangle\,.\]
This can be equivalently expressed as the following SOCP with decision variables \(x,q_{1},\ldots,q_{t},\lambda,\lambda_{1},\ldots,\lambda_{t}\):
\[\min_{\begin{subarray}{c}x,q_{1},\ldots,q_{t}\\ \lambda,\lambda_{1},\ldots,\lambda_{t}\end{subarray}} \lambda+\sum_{s=1}^{t}\lambda_{s}-\langle v,x\rangle\] (12) s.t. \[\|q_{s}-q_{s-1}\|\leq\ \lambda_{s},\quad\text{for }s\in[t]\] \[\|x-q_{t}\|\leq\ \lambda\] \[a_{i}^{\top}q_{s}\leq b_{i},\quad\text{for }i\in[p_{s}],s\in[t]\]
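The SOCP (12) can be handed to an off-the-shelf conic solver essentially verbatim. Below is a minimal sketch using `cvxpy` (our illustration; the assumed prior point `q0` and the variable grouping are our choices, and Algorithm 2's outer loop, which samples \(N\) directions \(v\in\mathbb{S}^{n-1}\) and aggregates the conjugate evaluations into the Steiner-point estimate, is omitted).

```python
import cvxpy as cp

def work_fn_conjugate(v, q0, halfspaces):
    """Evaluate the Fenchel conjugate omega_t^*(v) via the SOCP (12).

    `halfspaces[s]` is a pair (A_s, b_s) of numpy arrays giving the
    rows a_i^T q <= b_i of the consistent set at step s+1; `v` should
    be (near) unit norm for the problem to be bounded below.
    Returns the optimal value and the minimizing x.
    """
    d, t = q0.size, len(halfspaces)
    x = cp.Variable(d)
    q = [cp.Variable(d) for _ in range(t)]
    lam, lams = cp.Variable(), cp.Variable(t)
    cons = [cp.norm(q[0] - q0) <= lams[0]]
    cons += [cp.norm(q[s] - q[s - 1]) <= lams[s] for s in range(1, t)]
    cons += [cp.norm(x - q[t - 1]) <= lam]
    cons += [A_s @ q[s] <= b_s for s, (A_s, b_s) in enumerate(halfspaces)]
    prob = cp.Problem(cp.Minimize(lam + cp.sum(lams) - v @ x), cons)
    prob.solve()
    return prob.value, x.value
```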
Another potential implementation challenge is that the number of constraints in the SOCP (12) grows linearly with time due to the construction of the work function (4). This is a common drawback of online control methods based on CBC and NCBC techniques and can be overcome through truncation or over-approximation of the work functions in practice. Additionally, if the LTV system is periodic with a known period, then we can leverage Algorithm 1 during the initial data collection phase. Once representative (persistently exciting) data is available, one could employ methods like [3] to generate a stabilizing controller for the unknown LTV system. In Section IV, we show that data collection via Algorithm 1 results in a significantly smaller state norm than random noise injection when the system is unstable.
## IV Simulation
In this section, we demonstrate Algorithm 1 in two LTV systems. Both of the systems we consider are open-loop unstable, thus the algorithms must work to stabilize them. We use the same algorithm parameters for both, with \(\Theta=[-2,3]^{2}\), LQR cost matrices \(Q=I\) and \(R=1\).
### _Example 1: Markov linear jump system_
We consider the following Markov linear jump system model from [46], with
\[A_{1} =\left[\begin{array}{cc}1.5&1\\ 0&0.5\end{array}\right],\quad A_{2}=\left[\begin{array}{cc}0.6&0\\ 0.1&1.2\end{array}\right],\quad B_{1}=\left[\begin{array}{c}0\\ 1\end{array}\right],\] \[B_{2} =\left[\begin{array}{c}1\\ 1\end{array}\right],\quad\Pi=\left[\begin{array}{cc}0.8&0.2\\ 0.1&0.9\end{array}\right]\]
where \(\Pi\) is the transition probability matrix from \(\theta_{1}\) to \(\theta_{2}\) and vice versa. We inject disturbances drawn uniformly at random from \(\{-3\mathds{1},3\mathds{1}\}\), where \(\mathds{1}\) is the all-ones vector. We set the disturbances to zero for the last 8 time steps to make the stability of the closed loop explicit. We also simulate certainty-equivalent control based on online least squares (OLS+CE) as the baseline for comparison. The result is shown in Figure 1: the naive online least squares algorithm not only fails to stabilize the system but also incurs a significantly larger state norm than the open-loop system without any control input. In Figure 2 we run the same experiment with a different random seed and observe that, despite being stabilizing, OLS+CE still incurs a much larger state norm than Algorithm 1.
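For reference, the following short script (ours) simulates the jump system above in open loop, with the mode chain drawn from \(\Pi\) and the \(\pm 3\mathds{1}\) disturbances zeroed over the final 8 steps; any controller can be substituted for the zero input.

```python
import numpy as np

rng = np.random.default_rng(0)
A = [np.array([[1.5, 1.0], [0.0, 0.5]]), np.array([[0.6, 0.0], [0.1, 1.2]])]
B = [np.array([[0.0], [1.0]]), np.array([[1.0], [1.0]])]
Pi = np.array([[0.8, 0.2], [0.1, 0.9]])  # mode transition probabilities

T, mode, x, norms = 50, 0, np.zeros(2), []
for t in range(T):
    w = rng.choice([-3.0, 3.0]) * np.ones(2) if t < T - 8 else np.zeros(2)
    u = np.zeros(1)                    # open loop; replace with a controller
    x = A[mode] @ x + B[mode] @ u + w
    norms.append(np.linalg.norm(x))
    mode = rng.choice(2, p=Pi[mode])   # Markov switch between the two modes
print(norms[-1])  # the open-loop state norm grows for this unstable system
```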
### _Example 2: LTV system_
Our second example highlights that Algorithm 1 is a useful data-collection alternative to open-loop random noise injection. We consider the LTV system from [3, 27], with
\[A(k) =\left[\begin{array}{cc}1.5&0.0025k\\ -0.1\cos(0.3k)&1+0.05^{3/2}\sin(0.5k)\sqrt{k}\end{array}\right],\] \[B(k) =0.05\left[\begin{array}{c}1\\ \frac{0.1k+2}{0.1k+3}\end{array}\right].\]
where we modified \(A(1,1)\) from \(1\) to \(1.5\) to increase the instability of the open loop at the beginning, thus making it more challenging to stabilize. We consider no disturbances here and set \(W=0\) in the algorithm. In particular, we compare the proposed algorithm against randomly generated bounded inputs from \(\mathsf{UNIF}[-1,1]\). We also modify the control inputs from Algorithm 1 to be \(u_{t}=\widehat{K}_{t-1}x_{t}+\eta_{t}\cdot\mathds{1}\) with \(\eta_{t}\sim\mathsf{UNIF}[-1,1]\) so that we can collect rich data in the closed loop. This is motivated by the growing body of data-driven control methods such as [3, 26, 27] that leverage sufficiently rich offline data to perform control design for unknown LTV systems. However, most of these works directly inject random inputs for data collection. It is evident in Figure 3 that when the open-loop system is unstable it may be undesirable to run the system without any feedback control. Therefore, Algorithm 1 complements existing data-driven methods by allowing safe data collection with significantly better transient behavior.
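A sketch (ours) of the exploration-modified input used in this example, \(u_{t}=\widehat{K}_{t-1}x_{t}+\eta_{t}\cdot\mathds{1}\) with \(\eta_{t}\sim\mathsf{UNIF}[-1,1]\); `K_hat` stands for the gain produced by Algorithm 1 at the previous step, which we do not reimplement here.

```python
import numpy as np

rng = np.random.default_rng(1)

def exploratory_input(K_hat, x):
    """Certainty-equivalent feedback plus a bounded random probe, so the
    closed-loop data stays rich while the transient remains controlled."""
    eta = rng.uniform(-1.0, 1.0)
    return K_hat @ x + eta * np.ones(K_hat.shape[0])
```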
## V Concluding remarks
In this paper, we propose a model-based approach for stabilizing an unknown LTV system under arbitrary non-stochastic disturbances, in the sense of bounded input bounded output, under the assumption of infrequently changing or slowly drifting dynamics. Our approach uses ideas from convex body chasing (CBC), which is an online problem where an agent must choose a sequence of points within sequentially presented convex sets with the aim of minimizing the sum of distances between the chosen points. The algorithm requires minimal tuning and achieves significantly better performance than naive online least squares based control. Future work includes sharpening the stability analysis to go beyond the BIBO guarantee in this work, which will require controlling the difference between the estimated disturbances and true disturbances. Another direction is to extend the current results to the networked case, similar to [23].

Fig. 1: Simulation result for the Markov linear jump system. Top plot shows the state norm trajectories of the proposed algorithm, certainty-equivalent control based on online least squares, and the open loop. Middle plot shows the norm of the selected hypothesis model via Algorithm 2. Bottom plot shows the true model modes.

Fig. 2: Same as Figure 1 but with a different random seed.
|
2310.17454 | A Marstrand projection theorem for lines | Fix integers $1<k<n$. For $V\in G(k,n)$, let $P_V: \mathbb{R}^n\rightarrow V$
be the orthogonal projection. For $V\in G(k,n)$, define the map \[ \pi_V:
A(1,n)\rightarrow A(1,V)\bigsqcup V. \] \[ \ell\mapsto P_V(\ell). \]
For any $0<a<\text{dim}(A(1,n))$, we find the optimal number $s(a)$ such that
the following is true. For any Borel set $\boldsymbol{A} \subset A(1,n)$ with
$\text{dim}(\boldsymbol{A})=a$, we have \[
\text{dim}(\pi_V(\boldsymbol{A}))=s(a), \text{for a.e. } V\in G(k,n). \] When
$A(1,n)$ is replaced by $A(0,n)=\mathbb{R}^n$, it is the classical Marstrand
projection theorem, for which $s(a)=\min\{k,a\}$. A new ingredient of the paper
is the Fourier transform on affine Grassmannian. | Shengwen Gan | 2023-10-26T15:01:48Z | http://arxiv.org/abs/2310.17454v1 | # A Marstrand Projection Theorem for Lines
###### Abstract.
Fix integers \(1<k<n\). For \(V\in G(k,n)\), let \(P_{V}:\mathbb{R}^{n}\to V\) be the orthogonal projection. For \(V\in G(k,n)\), define the map
\[\pi_{V}:A(1,n)\to A(1,V)\bigsqcup V,\]

\[\ell\mapsto P_{V}(\ell).\]
For any \(0<a<\dim(A(1,n))\), we find the optimal number \(s(a)\) such that the following is true. For any Borel set \(\mathbf{A}\subset A(1,n)\) with \(\dim(\mathbf{A})=a\), we have
\[\dim(\pi_{V}(\mathbf{A}))=s(a),\text{ for a.e. }V\in G(k,n).\]
When \(A(1,n)\) is replaced by \(A(0,n)=\mathbb{R}^{n}\), it is the classical Marstrand projection theorem, for which \(s(a)=\min\{k,a\}\). A new ingredient of the paper is the Fourier transform on affine Grassmannian.
Key words and phrases:Marstrand projection theorem, exceptional set estimate, Fourier analysis, Brascamp-Lieb inequality 2020 Mathematics Subject Classification: 28A75, 28A78
## 1. Introduction
For \(k<n\), let \(G(k,n)\) be the set of \(k\)-dimensional subspaces in \(\mathbb{R}^{n}\), and \(A(k,n)\) be the set of \(k\)-dimensional affine spaces in \(\mathbb{R}^{n}\). For \(V\in G(k,n)\) and \(l<k\), let \(G(l,V)\) be the set of \(l\)-dimensional subspaces in \(V\), and \(A(l,V)\) be the set of \(l\)-dimensional affine spaces in \(V\). For simplicity, we refer to an \(l\)-dimensional affine space as an \(l\)-plane. Fix integers \(1<k<n\). For \(V\in G(k,n)\), let
\[P_{V}:\mathbb{R}^{n}\to V\]
be the orthogonal projection onto \(V\).
Let \(V\in G(k,n)\). Note that for \(L\in A(1,n)\), the image \(P_{V}(L)\) is either a line or a point in \(V\). We can define
\[\pi_{V}:A(1,n)\to A(1,V)\bigsqcup V\]
\[L\mapsto P_{V}(L).\]
Marstrand [8] proved the following result. Let \(A\subset\mathbb{R}^{n}\) with \(\dim(A)=a\). Then
\[\dim(P_{V}(A))=\min\{a,k\},\text{ for a.e. }V\in G(k,n). \tag{1}\]
Naturally, one can consider the following Marstrand-type projection problem for \(A(1,n)\).
For \(0<a<2(n-1)=\dim(A(1,n))\), what is the optimal number \(s(a)\) such that the following is true? Let \(\mathbf{A}\subset A(1,n)\) with \(\dim(\mathbf{A})=a\). (Since \(A(1,n)\) is a Riemannian manifold, \(\dim(\mathbf{A})\) is naturally defined.) Then we have
\[\dim(\pi_{V}(\mathbf{A}))=s(a),\text{ for a.e. }V\in G(k,n). \tag{2}\]
It is not hard to see that
\[s(a)\leq\min\{a,\dim(A(1,k))\}=\min\{a,2(k-1)\}.\]
At first glance, one may guess that the optimal number \(s(a)\) is \(\min\{a,2(k-1)\}\), which has the same form as in (1). However, this is not true, as the following example shows.
Consider \(n=3,k=2\). Let \(\mathbf{A}=G(1,3)\), the set of lines passing through the origin. We see that for any \(V\in G(2,3)\), \(\pi_{V}(\mathbf{A})=G(1,V)\bigsqcup\{0\}\), which is a one-dimensional set of lines plus a point. We get
\[\dim(\pi_{V}(\mathbf{A}))=1<2=\min\{\dim(\mathbf{A}),2(k-1)\}.\]
We see that the problem becomes quite different when we consider the projection of lines, compared with the projection of points. One reason is that distinct lines can overlap while distinct points cannot. Because of this overlapping phenomenon, we are able to stack lines in \(\mathbb{R}^{n}\) so that their projections to subspaces have a rather different structure. This makes the line version of the Marstrand projection problem harder than the point version.
**Definition 1**.: _Fix \(1<k<n\). For any \(0<a<\dim(A(1,n))\), define_
\[S(a):=\inf_{\mathbf{A}\subset A(1,n),\dim(\mathbf{A})=a}\operatorname*{ess\, sup}_{V\in G(k,n)}\dim(\pi_{V}(\mathbf{A})). \tag{3}\]
_Here \(\operatorname*{ess\,sup}\) is with respect to the unique probability measure on \(G(k,n)\) which is invariant under rotation. In our paper, we always assume \(\mathbf{A}\) to be a Borel set to avoid some measurability issue. We also remark that \(S(a)=S_{k,n}(a)\) should also depend on \(k,n\), but we just drop them from the notation for simplicity._
We state our main theorem.
**Theorem 2**.: _We have the following exact values of \(S(a)\)._

\[S(a)=a,\quad a\in[0,k-1],\tag{4}\]
\[S(a)=k-1,\quad a\in[k-1,n-1],\tag{5}\]
\[S(a)=a-(n-k),\quad a\in[n-1,n+k-2],\tag{6}\]
\[S(a)=2(k-1),\quad a\in[n+k-2,2(n-1)].\tag{7}\]
**Remark 3**.: We can also write \(S(a)\) as
\[S(a)=\min\{a,k-1\},\quad a\in[0,n-1],\tag{8}\]
\[S(a)=\min\{a-(n-k),2(k-1)\},\quad a\in[n-1,2(n-1)].\tag{9}\]
We will prove Theorem 2 by showing the following two propositions.
**Proposition 4**.: _We have the following upper bounds of \(S(a)\)._
\[S(a) \leq\min\{a,k-1\},\quad a\in[0,n-1],\] \[S(a) \leq\min\{a-(n-k),2(k-1)\},\quad a\in[n-1,2(n-1)]. \tag{10}\]
**Proposition 5**.: _We have the following lower bounds of \(S(a)\)._
\[S(a)\geq\min\{a,k-1\},\quad a\in[0,n-1],\tag{11}\]
\[S(a)\geq\min\{a-(n-k),2(k-1)\},\quad a\in[n-1,2(n-1)].\tag{12}\]
The proof of Proposition 4 is short and provides good examples. We give it here.
Proof of Proposition 4.: When \(a\in[0,n-1]\), we choose \(\mathbf{A}\) to be a subset of \(G(1,n)\) with dimension \(a\). We see that for any \(V\in G(k,n)\), \(\pi_{V}(\mathbf{A})\subset G(1,V)\bigsqcup\{0\}\). Therefore,
\[\dim(\pi_{V}(\mathbf{A}))\leq\min\{a,k-1\}\]
for any \(V\in G(k,n)\), and hence
\[S(a)\leq\min\{a,k-1\}.\]
When \(a\in[n-1,2(n-1)]\), we write \(a=n-1+\beta\). Choose \(A\subset\mathbb{R}^{n-1}\) (here \(\mathbb{R}^{n-1}\) is spanned by the first \(n-1\) coordinates) to be a \(\beta\)-dimensional set. For each \(x\in A\), let \(\mathbf{A}_{x}\) be the set of lines passing through \(x\) and transverse to \(\mathbb{R}^{n-1}\). More precisely,
\[\mathbf{A}_{x}=\{\ell\in A(1,n):x\in\ell,\ell\not\subset\mathbb{R}^{n-1}\}.\]
Choose \(\mathbf{A}=\bigsqcup_{x\in A}\mathbf{A}_{x}\). Since \(\{\mathbf{A}_{x}\}_{x\in A}\) are disjoint and \(\mathbf{A}\) has a product structure, we have
\[\dim(\mathbf{A})=\dim A+\dim(\mathbf{A}_{x})=\beta+n-1=a.\]
Note that if \(V\in G(k,n)\) does not contain the \(x_{n}\)-axis, then \(\pi_{V}(\mathbf{A}_{x})=\big{(}P_{V}(x)+G(1,V)\big{)}\bigsqcup\{P_{V}(x)\}\). If \(V\in G(k,n)\) contains the \(x_{n}\)-axis, then \(\pi_{V}(\mathbf{A}_{x})=P_{V}(x)+G(1,V)\). As a result, \(\pi_{V}(\mathbf{A})=\bigcup_{x\in A}\pi_{V}(\mathbf{A}_{x})\) consists of an at most \(\beta\)-dimensional family of translated copies of \(G(1,V)\), plus an at most \(\beta\)-dimensional set of points. We have
\[\dim(\pi_{V}(\mathbf{A}))\leq\beta+k-1=a-(n-k).\]
Of course, we also have \(\dim(\pi_{V}(\mathbf{A}))\leq\dim(A(1,V)\bigsqcup V)=2(k-1)\).
For Proposition 5, we will be able to prove a stronger result known as the exceptional set estimate.
Recall that \(P_{V}:\mathbb{R}^{n}\to V\) is the orthogonal projection for a given \(V\in G(k,n)\). For a set \(A\subset\mathbb{R}^{n}\) with \(\dim A=a\) and a parameter \(s\) satisfying \(0<s<\min\{a,k\}\), we consider the set
\[\{V\in G(k,n):\dim(P_{V}(A))<s\}, \tag{13}\]
which is known as the exceptional set. There are two types of estimates for the exceptional set:

\[\dim\left(\{V\in G(k,n):\dim(P_{V}(A))<s\}\right)\leq k(n-k)+s-k,\tag{14}\]

\[\dim\left(\{V\in G(k,n):\dim(P_{V}(A))<s\}\right)\leq k(n-k)+s-a.\tag{15}\]

We call the first one the Kaufman-type estimate, and the second one the Falconer-type estimate. See the references [7], [2], [10] and [9]. It is not hard to see that, by letting \(s\to\min\{a,k\}\), either (14) or (15) implies (1).
By the same idea, we can deduce Proposition 5 from the following two exceptional set estimates.
**Theorem 6**.: _Fix a number \(0<\mu<1/100\). Let \(A_{\mu}\) be a ball of radius \(\mu\) in \(A_{loc}(1,n)\) and \(G_{\mu}\) be a ball of radius \(\mu\) in \(G(k,n)\), so that for any \(\ell\in A_{\mu},V\in G_{\mu}\), we have_
\[\angle(\ell,V^{\perp})>\mu. \tag{16}\]
_For \(\mathbf{A}\subset A_{\mu}\) with \(\dim(\mathbf{A})=a\), and \(0<s<\min\{a,2(k-1)\}\), define the exceptional set_
\[E_{s}(\mathbf{A}):=\{V\in G_{\mu}:\dim(\pi_{V}(\mathbf{A}))<s\}. \tag{17}\]
_We have the following estimates:_
\[\dim(E_{s}(\mathbf{A}))\leq k(n-k)+s-(k-1). \tag{18}\]
\[\dim(E_{s}(\mathbf{A}))\leq\max\{0,k(n-k)+s-a+(n-k)\}. \tag{19}\]
**Remark 7**.: For some technical reason, here we use \(A_{\mu},G_{\mu}\) (instead of \(A(1,n),G(k,n)\)) to ensure that for any \(\ell\in A_{\mu}\) and \(V\in G_{\mu}\), the image \(\pi_{V}(\ell)\) is a line.
The proof of (18) relies on a tube-slab incidence estimate. The proof of (19) relies on the Fourier analysis in \(A(1,n)\), which is the main novelty of this paper.
Another point is worth noting. To prove (1), one needs just one of (14) or (15). However, to prove Proposition 5, we need both (18) and (19): indeed, (18) implies (11), while (19) implies (12).
Peres and Schlag [10] explored projection problems within a broader context. For an introduction to this method, we recommend [9, Chapter 18]. Peres and Schlag introduced the concept of the "transversality condition" for a family of general projection maps. They also established Marstrand-type estimates and exceptional set estimates when this condition is met. One might ponder whether we can employ the approach of Peres and Schlag, involving the definition of general projection maps and the verification of the transversality condition, for our specific problem. However, in our case, it does not yield the precise estimate we require.
In Section 2, we introduce the notation. In Section 3, we show that Theorem 6 implies Proposition 5. In Section 4, we prove the Kaufman-type estimate (18). In Section 5, we introduce the Fourier transform in \(A(1,n)\) and then prove the Falconer-type estimate (19).
## 2. Preliminary
We discuss the properties of affine Grassmannians and the metric on it. Most of the content in this section is from [5, Section 1.2]. See also in [4, Section 4.1].
### Notation and some useful lemmas
We will frequently use the following definitions.
**Definition 8**.: _For a number \(\delta>0\) and any set \(X\) (in a metric space), we use \(|X|_{\delta}\) to denote the maximal number of \(\delta\)-separated points in \(X\)._
**Definition 9**.: _Let \(\delta,s>0\). We say \(A\subset\mathbb{R}^{n}\) is a \((\delta,s,C)\)-set if it is \(\delta\)-separated and satisfies the following estimate:_
\[\#(A\cap B_{r}(x))\leq C(r/\delta)^{s}. \tag{20}\]
_for any \(x\in\mathbb{R}^{n}\) and \(1\geq r\geq\delta\). In this paper, the constant \(C\) is not important, so we will just say \(A\) is a \((\delta,s)\)-set if_
\[\#(A\cap B_{r}(x))\lesssim(r/\delta)^{s}\]
_for any \(x\in\mathbb{R}^{n}\) and \(1\geq r\geq\delta\)._
**Remark 10**.: We remark that we make "\(\delta\)-separated" as a part of the definition for a \((\delta,s)\)-set.
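To make Definition 9 concrete, here is a small numerical check (our illustration; we center the test balls at the points of \(A\) themselves, which suffices up to a constant) of the inequality (20) over a grid of radii.

```python
import numpy as np

def is_delta_s_set(points, delta, s, C):
    """Sampled check of #(A \cap B_r(x)) <= C (r/delta)^s for x in A
    and a grid of radii r in [delta, 1]; `points` has shape (N, n)."""
    dists = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    for r in np.geomspace(delta, 1.0, 20):
        counts = (dists <= r).sum(axis=1)  # points of A in B_r(x), x in A
        if counts.max() > C * (r / delta) ** s:
            return False
    return True

# ~delta^{-1} equally spaced points on a segment form a (delta, 1, 3)-set:
delta = 0.01
pts = np.arange(0.0, 1.0, delta)[:, None]
print(is_delta_s_set(pts, delta, s=1.0, C=3.0))  # True
```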
**Lemma 11**.: _Let \(\delta,s>0\) and let \(B\subset\mathbb{R}^{n}\) be any set with \(\mathcal{H}^{s}_{\infty}(B)=:\kappa>0\). Then, there exists a \((\delta,s)\)-set \(P\subset B\) with \(\#P\gtrsim\kappa\delta^{-s}\)._
Proof.: See [3, Lemma 3.13].
**Lemma 12**.: _Fix \(a>0\). Let \(\nu\) be a probability measure satisfying \(\nu(B_{r})\lesssim r^{a}\) for any \(B_{r}\) being a ball of radius \(r\). If \(A\) is a set satisfying \(\nu(A)\geq\kappa\) (\(\kappa>0\)), then for any \(\delta>0\) there exists a subset \(F\subset A\) such that \(F\) is a \((\delta,a)\)-set and \(\#F\gtrsim\kappa\delta^{-a}\)._
Proof.: By the previous lemma, we just need to show \(\mathcal{H}^{a}_{\infty}(A)\gtrsim\kappa\). We check this directly from the definition. For any covering \(\{B\}\) of \(A\) by balls, we have
\[\kappa\leq\sum_{B}\nu(B)\lesssim\sum_{B}r(B)^{a}.\]
Ranging over all coverings of \(A\) and taking the infimum, we get
\[\kappa\lesssim\mathcal{H}^{a}_{\infty}(A).\]
**Lemma 13**.: _Suppose \(X\subset[0,1]^{2}\) with \(\dim X<s\). Then for any \(\varepsilon>0\), there exist dyadic squares \(\mathcal{C}_{2^{-k}}\subset\mathcal{D}_{2^{-k}}\)\((k>0)\) so that_
1. \(X\subset\bigcup_{k>0}\bigcup_{D\in\mathcal{C}_{2^{-k}}}D,\)__
2. \(\sum_{k>0}\sum_{D\in\mathcal{C}_{2^{-k}}}r(D)^{s}\leq\varepsilon\)_,_
3. \(\mathcal{C}_{2^{-k}}\) _satisfies the_ \(s\)_-dimensional condition: For_ \(l<k\) _and any_ \(D\in\mathcal{D}_{2^{-l}}\)_, we have_ \(\#\{D^{\prime}\in\mathcal{C}_{2^{-k}}:D^{\prime}\subset D\}\leq 2^{(k-l)s}\)_._
Proof.: See [6, Lemma 2].
### Metric on affine Grassmannian
For every \(k\)-plane \(V\in A(k,n)\), we can uniquely write it as
\[V=\text{dir}(V)+x_{V},\]
where \(\text{dir}(V)\in G(k,n)\) and \(x_{V}\in V^{\perp}\). Here \(\text{dir}(V)\) is the direction of \(V\); indeed, \(\text{dir}(V)=\text{dir}(V^{\prime})\Leftrightarrow V\parallel V^{\prime}\).
In this paper, we use \(A_{\text{loc}}(k,n)\) to denote the set of \(k\)-planes \(V\) such that \(x_{V}\in B^{n}(0,1/2)\). (\(B^{n}(0,1/2)\) is the ball of radius \(1/2\) centered at the origin in \(\mathbb{R}^{n}\).)
\[A_{\text{loc}}(k,n)=\{V:V\text{ is a }k\text{-dimensional plane},\,x_{V}\in B^{n}(0,1/2)\}. \tag{21}\]
Later in our proof, instead of considering \(A(k,n)\), we only care about those \(V\) lying near the origin. This is through a standard localization argument.
Next, we discuss the metrics on \(G(k,n)\) and \(A_{\text{loc}}(k,n)\). For \(V_{1},V_{2}\in G(k,n)\), we define
\[d(V_{1},V_{2})=\|P_{V_{1}}-P_{V_{2}}\|.\]

Here, \(P_{V_{1}}:\mathbb{R}^{n}\to V_{1}\) is the orthogonal projection, viewed as a linear map on \(\mathbb{R}^{n}\), and \(\|\cdot\|\) is the operator norm. We have another characterization of this metric. Define \(\rho(V_{1},V_{2})\) to be the smallest number \(\rho\) such that \(B^{n}(0,1)\cap V_{1}\subset N_{\rho}(V_{2})\). We have the comparability of \(d(\cdot,\cdot)\) and \(\rho(\cdot,\cdot)\).
**Lemma 14**.: _There exists a constant \(C>0\) (depending on \(k,n\)) such that_
\[\rho(V_{1},V_{2})\leq d(V_{1},V_{2})\leq C\rho(V_{1},V_{2}).\]
Proof.: Suppose \(B^{n}(0,1)\cap V_{1}\subset N_{\rho}(V_{2})\). Then for any \(v\in\mathbb{R}^{n}\), we have

\[|P_{V_{1}}(v)-P_{V_{2}}(v)|\lesssim\rho|v|,\]

which implies \(d(V_{1},V_{2})\lesssim\rho\). On the other hand, if for any \(|v|\leq 1\) we have

\[|P_{V_{1}}(v)-P_{V_{2}}(v)|\leq d|v|,\]

then \(P_{V_{1}}(v)\in N_{d}(V_{2})\). Letting \(v\) range over \(B^{n}(0,1)\cap V_{1}\) (so that \(P_{V_{1}}(v)=v\)), we get \(B^{n}(0,1)\cap V_{1}\subset N_{d}(V_{2})\), which means \(\rho(V_{1},V_{2})\leq d\).
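Numerically, the metric \(d(V_{1},V_{2})=\|P_{V_{1}}-P_{V_{2}}\|\) is easy to evaluate: representing each subspace by an orthonormal basis \(U\), the projection is \(P_{V}=UU^{\top}\) and \(d\) is the spectral norm of the difference. A short sketch (ours):

```python
import numpy as np

def proj(U):
    """Orthogonal projection onto the column span of U."""
    Q, _ = np.linalg.qr(U)   # orthonormalize the columns
    return Q @ Q.T

def grassmann_dist(U1, U2):
    """d(V1, V2) = operator norm of P_{V1} - P_{V2}."""
    return np.linalg.norm(proj(U1) - proj(U2), ord=2)

# Two 2-planes in R^3 meeting along the x-axis at angle theta:
theta = 0.3
U1 = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])
U2 = np.array([[1.0, 0.0], [0.0, np.cos(theta)], [0.0, np.sin(theta)]])
print(grassmann_dist(U1, U2), np.sin(theta))  # both equal sin(theta)
```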
We can also define the metric on \(A_{\rm loc}(k,n)\) given by
\[d(V,V^{\prime})=d(\operatorname{dir}(V),\operatorname{dir}(V^{\prime}))+|x_{V} -x_{V^{\prime}}|. \tag{22}\]
Here, we still use \(d\) to denote the metric on \(A_{\rm loc}(k,n)\); this will not cause confusion.
Similarly, for \(V,V^{\prime}\in A_{\rm loc}(k,n)\) we can define \(\rho(V,V^{\prime})\) to be the smallest number \(\rho\) such that \(B^{n}(0,1)\cap V\subset N_{\rho}(V^{\prime})\). We also have the following lemma, whose proof we leave to the interested reader.
**Lemma 15**.: _There exists a constant \(C>0\) (depending on \(k,n\)) such that for \(V,V^{\prime}\in A_{loc}(k,n)\),_
\[C^{-1}d(V,V^{\prime})\leq\rho(V,V^{\prime})\leq Cd(V,V^{\prime}).\]
**Definition 16**.: _For \(V\in A(k,n)\) and \(0<r<1\), we define_
\[V_{r}:=N_{r}(V)\cap B^{n}(0,1).\]
_We say that \(V_{r}\) is a \(k\)-dimensional \(r\)-slab._
Actually, \(V_{r}\) is morally a slab of dimensions \(\underbrace{r\times\cdots\times r}_{n-k\text{ times}}\times\underbrace{1\times\cdots\times 1}_{k\text{ times}}\): it has extent \(\sim 1\) in the \(k\) directions of \(V\) and thickness \(r\) in the \(n-k\) normal directions. When \(k\) is already clear, we simply call \(V_{r}\) an \(r\)-slab. If \(W\) is a convex set such that \(C^{-1}W\subset V_{r}\subset CW\), then we also call \(W\) an \(r\)-slab. Here, the constant \(C\) will be a fixed large constant.
**Definition 17**.: _For two \(r\)-slab \(V_{r}\) and \(V^{\prime}_{r}\). We say they are comparable if \(C^{-1}V_{r}\subset V^{\prime}_{r}\subset CV_{r}\). We say they are essentially distinct if they are not comparable._
In this paper, we will also consider balls and \(\delta\)-neighborhoods in \(A_{\rm loc}(k,n)\). Recall that we use \(B_{r}(x)\) to denote the ball in \(\mathbb{R}^{n}\) of radius \(r\) centered at \(x\). To distinguish the ambient space, we will use the letter \(Q\) to denote balls in \(A_{\rm loc}(k,n)\). For \(V\in A_{\rm loc}(k,n)\), we use \(Q_{r}(V)\) to denote the ball in \(A_{\rm loc}(k,n)\) of radius \(r\) centered at \(V\). More precisely,
\[Q_{r}(V_{0}):=\{V\in A_{\rm loc}(k,n):d(V,V_{0})\leq r\}.\]
For a subset \(X\subset A_{\rm loc}(k,n)\), we use the fancy letter \(\mathcal{N}\) to denote the neighborhood in \(A_{\rm loc}(k,n)\):
\[\mathcal{N}_{r}(X):=\{V\in A_{\rm loc}(k,n):d(V,X)\leq r\}.\]
Here, \(d(V,X)=\inf_{V^{\prime}\in X}d(V,V^{\prime})\).
### Hausdorff dimension on metric space
We briefly discuss how to define the Hausdorff dimension for subsets of a metric space. Let \((M,d)\) be a metric space. For \(X\subset M\), we denote the \(s\)-dimensional Hausdorff measure of \(X\) under the metric \(d\) by \(\mathcal{H}^{s}(X;d)\). We see that if \(d^{\prime}\) is another metric on \(M\) such that \(d(\cdot,\cdot)\sim d^{\prime}(\cdot,\cdot)\), then \(\mathcal{H}^{s}(X;d)\sim\mathcal{H}^{s}(X;d^{\prime})\). It therefore makes sense to define the Hausdorff dimension of \(X\), which is independent of the choice of comparable metrics:
\[\dim X:=\sup\{s:\mathcal{H}^{s}(X;d)>0\}.\]
### \(\delta\)-discretized version: projections of lines onto \(k\)-planes
We first talk about the projections of points to \(k\)-planes. Let \(x\in B^{n}(0,1/2)\) and \(V\) be a \(k\)-plane. we see that the orthogonal projection of \(x\) onto \(V\) is \(P_{V}(x)\), and the fiber of \(P_{V}\) at \(P_{V}(x)\) is the \((n-k)\)-plane \(P_{V}^{-1}(P_{V}(x))\). The \(\delta\)-discretized version is to replace \(x\) by a \(\delta\)-ball \(B_{\delta}\) centered at \(x\), and replace \(P_{V}^{-1}(P_{V}(x))\) by an \((n-k)\)-dimensional \(\delta\)-slab \(T=\left(P_{V}^{-1}(P_{V}(x))\right)_{\delta}\). (Here, see Definition 16.) We see that \(T\) is orthogonal to \(V\), and \(T\) contains \(B_{\delta}\).
We want to generalize this \(\delta\)-discretized notion to projections of lines. We meet a new issue concerning _transversality_: the projection of a line onto a \(k\)-plane may be a point rather than a line. To handle this degenerate case, we need to restrict ourselves to subsets \(A_{\mu}\subset A_{\mathrm{loc}}(1,n)\) and \(G_{\mu}\subset G(k,n)\). Here, \(0<\mu<1/100\) is a parameter. \(A_{\mu}\) is a \(\mu\)-ball in \(A_{\mathrm{loc}}(1,n)\) and \(G_{\mu}\) is a \(\mu\)-ball in \(G(k,n)\), so that they satisfy the following quantitative transversality: For any \(\ell\in A_{\mu}\) and \(V\in G_{\mu}\),
\[\angle(\ell,V^{\perp})>\mu.\]
(See also (16) in Theorem 6.) Now for any \(\ell\in A_{\mu}\) and \(V\in G_{\mu}\), we see that \(\ell\) is quantitatively transverse to \(V^{\perp}\); hence \(P_{V}(\ell)\) is a line in \(V\), and there exists an \((n-k+1)\)-plane of the form \(W=P_{V}(\ell)\oplus W^{\prime}\) where \(W^{\prime}\) is an \((n-k)\)-subspace orthogonal to \(V\). We say that \(W\) is **orthogonal to \(V\) at \(P_{V}(\ell)\)**. More generally, we have the following definition.
**Definition 18**.: _Let \(V\in G(k,n)\), \(\ell^{\prime}\) be a line in \(V\). If an \((n-k+1)\)-plane \(W\) is of form \(W=\ell^{\prime}\oplus W^{\prime}\) where \(W^{\prime}\) is an \((n-k)\)-subspace orthogonal to \(V\), then we say that \(W\) is orthogonal to \(V\) at \(\ell^{\prime}\)._
**Remark 19**.: Here are two ways to understand this notion in terms of preimage of \(P_{V}\), or preimage of \(\pi_{V}\). On the one hand, we see that
\[W=P_{V}^{-1}(\ell^{\prime}).\]
On the other hand, we can build a subset \(\mathbf{W}\) of \(A(1,n)\) from \(W\) as
\[\mathbf{W}:=\{\ell\in A(1,n):\ell\subset W,\ell\not\perp V\}.\]
Then we also have
\[\mathbf{W}=\pi_{V}^{-1}(\ell^{\prime})\cap A(1,n).\]
Next, we discuss the geometry of the \(\delta\)-discretized version. We will constantly use the following heuristic.
**Heuristic.** Fix \(0<\delta<1\). Given \(W\in A_{\mathrm{loc}}(k,n)\) (see (21)), we have two \(\delta\)-thickened versions of \(W\). One is \(W_{\delta}\), which is a \(k\)-dimensional \(\delta\)-slab in \(B^{n}(0,1)\)
The other is \(Q_{\delta}(W)\), which is a \(\delta\)-ball in \(A_{\mathrm{loc}}(k,n)\). By Lemma 15 and ignoring some constant, we can morally think of \(W_{\delta}\) and \(Q_{\delta}(W)\) as follows:
\[Q_{\delta}(W) \approx\{W^{\prime}\in A_{\mathrm{loc}}(k,n):W^{\prime}\cap B^{n }(0,1)\subset W_{\delta}\},\] \[W_{\delta} \approx\bigg{(}\bigcup_{W^{\prime}\in Q_{\delta}(W)}W^{\prime} \bigg{)}\cap B^{n}(0,1). \tag{23}\]
It is good to think of them as the same thing. The reader can consider the two \(\delta\)-thickened versions for a point \(x\): both of \(Q_{\delta}(\{x\})\) and \(\{x\}_{\delta}\) are \(B^{n}(x,\delta)\) (the ball of radius \(\delta\) centered at \(x\)), since we can identify \(A_{\mathrm{loc}}(0,n)\) as a subset of \(\mathbb{R}^{n}\).
Let us talk about the projections. As before, let \(\ell\in A_{\mu}\) and \(V\in G_{\mu}\). We have two \(\delta\)-thickened versions of \(\ell\): \(\ell_{\delta}\) which is a subset of \(B^{n}(0,1)\), and \(Q_{\delta}(\ell)\) which is a subset of \(A_{\mathrm{loc}}(1,n)\). By the quantitative transversality condition between \(\ell\) and \(V\), we see that \(P_{V}(\ell_{\delta})\) has dimensions \(\sim\delta\times\cdots\times\delta\times 1\), where the implicit constant depends on \(\mu\). When \(\mu\) is fixed, we may just ignore this implicit constant, so let us assume \(P_{V}(\ell_{\delta})\) is a \(\delta\)-tube.
We have the \(\delta\)-discretized version of Definition 18. Let \(W=P_{V}(\ell)\oplus W^{\prime}\), which is an \((n-k+1)\)-plane orthogonal to \(V\) at \(P_{V}(\ell)\). We can morally think of \(W_{\delta}\) as \(P_{V}(\ell_{\delta})\times W^{\prime}\). We can also morally think of \(W_{\delta}\) as \(P_{V}^{-1}(P_{V}(\ell_{\delta}))\cap B^{n}(0,1)\). There are some features of \(W_{\delta}\):
1. It is an \((n-k+1)\)-dimensional \(\delta\)-slab;
2. It contains \(\ell_{\delta}\);
3. Its intersection with \(V\) is a \(\delta\)-tube \(P_{V}(\ell_{\delta})\), and the other \((n-k)\) directions are orthogonal to \(V\).
We would like to say \(W_{\delta}\) is **orthogonal to \(V\) at \(P_{V}(\ell_{\delta})\)**. See Figure 1.
Figure 1.
## 3. Theorem 6 implies Proposition 5
We will use the Kaufman-type estimate (18) to deduce (11), and use Falconer-type estimate (19) to deduce (12).
We first prove (11). Since \(S(a)\) is monotonically increasing in \(a\), we can assume \(a\in[0,k-1]\) and prove
\[S(a)\geq a.\]
Suppose by contradiction that \(S(a)\leq a-2\varepsilon\) for some \(\varepsilon>0\). From the definition of \(S(a)\) in (3), we can find a set \(\mathbf{A}\subset A(1,n)\) with \(\dim(\mathbf{A})=a\) so that the set
\[E=\{V\in G(k,n):\dim(\pi_{V}(\mathbf{A}))<a-\varepsilon\} \tag{24}\]
has positive measure. Since \(A(1,n)\) can be covered by countably many translated copies of \(A_{\mathrm{loc}}(1,n)\), we may assume \(\mathbf{A}\subset A_{\mathrm{loc}}(1,n)\) with \(\dim(\mathbf{A})>a-\varepsilon/100\) and that (24) still has positive measure.
Since \(\dim(\mathbf{A})>a-\varepsilon/100\), there exists \(\ell\in\mathbf{A}\) such that
\[\dim(Q_{r}(\ell)\cap\mathbf{A})\geq a-\varepsilon/2,\]
for any \(r>0\) and \(Q_{r}(\ell)\) being a ball of radius \(r\) centered at \(\ell\) in the metric space \(A_{\mathrm{loc}}(1,n)\). Usually, \(\ell\) is referred to as an \((a-\varepsilon/2)\)-density point in \(\mathbf{A}\). Without loss of generality, we assume \(\ell\) is parallel to the \(x_{n}\)-axis. We also use \(\ell_{0}\) to denote the translation of \(\ell\) to the origin, which by our assumption, is the \(x_{n}\)-axis.
Since \(\{V\in G(k,n):\ell_{0}\subset V^{\perp}\}\) has zero measure as a subset of \(G(k,n)\), we see that
\[E\cap\{V\in G(k,n):\ell_{0}\not\subset V^{\perp}\}\]
has positive measure. For any \(V\in\{V\in G(k,n):\ell_{0}\subset V^{\perp}\}\), there exists a number \(\mu=\mu_{V}>0\) such that the following holds. Let \(Q_{\mu}(\ell)\subset A_{\mathrm{loc}}(1,n)\) be a ball of radius \(\mu\) centered at \(\ell\), and \(Q_{\mu}(V)\subset G(k,n)\) be a ball of radius \(\mu\) centered at \(V\). Then
\[\angle(\ell_{1},V_{1}^{\perp})>\mu\]
for any \(\ell_{1}\in Q_{\mu}(\ell)\) and \(V_{1}\in Q_{\mu}(V)\). This guarantees the transversality condition (16). We will let \(s=a-\varepsilon\). We can check
\[s<\min\{\dim(Q_{r}(\ell)\cap\mathbf{A}),2(k-1)\}.\]
Then, we can apply Theorem 6 to the set \(\mathbf{A}\cap Q_{\mu}(\ell)\). By (18), we obtain that
\[\dim(\{V_{1}\in Q_{\mu}(V):\dim(\pi_{V_{1}}(\mathbf{A}\cap Q_{\mu}(\ell)))<a-\varepsilon\})\leq k(n-k)+a-\varepsilon-(k-1)\leq k(n-k)-\varepsilon. \tag{25}\]
The last inequality is because \(a\leq k-1\). This implies
\[\dim(\{V_{1}\in Q_{\mu}(V):\dim(\pi_{V_{1}}(\mathbf{A}))<a-\varepsilon\})\leq k(n-k)-\varepsilon. \tag{26}\]
We can write \(E\cap\{V\in G(k,n):\ell_{0}\not\subset V^{\perp}\}\) as a countable union of sets of the form

\[\{V_{1}\in Q_{\mu}(V):\dim(\pi_{V_{1}}(\mathbf{A}))<a-\varepsilon\}.\]
Therefore, \(\dim(E\cap\{V\in G(k,n):\ell_{0}\not\subset V^{\perp}\})\leq k(n-k)-\varepsilon\), which contradicts that \(E\cap\{V\in G(k,n):\ell_{0}\not\subset V^{\perp}\}\) has positive measure.
Next, we prove (12). We can assume \(a\in[n-1,n+k-2]\) and prove
\[S(a)\geq a-(n-k).\]
Arguing in the same way, we can find a set \(\mathbf{A}\subset A_{\mathrm{loc}}(1,n)\) with \(\dim(\mathbf{A})>a-\varepsilon/100\) so that
\[E=\{V\in G(k,n):\dim(\pi_{V}(\mathbf{A}))<a-(n-k)-\varepsilon\}\]
has positive measure. By finding an \((a-\varepsilon/2)\)-density point \(\ell\) in \(\mathbf{A}\) and similarly defining \(\ell_{0}\), we can assume
\[E\cap\{V\in G(k,n):\ell_{0}\not\subset V^{\perp}\}\]
has positive measure. We will let \(s=a-(n-k)-\varepsilon\). We can check
\[s<\min\{a-\varepsilon/2,2(k-1)\}.\]
Then, we can apply Theorem 6 and obtain a similar estimate as (26):
\[\begin{split}&\dim(\{V_{1}\in Q_{\mu}(V):\dim(\pi_{V_{1}}(\mathbf{A}))<a-(n-k)-\varepsilon\})\\ &\leq k(n-k)+(a-(n-k)-\varepsilon)-(a-\varepsilon/2)+(n-k)\\ &\leq k(n-k)-\varepsilon/2.\end{split} \tag{27}\]
Therefore, \(\dim(E\cap\{V\in G(k,n):\ell_{0}\not\subset V^{\perp}\})\leq k(n-k)- \varepsilon/2\), which contradicts that \(E\cap\{V\in G(k,n):\ell_{0}\not\subset V^{\perp}\}\) has positive measure.
**Remark 20**.: We actually showed that the number
\[\widetilde{S}(a):=\inf_{\mathbf{A}\subset A(1,n),\dim(\mathbf{A})=a}\operatorname {ess\,sup}_{V\in G(k,n)}\dim\Big{(}\pi_{V}(\mathbf{A})\cap A(1,V)\Big{)}, \tag{28}\]
which looks smaller than \(S(a)\) defined in (3), actually has the same lower bound as \(S(a)\) in Proposition 5.
## 4. Kaufman-type exceptional set estimate
We state a discretized version of the estimate (18).
**Theorem 21**.: _Fix a number \(0<\mu<1/100\), and let \(A_{\mu},G_{\mu}\) be as in Theorem 6. Fix \(t>k(n-k)-(k-1)\), \(0<s<a\) and \(0<u<t-k(n-k)+(k-1)\). For sufficiently small \(\varepsilon>0\) (depending on \(\mu,n,k,a,s,t,u\)), the following holds. Let \(0<\delta<1/100\). Let \(\mathbf{H}\subset A_{\mu}\) be a \((\delta,u)\)-set with \(\#\mathbf{H}\gtrsim(\log\delta^{-1})^{-2}\delta^{-u}\). Let \(\mathcal{V}\subset G_{\mu}\) be a \((\delta,t)\)-set with \(\#\mathcal{V}\gtrsim(\log\delta^{-1})^{-2}\delta^{-t}\). Suppose for each \(V\in\mathcal{V}\), we have a collection of slabs \(\mathbb{T}_{V}\), where each \(T\in\mathbb{T}_{V}\) has dimensions \(\underbrace{\delta\times\cdots\times\delta}_{k-1\text{ times}}\times\underbrace{1\times\cdots\times 1}_{n-k+1\text{ times}}\) and is orthogonal to \(V\) at some \(\delta\)-tube in \(V\)._
_The \(s\)-dimensional Frostman condition holds for \(\mathbb{T}_{V}\): for any \(\delta\leq r\leq 1\) and any \((n-k+1)\)-dimensional \(r\)-slab \(W_{r}\) that is orthogonal to \(V\) at some \(r\)-tube in \(V\), we have_
\[\#\{T\in\mathbb{T}_{V}:T\subset W_{r}\}\lesssim(r/\delta)^{s}. \tag{29}\]
_We also assume that for each \(\ell\in\mathbf{H}\),_
\[\#\{V\in\mathcal{V}:\ell_{\delta}\subset T,\text{ for some }T\in\mathbb{T}_{V}\} \gtrsim(\log\delta^{-1})^{-2}\#\mathcal{V}. \tag{30}\]
_Then, we have_
\[\delta^{-u-t}\lesssim_{\varepsilon}\delta^{-\varepsilon/2}\sum_{V\in\mathcal{ V}}\#\mathbb{T}_{V}\lesssim\delta^{-\varepsilon-t-s}. \tag{31}\]
### Proof of the Kaufman-type estimate
We give the proof of (18) using Theorem 21. We choose \(\alpha<\dim(\mathbf{A})\) and \(t<\dim(E_{s}(\mathbf{A}))\). By Frostman's lemma, there exist probability measures \(\nu_{\mathbf{A}}\) supported on \(\mathbf{A}\) and \(\nu_{E}\) supported on \(E_{s}(\mathbf{A})\) satisfying the Frostman condition:
\[\nu_{\mathbf{A}}(Q_{r})\lesssim r^{\alpha}\text{ for any }Q_{r}\text{ being a ball of radius }r\text{ in }A_{\text{loc}}(1,n), \tag{32}\]
\[\nu_{E}(Q_{r})\lesssim r^{t}\text{ for any }Q_{r}\text{ being a ball of radius }r\text{ in }G(k,n). \tag{33}\]
We only need to prove
\[t\leq k(n-k)+s-(k-1), \tag{34}\]
since then we can let \(t\to\dim(E_{s}(\mathbf{A}))\).
Fix a \(V\in E_{s}(\mathbf{A})\). By definition, we have \(\dim(\pi_{V}(\mathbf{A}))<s\). We also fix a small number \(\epsilon_{\circ}\) which we will later send to \(0\). We view \(\pi_{V}(\mathbf{A})\) as a subset of \(A_{\text{loc}}(1,V)=A_{\text{loc}}(1,k)\). By Lemma 13, we can cover \(\pi_{V}(\mathbf{A})\) by balls \(\mathbb{D}_{V}=\{D\}\) in \(A_{\text{loc}}(1,V)\), each of which has radius \(2^{-j}\) for some integer \(j>|\log_{2}\epsilon_{\circ}|\). We define \(\mathbb{D}_{V,j}:=\{D\in\mathbb{D}_{V}:r(D)=2^{-j}\}\). Lemma 13 yields the following properties:
\[\sum_{D\in\mathbb{D}_{V}}r(D)^{s}<1, \tag{35}\]
and for each \(j\) and \(r\)-ball \(Q_{r}\subset A_{\text{loc}}(1,V)\), we have
\[\#\{D\in\mathbb{D}_{V,j}:D\subset Q_{r}\}\lesssim\left(\frac{r}{2^{-j}}\right) ^{s}. \tag{36}\]
On the one hand, \(D\) is a \(2^{-j}\)-ball in \(A_{\text{loc}}(1,V)\). On the other hand we can view \(D\) as a \(2^{-j}\)-tube in \(V\). We use \(t_{D}\) to denote this \(2^{-j}\)-tube. By the heuristic (23), we can view \(t_{D}\) as
\[\bigcup_{\ell\in D}\ell\cap\{x\in V:|x|\leq 1\}.\]
For each \(V\in E_{s}(\mathbf{A})\), we can find such a \(\mathbb{D}_{V}=\bigcup_{j}\mathbb{D}_{V,j}\). We also define the slabs \(\mathbb{T}_{V,j}:=\{P_{V}^{-1}(t_{D})\cap B^{n}(0,1):D\in\mathbb{D}_{V,j}\}\), \(\mathbb{T}_{V}=\bigcup_{j}\mathbb{T}_{V,j}\). Each slab in \(\mathbb{T}_{V,j}\) has dimensions
\[\underbrace{2^{-j}\times 2^{-j}\times\cdots\times 2^{-j}}_{k-1\text{ times}}\times\underbrace{1\times 1\times\cdots\times 1}_{n-k+1\text{ times}}\]
such that it is orthogonal to \(V\) at some \(t_{D}\) for \(D\in\mathbb{D}_{V,j}\). For each such slab \(T\in\mathbb{T}_{V,j}\), we use its bold-font \(\mathbf{T}\) to denote the set of lines whose unit truncations are in \(T\). More precisely,
\[\mathbf{T}:=\{\ell:\ell\cap B^{n}(0,1)\subset T\}. \tag{37}\]
One easily sees that \(\mathbf{A}\subset\bigcup_{T\in\mathbb{T}_{V}}\mathbf{T}\). By pigeonholing, there exists \(j(V)\) such that
\[\nu_{\mathbf{A}}(\mathbf{A}\cap(\cup_{T\in\mathbb{T}_{V,j(V)}}\mathbf{T})) \geq\frac{1}{10j(V)^{2}}\nu_{\mathbf{A}}(\mathbf{A})=\frac{1}{10j(V)^{2}}. \tag{38}\]
For each \(j>|\log_{2}\epsilon_{\circ}|\), define \(E_{s,j}(A):=\{V\in E_{s}(A):j(V)=j\}\). Then we obtain a partition of \(E_{s}(\mathbf{A})\):
\[E_{s}(\mathbf{A})=\bigsqcup_{j}E_{s,j}(\mathbf{A}).\]
By pigeonholing again, there exists \(j\) such that
\[\nu_{E}(E_{s,j}(\mathbf{A}))\geq\frac{1}{10j^{2}}\nu_{E}(E_{s}(\mathbf{A}))= \frac{1}{10j^{2}}. \tag{39}\]
In the rest of the proof, we fix this \(j\). We also set \(\delta=2^{-j}\). By Lemma 12, there exists a \((\delta,t)\)-set \(\mathcal{V}\subset E_{s,j}(\mathbf{A})\) with cardinality \(\#\mathcal{V}\gtrsim(\log\delta^{-1})^{-2}\delta^{-t}\).
Next, we consider the set \(M:=\{(\ell,V)\in\mathbf{A}\times\mathcal{V}:\ell\in\cup_{T\in\mathbb{T}_{V,j}} \mathbf{T}\}\). We also use \(\#\) to denote the counting measure on \(\mathcal{V}\). Define the sections of \(M\):
\[M_{\ell}=\{V:(\ell,V)\in M\},\quad M_{V}:=\{\ell:(\ell,V)\in M\}.\]
By (38) and Fubini, we have

\[(\nu_{\mathbf{A}}\times\#)(M)\geq\frac{1}{10j^{2}}\#\mathcal{V}. \tag{40}\]

This implies

\[(\nu_{\mathbf{A}}\times\#)\bigg{(}\Big{\{}(\ell,V)\in M:\#M_{\ell}\geq\frac{1}{20j^{2}}\#\mathcal{V}\Big{\}}\bigg{)}\geq\frac{1}{20j^{2}}\#\mathcal{V}. \tag{41}\]

By (41), we have

\[\nu_{\mathbf{A}}\bigg{(}\Big{\{}\ell\in\mathbf{A}:\#M_{\ell}\geq\frac{1}{20j^{2}}\#\mathcal{V}\Big{\}}\bigg{)}\geq\frac{1}{20j^{2}}. \tag{42}\]
We are ready to apply Theorem 21. Recall \(\delta=2^{-j}\) and \(\#\mathcal{V}\gtrsim(\log\delta^{-1})^{-2}\delta^{-t}\). We may assume \(t>k(n-k)-(k-1)\), otherwise we are done. Set
\[u=\min\{t-k(n-k)+(k-1),a\}-\varepsilon. \tag{43}\]
By (42) and Lemma 12, we can find a \((\delta,u)\)-subset of \(\{\ell\in\mathbf{A}:\#M_{\ell}\geq\frac{1}{20j^{2}}\#\mathcal{V}\}\) with cardinality \(\gtrsim(\log\delta^{-1})^{-2}\delta^{-u}\). We denote this set by \(\mathbf{H}\). For each \(\ell\in\mathbf{H}\), we see that there are \(\gtrsim(\log\delta^{-1})^{-2}\#\mathcal{V}\) many slabs from \(\cup_{V\in\mathcal{V}}\mathbb{T}_{V,j}\) that contain \(\ell_{\delta}\). We can now apply Theorem 21 to obtain
\[\delta^{-u-t}\lesssim_{\varepsilon}\delta^{-\varepsilon-t-s}.\]
By letting \(\epsilon_{\circ}\to 0\) (and hence \(\delta\to 0\)), we obtain
\[u+t\leq t+s+\varepsilon.\]
Plugging in the definition of \(u\) and letting \(\varepsilon\to 0\), we obtain
\[\min\{t-k(n-k)+(k-1),a\}\leq s.\]
Since \(a>s\), we obtain
\[t\leq k(n-k)+s-(k-1). \tag{44}\]
### Discretized Kaufman-type estimate
Proof of Theorem 21.: Define \(\mathbb{T}=\bigcup_{V\in\mathcal{V}}\mathbb{T}_{V}\), where \(\mathbb{T}_{V}\) is given by Theorem 21. If two \(\delta\)-slabs are comparable, we simply identify them, so we may assume the \(\delta\)-slabs in \(\mathbb{T}\) are essentially distinct.
We define the following incidence pair:
\[\mathcal{I}=\mathcal{I}(\mathbb{T},\mathcal{V}):=\{(T,V)\in\mathbb{T}\times \mathcal{V}:T\in\mathbb{T}_{V}\}. \tag{45}\]
We will prove the theorem by comparing the upper and lower bound of \(\mathcal{I}\). We easily see the upper bound
\[\#\mathcal{I}\lesssim\sum_{V\in\mathcal{V}}\#\mathbb{T}_{V}\lesssim\#\mathcal{V}\cdot\max_{V\in\mathcal{V}}\#\mathbb{T}_{V}\lesssim\delta^{-t-s}. \tag{46}\]
Here, we used the bound \(\#\mathcal{V}\lesssim\delta^{-t}\) since \(\mathcal{V}\) is a \((\delta,t)\)-set, and we used \(\#\mathbb{T}_{V}\lesssim\delta^{-s}\) by plugging in \(r=1\) in (29).
For the lower bound of \(\#\mathcal{I}\), we will use the following inequality:
\[\#\big{(}\bigcup_{i}A_{i}\big{)}\geq\sum_{i}\#(A_{i}\setminus\bigcup_{j\neq i}A_ {j}).\]
By [1, Lemma 13], we choose a \(\delta|\log\delta|^{O(1)}\)-separated subset \(\mathbf{H}^{\prime}\subset\mathbf{H}\) such that \(\mathbf{H}^{\prime}\) is a \((\delta|\log\delta|^{O(1)},u)\)-set and
\[\#\mathbf{H}^{\prime}\gtrsim|\log\delta|^{-O(1)}\delta^{-u}. \tag{47}\]
Here, \(O(1)\) is a large constant to be determined later.
For each \(\ell\in\mathbf{H}^{\prime}\), we define the following subset of \(\mathcal{I}\).
\[\mathcal{I}_{\ell}:=\{(T,V)\in\mathcal{I}:\ell_{\delta}\subset T\}. \tag{48}\]
Then, \(\mathcal{I}\supset\bigcup_{\ell\in\mathbf{H}^{\prime}}\mathcal{I}_{\ell}\).
We have
\[\#\mathcal{I}\geq\#\left(\bigcup_{\ell\in\mathbf{H}^{\prime}}\mathcal{I}_{\ell}\right)\geq\#\left(\bigcup_{\ell\in\mathbf{H}^{\prime}}\left(\mathcal{I}_{\ell}\setminus\bigcup_{\ell_{1}\in\mathbf{H}^{\prime}\setminus\{\ell\}}\mathcal{I}_{\ell_{1}}\right)\right) \tag{49}\]
\[=\sum_{\ell\in\mathbf{H}^{\prime}}\#\left(\mathcal{I}_{\ell}\setminus\bigcup_{\ell_{1}\in\mathbf{H}^{\prime}\setminus\{\ell\}}\mathcal{I}_{\ell_{1}}\right) \tag{50}\]
\[\geq\sum_{\ell\in\mathbf{H}^{\prime}}\bigg{(}\#\mathcal{I}_{\ell}-\sum_{\ell_{1}\in\mathbf{H}^{\prime}\setminus\{\ell\}}\#\left(\mathcal{I}_{\ell}\cap\mathcal{I}_{\ell_{1}}\right)\bigg{)}. \tag{51}\]
We will show that
\[\#\mathcal{I}_{\ell}-\sum_{\ell_{1}\in\mathbf{H}^{\prime}\setminus\{\ell\}} \#\left(\mathcal{I}_{\ell}\cap\mathcal{I}_{\ell_{1}}\right)\geq\frac{1}{2}\# \mathcal{I}_{\ell}.\]
This will imply
\[\#\mathcal{I}\geq\frac{1}{2}\sum_{\ell\in\mathbf{H}^{\prime}}\#\mathcal{I}_{ \ell}\gtrsim(\log\delta^{-1})^{-O(1)}\delta^{-u-t}. \tag{52}\]
We make one observation for \(\mathcal{I}_{\ell}\). Note that for any \(V\in\mathcal{V}\), there are \(\lesssim 1\) slabs \(T_{i}\) from \(\mathbb{T}\) such that \((T_{i},V)\in\mathcal{I}_{\ell}\). The reason is that if \((T_{i},V)\in\mathcal{I}_{\ell}\), then \(T_{i}\) is orthogonal to \(V\) at \(P_{V}(\ell_{\delta})\), and there can be at most \(O(1)\) such \(T_{i}\)'s. By losing a constant factor in the estimate, we may assume that for any \(V\in\mathcal{V}\) there is one or no \(T\) such that \((T,V)\in\mathcal{I}_{\ell}\). Therefore, we can identify \(\mathcal{I}_{\ell}\) with the set on the left-hand side of (30). Also, (30) implies that for any \(\ell\in\mathbf{H}^{\prime}\),
\[\#\mathcal{I}_{\ell}\gtrsim(\log\delta^{-1})^{-2}\#\mathcal{V}\gtrsim(\log \delta^{-1})^{-4}\delta^{-t}. \tag{53}\]
For fixed \(\ell\), and \(\ell_{1}\in\mathbf{H}^{\prime}\setminus\{\ell\}\), we want to find an upper bound for \(\#(\mathcal{I}_{\ell}\cap\mathcal{I}_{\ell_{1}})\).
By the condition that \(\mathbf{H}^{\prime}\) is contained in a \(\mu\)-ball in \(A_{\mathrm{loc}}(1,n)\), we may assume all the \(\ell\) in \(\mathbf{H}^{\prime}\) form an angle \(\leq 10^{-1}\) with the \(x_{n}\)-axis. We will introduce a new metric for these lines. It will not be hard to see that this metric is comparable to the metric on \(A_{\mathrm{loc}}(1,n)\) introduced in (22).
Let \(\Pi_{0}=\mathbb{R}^{n-1}\times\{0\},\Pi_{1}=\mathbb{R}^{n-1}\times\{1\}\). Then each \(\ell\in\mathbf{H}^{\prime}\) intersects \(\Pi_{0}\), \(\Pi_{1}\) at two points, denoted by \(P_{0}(\ell),P_{1}(\ell)\). We will use \((P_{0}(\ell),P_{1}(\ell))\) as local coordinates for \(\ell\), and we define the metric to be
\[d(\ell,\ell_{1}):=|P_{0}(\ell)-P_{0}(\ell_{1})|+|P_{1}(\ell)-P_{1}(\ell_{1})|. \tag{54}\]
This metric is equivalent to the metric defined in (22); we omit the proof.
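For concreteness, here is a small helper (ours) computing the endpoint coordinates \(P_{0}(\ell),P_{1}(\ell)\) of a line given by a point \(p\) and a direction \(e\) with nonzero \(x_{n}\)-component, together with the metric (54).

```python
import numpy as np

def endpoint_coords(p, e):
    """Intersections of the line {p + t e} with x_n = 0 and x_n = 1,
    returned as the first n-1 coordinates; requires e[-1] != 0."""
    t0 = -p[-1] / e[-1]
    t1 = (1.0 - p[-1]) / e[-1]
    return (p + t0 * e)[:-1], (p + t1 * e)[:-1]

def line_dist(line1, line2):
    """d(l, l1) = |P0(l) - P0(l1)| + |P1(l) - P1(l1)|, as in (54)."""
    P0a, P1a = endpoint_coords(*line1)
    P0b, P1b = endpoint_coords(*line2)
    return np.linalg.norm(P0a - P0b) + np.linalg.norm(P1a - P1b)
```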
Next, we will estimate \(\#(\mathcal{I}_{\ell}\cap\mathcal{I}_{\ell_{1}})\). By the definition of \(d(\ell,\ell_{1})\), we may assume \(|P_{0}(\ell)-P_{0}(\ell_{1})|\geq\frac{1}{2}d(\ell,\ell_{1})\). We denote the number
\[d:=|P_{0}(\ell)-P_{0}(\ell_{1})|.\]
We remind the reader \(\delta/2\leq d\leq 1\).
We introduce the two dimensional plank \(\mathcal{P}\) which has size \(\sim 1\times d\). The side of \(\mathcal{P}\) of length \(1\) is \(\ell\cap B^{n}(0,1)\). The side of \(\mathcal{P}\) of length \(d\) is \(\overline{P_{0}(\ell)P_{0}(\ell_{1})}\).
If \((T,V)\in\mathcal{I}_{\ell}\cap\mathcal{I}_{\ell_{1}}\), then \(\mathcal{P}\subset T\) and \(P_{V}(T)\) is contained in a \(\delta\)-tube. Therefore, \(P_{V}(\mathcal{P})\) is contained in a \(\delta\)-tube. We formally write
\[\#(\mathcal{I}_{\ell}\cap\mathcal{I}_{\ell_{1}})\lesssim\#\{V\in\mathcal{V}:P_{V}(\mathcal{P})\text{ is contained in a $\delta$-tube}\},\]

where we used the observation above that each \(V\) contributes at most \(O(1)\) pairs.
We make the following observation.
**Lemma 22**.: _Let \(\mathcal{M}\) be a set of \(k\)-dimensional subspaces \(V\) so that the projection of \(\mathcal{P}\) to \(V\) is a line or point. In other words,_
\[\mathcal{M}:=\{V\in G(k,n):\dim(P_{V}(\mathcal{P}))\leq 1\}. \tag{55}\]
_Then we have_
\[\{V\in G(k,n):P_{V}(\mathcal{P})\text{ is contained in a $\delta$-tube}\} \subset\mathcal{N}_{C\delta d^{-1}}(\mathcal{M}). \tag{56}\]
_Here, \(C\) is some big constant._
Proof.: We sketch the idea of the proof. Let \(V\) be in the LHS of (56). Then \(P_{V}(\mathcal{P})\) is contained in a \(1\times\delta\)-tube. Define \(\mathcal{Q}\) to be a \(1\times 1\)-square, obtained by prolonging the length-\(d\) side of \(\mathcal{P}\) by a factor of \(d^{-1}\). We see that \(P_{V}(\mathcal{Q})\) is contained in a \(1\times d^{-1}\delta\)-tube. The next step is a rotation argument. We claim that by rotating \(V\) within angle \(\lesssim d^{-1}\delta\), we obtain another \(k\)-subspace \(W\) such that \(P_{W}(\mathcal{Q})\) is a line. We leave out the proof of this claim.
Therefore, we find a \(W\in\mathcal{M}\) such that \(d(V,W)\lesssim d^{-1}\delta\), which finishes the proof of lemma.
The next lemma is about the dimension of \(\mathcal{M}\).
**Lemma 23**.: _Let \(\mathcal{M}\) be given by (55). Then_
\[\dim(\mathcal{M})=1+k(n-k-1). \tag{57}\]
Proof.: We sketch the proof. We may assume \(\mathcal{P}\) spans the plane \(\mathbb{R}^{2}\). If \(\dim(P_{V}(\mathbb{R}^{2}))\leq 1\), then \(V^{\perp}\cap\mathbb{R}^{2}\) contains a line or the whole \(\mathbb{R}^{2}\). We just need to consider the first case. We think of \(V^{\perp}\) as spanned by \(n-k\) orthonormal vectors \(v_{1},\dots,v_{n-k}\). From the condition that \(V^{\perp}\cap\mathbb{R}^{2}\) contains a line, we have a one-dimensional choice of the vector \(v_{1}\) lying in \(\mathbb{R}^{2}\), and then we choose \(v_{2},\dots,v_{n-k}\) from \(v_{1}^{\perp}\), which gives \(\dim(G(n-k-1,n-1))=k(n-k-1)\) dimensions of choice. Therefore, such \(V^{\perp}\), and hence \(V\), has \(1+k(n-k-1)\) dimensions of choice.
We have the estimate
\[\#(\mathcal{I}_{\ell}\cap\mathcal{I}_{\ell_{1}})\leq\#(\mathcal{V}\cap \mathcal{N}_{C\delta d^{-1}}(\mathcal{M})). \tag{58}\]
And we remind the reader that \(\mathcal{V}\) is a \((\delta,t)\)-set.
By Lemma 23, we can cover \(\mathcal{N}_{C\delta d^{-1}}(\mathcal{M})\) by \(\sim(\delta^{-1}d)^{1+k(n-k-1)}\) many \(\delta d^{-1}\)-balls in \(G(k,n)\). By the \((\delta,t)\) property of \(\mathcal{V}\), we have
\[\#(\mathcal{V}\cap\mathcal{N}_{C\delta d^{-1}}(\mathcal{M}))\lesssim(\delta^{- 1}d)^{1+k(n-k-1)}d^{-t}.\]
So, we have
\[\sum_{\ell_{1}\in\mathbf{H}^{\prime}\setminus\{\ell\}}\#(\mathcal{I} _{\ell}\cap\mathcal{I}_{\ell_{1}}) \lesssim\sum_{\ell_{1}\in\mathbf{H}^{\prime}\setminus\{\ell\}} \big{(}\delta^{-1}d(\ell,\ell_{1})\big{)}^{1+k(n-k-1)}\,d(\ell,\ell_{1})^{-t}\] \[=\sum_{\delta|\log\delta|^{O(1)}\leq d\leq 1}\ \ \sum_{\ell_{1}\in\mathbf{H}^{\prime},d(\ell,\ell_{1})\sim d}\big{(}\delta^{-1}d \big{)}^{1+k(n-k-1)}\,d^{-t}.\]
Here the summation over \(d\) is over dyadic numbers. Since \(\#(\mathbf{H}^{\prime}\cap Q_{d}(\ell))\lesssim(\frac{d}{\delta})^{u}\), the expression above is bounded by
\[\lesssim\sum_{\delta|\log\delta|^{O(1)}\leq d\leq 1}\left(\frac{d}{ \delta}\right)^{u}\left(\frac{d}{\delta}\right)^{1+k(n-k-1)}d^{-t}\] \[=\delta^{-t}\sum_{\delta|\log\delta|^{O(1)}\leq d\leq 1}\left( \frac{d}{\delta}\right)^{u+1+k(n-k-1)-t}\] \[\lesssim\delta^{-t}|\log\delta|^{O(1)(u+1+k(n-k-1)-t)}.\]
From the condition in Theorem 21, \(u+1+k(n-k-1)-t\leq-\varepsilon<0\). By choosing the constant \(O(1)\) big enough, we have
\[\sum_{\ell_{1}\in\mathbf{H}^{\prime}\setminus\{\ell\}}\#(\mathcal{I}_{\ell} \cap\mathcal{I}_{\ell_{1}})\leq C^{-1}|\log\delta|^{-4}\delta^{-t}\leq\frac{1 }{2}\#\mathcal{I}_{\ell}.\]
As a result, we have
\[\#\mathcal{I}\geq|\log\delta|^{-O(1)}\delta^{-u-t}.\]
Comparing this with the upper bound of \(\#\mathcal{I}\) in (46), we finish the proof.
## 5. Falconer-type exceptional set estimate
We will use the high-low method to prove the Falconer-type exceptional set estimate. The key ingredient in the high-low method is Fourier analysis. Since we are working with the set \(\mathbf{A}\) as a subset of \(A(1,n)\), we need a Fourier transform on \(A(1,n)\). It is quite hard to define a global Fourier transform on \(A(1,n)\), but since our set \(\mathbf{A}\) is localized in \(A_{\mu}\) (see Theorem 6), we only need to consider the Fourier transform on the set \(\widetilde{A}(1,n)\) consisting of lines in \(A(1,n)\) that are transverse to \(\mathbb{R}^{n-1}\). In other words,
\[\widetilde{A}(1,n):=\{\ell\in A(1,n):\ell\text{ is not parallel to }\mathbb{R}^{n-1}\}. \tag{59}\]
In the next subsection, we introduce the Fourier transform in \(\widetilde{A}(1,n)\) and discuss some properties.
### Fourier transform on \(A(1,n)\)
First, we introduce coordinates on \(\widetilde{A}(1,n)\). By definition, every \(\ell\in\widetilde{A}(1,n)\) intersects \(\mathbb{R}^{n-1}\times\{0\}\) and \(\mathbb{R}^{n-1}\times\{1\}\) at two points, which we denote by \(X(\ell),Y(\ell)\). Thus we parametrize \(\widetilde{A}(1,n)\) by the coordinates \((X,Y)\):
\[(X,Y):\widetilde{A}(1,n)\xrightarrow{\cong}\mathbb{R}^{n-1}\times\mathbb{R}^{ n-1}\ \ \ell\mapsto(X(\ell),Y(\ell)). \tag{60}\]
Here, we can just view \(\mathbb{R}^{n-1}\times\mathbb{R}^{n-1}\) as \(\mathbb{R}^{2(n-1)}\). Thus we can pull back the Fourier transform on \(\mathbb{R}^{2(n-1)}\) to one on \(\widetilde{A}(1,n)\). For convenience, we denote the
inverse of (60) to be
\[\ell:\mathbb{R}^{n-1}\times\mathbb{R}^{n-1}\xrightarrow{\cong}\widetilde{A}(1,n) \ \ (X,Y)\mapsto\ell(X,Y). \tag{61}\]
Here is the precise definition of the Fourier transform on \(\widetilde{A}(1,n)\). Suppose \(F=F(\ell)\) is a function on \(\widetilde{A}(1,n)\). We define the Fourier transform of \(F\), denoted by \(\widehat{F}(\ell)\), to be the function on \(\widetilde{A}(1,n)\) given by
\[\widehat{F}(\ell):=\big{(}\mathcal{F}(F\circ\ell)\big{)}(X(\ell),Y(\ell)). \tag{62}\]
We explain the expression. Given \(F\) as a function on \(\widetilde{A}(1,n)\), we use \(\ell\) in (61) to pull back \(F\) to a function \(F\circ\ell\) which is defined on \(\mathbb{R}^{2(n-1)}\). We apply \(\mathcal{F}\), which is the standard Fourier transform on \(\mathbb{R}^{2(n-1)}\), to \(F\circ\ell\), and obtain the function \(\mathcal{F}(F\circ\ell)\) on \(\mathbb{R}^{2(n-1)}\). At last, we use \((X,Y)\) in (60) to pull back \(\mathcal{F}(F\circ\ell)\) to the function defined on \(\widetilde{A}(1,n)\).
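In the \((X,Y)\) chart, (62) is just the standard Fourier transform on \(\mathbb{R}^{2(n-1)}\). The following numpy sketch (ours, with a discrete FFT standing in for \(\mathcal{F}\)) makes this concrete for \(n=2\), where a line transverse to \(\mathbb{R}^{1}\) is coded by \((X,Y)\in\mathbb{R}\times\mathbb{R}\).

```python
import numpy as np

N = 128
X = np.linspace(-1.0, 1.0, N)
Y = np.linspace(-1.0, 1.0, N)
XX, YY = np.meshgrid(X, Y, indexing="ij")

# F(l) pulled back to the chart: a bump on a rectangle R0 x R1 of lines,
# i.e. lines with X(l) near 0 and Y(l) near 0.3 (cf. Definition 26 below).
F = ((np.abs(XX) < 0.1) & (np.abs(YY - 0.3) < 0.1)).astype(float)

# \hat F in the same chart: the 2(n-1)-dimensional FFT of the pullback;
# its mass concentrates on the dual rectangle (cf. Proposition 28 below).
F_hat = np.fft.fftshift(np.fft.fft2(F))
```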
Next, we will introduce the dual rectangle. This tool is always used as a black box. It says that if a function is a smooth bump function adapted to a rectangle, then its Fourier transform is morally a smooth function adapted to the dual rectangle multiplied by some normalizing constant. We first review this property for Fourier transform in \(\mathbb{R}^{n}\) and then talk about it in \(\widetilde{A}(1,n)\).
**Definition 24**.: _Let \(R\subset\mathbb{R}^{n}\) be a rectangle of dimensions \(a_{1}\times a_{2}\times\cdots\times a_{n}\). We define the dual rectangle of \(R\) to be another rectangle \(R^{*}\) centered at the origin with dimensions \(a_{1}^{-1}\times a_{2}^{-1}\times\cdots\times a_{n}^{-1}\), so that the edge of \(R^{*}\) of length \(a_{i}^{-1}\) is parallel to the edge of \(R\) of length \(a_{i}\)._
We state without proof the following result.
**Proposition 25**.: _Let \(R\) be a rectangle in \(\mathbb{R}^{n}\). Then there exists a smooth bump function \(\psi_{R}\) such that \(\psi_{R}(x)\geq 1\) for \(x\in R\) and \(\psi_{R}\) decays rapidly outside \(R\). And \(\psi_{R}\) satisfies \(\operatorname{supp}(\mathcal{F}(\psi_{R}))\subset R^{*}\) and_
\[|\mathcal{F}(\psi_{R})|\lesssim|R|\cdot 1_{R^{*}}, \tag{63}\]
_Intuitively, we may think of \(\psi_{R}\) as the indicator function \(1_{R}\)._
Now, we are going to introduce the notion of a rectangle and its dual rectangle in \(\widetilde{A}(1,n)\), and obtain an inequality of the form (63). We will view \(\widetilde{A}(1,n)\) as \(\mathbb{R}^{n-1}\times\mathbb{R}^{n-1}\).
Figure 2.
**Definition 26**.: _We say \(\mathbf{R}\) is a rectangle in \(\mathbb{R}^{n-1}\times\mathbb{R}^{n-1}\) if \(\mathbf{R}\) has the following form_
\[\mathbf{R}=R_{0}\times R_{1}. \tag{64}\]
_Here, \(R_{0}\subset\mathbb{R}^{n-1}\) is a rectangle and \(R_{1}\subset\mathbb{R}^{n-1}\) is a translated copy of \(R_{0}\). We define the dual rectangle to be_
\[\mathbf{R}^{*}=R_{0}^{*}\times R_{1}^{*}. \tag{65}\]
**Remark 27**.: See Figure 2. Since \(R_{1}\) is a translated copy of \(R_{0}\), we have \(R_{0}^{*}=R_{1}^{*}\). There are two ways to understand \(\mathbf{R}\). On the one hand, \(\mathbf{R}\) is a Cartesian product of \(R_{0},R_{1}\) as a subset of \(\mathbb{R}^{n-1}\times\mathbb{R}^{n-1}\). On the other hand, \(\mathbf{R}\) is a subset of \(\widetilde{A}(1,n)\). If we use \(P_{R_{0},R_{1}}\) to denote the rectangle in \(\mathbb{R}^{n}\) which is the convex hull of \(R_{0}\) and \(R_{1}\), then \(\mathbf{R}=\bigg{\{}\ell\in\widetilde{A}(1,n):\ell\cap\{0\leq x_{n}\leq 1\} \subset P_{R_{0},R_{1}}\bigg{\}}\).
We have the following result.
**Proposition 28**.: _Let \(\mathbf{R}\) be a rectangle in \(\mathbb{R}^{n-1}\times\mathbb{R}^{n-1}\). Then there exists a smooth bump function \(\psi_{\mathbf{R}}\) such that \(\psi_{\mathbf{R}}(X,Y)\geq 1\) for \((X,Y)\in\mathbf{R}\) and \(\psi_{\mathbf{R}}\) decays rapidly outside \(\mathbf{R}\). And \(\psi_{\mathbf{R}}\) satisfies \(\mathrm{supp}(\widehat{\psi}_{\mathbf{R}})\subset\mathbf{R}^{*}\) and_
\[|\widehat{\psi}_{\mathbf{R}}|\lesssim|\mathbf{R}|\cdot 1_{\mathbf{R}^{*}}, \tag{66}\]
_Intuitively, we may think of \(\psi_{\mathbf{R}}\) as the indicator function \(1_{\mathbf{R}}\)._
### Discretized estimate
We state a discretized version of (19). The setup is basically the same as Theorem 21. Instead of (31), we have (69).
**Theorem 29**.: _Fix a number \(0<\mu<1/100\), and let \(A_{\mu},G_{\mu}\) be as in Theorem 6. Fix \(t>0\), \(0<s<a\). For sufficiently small \(\varepsilon>0\) (depending on \(\mu,n,k,a,s,t\)), the following holds. Let \(0<\delta<1/100\). Let \(\mathbf{H}\subset A_{\mu}\) be a \((\delta,a)\)-set with \(\#\mathbf{H}\gtrsim(\log\delta^{-1})^{-2}\delta^{-a}\). Let \(\mathcal{V}\subset G_{\mu}\) be a \((\delta,t)\)-set with \(\#\mathcal{V}\gtrsim(\log\delta^{-1})^{-2}\delta^{-t}\). Suppose for each \(V\in\mathcal{V}\), we have a collection of slabs \(\mathbb{T}_{V}\), where each \(T\in\mathbb{T}_{V}\) has dimensions \(\underbrace{\delta\times\cdots\times\delta}_{(k-1)\text{ times}}\times \underbrace{1\times\cdots\times 1}_{(n-k+1)\text{ times}}\) and is orthogonal to \(V\) at some \(\delta\)-tube in \(V\)._
_The \(s\)-dimensional Frostman condition holds for \(\mathbb{T}_{V}\): for any \(\delta\leq r\leq 1\) and any \((n-k+1)\)-dimensional \(r\)-slab \(W_{r}\) that is orthogonal to \(V\) at some \(r\)-tube in \(V\), we have_
\[\#\{T\in\mathbb{T}_{V}:T\subset W_{r}\}\lesssim(r/\delta)^{s}. \tag{67}\]
_We also assume that for each \(\ell\in\mathbf{H}\),_
\[\#\{V\in\mathcal{V}:\ell_{\delta}\subset T,\text{ for some }T\in\mathbb{T}_{V}\} \gtrsim(\log\delta^{-1})^{-2}\#\mathcal{V}. \tag{68}\]
_Then, we have_
\[\delta^{-t-a}\lesssim_{\varepsilon}\delta^{-\varepsilon-(k+1)(n-k)-s}. \tag{69}\]
The proof of (19) is deduced from Theorem 29 by a similar argument, which we do not repeat here. We will focus on the proof of Theorem 29.
Proof of Theorem 29.: The idea is to use the Fourier transform on affine Grassmannian together with the high-low method.
We set up some notation. Let \(\mathbf{H}_{\delta}\) be the set of \(\delta\)-balls in \(A_{\text{loc}}(1,n)\) whose centers are points in \(\mathbf{H}\). We use \(\ell\) to denote the elements in \(\mathbf{H}\). By the heuristic (23), we denote the \(\delta\)-ball in \(\mathbf{H}_{\delta}\) by the corresponding bold-font notation
\[\boldsymbol{\ell}_{\delta}:=\{\ell^{\prime}\in A_{\text{loc}}(1,n):d(\ell, \ell^{\prime})\leq\delta\}.\]
By (67), for each \(V\in\mathcal{V}\), we have an \(s\)-dimensional set of slabs \(\mathbb{T}_{V}=\{T\}\). For a slab \(T\), we use the bold-font \(\mathbf{T}\) to denote a subset of \(\widetilde{A}(1,n)\), so that each \(\ell\in\mathbf{T}\) satisfies \(\ell\cap B^{n}(0,1)\subset T\). Again, here we use the heuristic (23).
Consider the following integral
\[\int_{\mathbf{H}_{\delta}}(\sum_{V\in\mathcal{V}}\sum_{T\in\mathbb{T}_{V}}1_{ \mathbf{T}})^{2}. \tag{70}\]
First of all, by (68) we notice that each \(\ell_{\delta}\in\mathbf{H}_{\delta}\) is contained in \(\gtrsim(\log\delta^{-1})^{-2}\delta^{-t}\) different \(\mathbf{T}\). The volume of a \(\delta\)-ball in \(A(1,n)\) is \(\sim\delta^{2(n-1)}\), therefore the volume of \(\mathbf{H}_{\delta}\) is \(\sim\delta^{2(n-1)}\#\mathbf{H}\). We have the lower bound
\[\int_{\mathbf{H}_{\delta}}(\sum_{V\in\mathcal{V}}\sum_{T\in\mathbb{T}_{V}}1_{ \mathbf{T}})^{2}\gtrsim(\log\delta^{-1})^{-4}\delta^{2(n-1)}\#\mathbf{H} \delta^{-2t}\gtrsim_{\varepsilon}\delta^{\varepsilon}\delta^{2(n-1)}\delta^{- a}\delta^{-2t}. \tag{71}\]
Our next goal is to obtain an upper bound for the integral. Recalling Definition 26, since each \(\mathbf{T}\) is a rectangle in \(A_{\text{loc}}(1,n)\), we write \(\mathbf{T}=R_{0,\mathbf{T}}\times R_{1,\mathbf{T}}\). Here, \(R_{0,\mathbf{T}}\) is a rectangle in \(\mathbb{R}^{n-1}\times\{0\}\) of dimensions \(\underbrace{\delta\times\cdots\times\delta}_{k-1\text{ times}}\times \underbrace{1\times\cdots\times 1}_{n-k\text{ times}}\), \(R_{1,\mathbf{T}}\) is a translated copy of \(R_{0,\mathbf{T}}\) which lies in \(\mathbb{R}^{n-1}\times\{1\}\). We can choose a smooth bump function
\[\psi_{\mathbf{T}}(X,Y)=\psi_{R_{0,\mathbf{T}}}(X)\psi_{R_{1,\mathbf{T}}}(Y) \tag{72}\]
adapted to \(\mathbf{T}\) so that \(\text{supp}\widehat{\psi}_{\mathbf{T}}\subset\mathbf{T}^{*}=R_{0,\mathbf{T}} ^{*}\times R_{0,\mathbf{T}}^{*}\).
Define
\[f_{V}=\sum_{T\in\mathbb{T}_{V}}\psi_{\mathbf{T}}\text{ and }f=\sum_{V\in \mathcal{V}}f_{V}.\]
We will do the high-low decomposition. Let \(K=(\log\delta^{-1})^{O(1)}\), where \(O(1)\) is a large number to be determined later.
Since we can identify \(\widetilde{A}(1,n)\) and \(\mathbb{R}^{n-1}\times\mathbb{R}^{n-1}\) through the coordinates in (60) and (61), we will constantly jump between these two spaces in the latter discussion.
Choose a function \(\eta(X,Y)\) on \(\widetilde{A}(1,n)=\mathbb{R}^{n-1}\times\mathbb{R}^{n-1}\), such that \(\eta(X,Y)\) is a smooth bump function adapted to \(B^{2(n-1)}(0,(K\delta)^{-1})\). We have the following high-low decomposition for \(f_{V}\):
\[f_{V}=f_{V,\text{high}}+f_{V,\text{low}},\]
where \(\widehat{f}_{V,\text{low}}=\eta\widehat{f}_{V}\) and \(\widehat{f}_{V,\text{high}}=(1-\eta)\widehat{f}_{V}\). For each \((X,Y)\in\mathbf{H}_{\delta}\), we have
\[(\log\delta^{-1})^{-2}\#\mathcal{V}\lesssim f(X,Y)\leq|f_{\text{high}}(X,Y)|+ |f_{\text{low}}(X,Y)|. \tag{73}\]
We also let
\[f_{\text{high}}=\sum_{V\in\mathcal{V}}f_{V,\text{high}},\quad f_{\text{low}}= \sum_{V\in\mathcal{V}}f_{V,\text{low}}. \tag{74}\]
We will show that the high part dominates for \((X,Y)\in\mathbf{H}_{\delta}\), i.e., \(|f_{\text{high}}(X,Y)|\gtrsim(\log\delta^{-1})^{-2}\#\mathcal{V}\). It suffices to show
\[|f_{\text{low}}(X,Y)|\leq C^{-1}(\log\delta^{-1})^{-2}\#\mathcal{V}, \tag{75}\]
for a large constant \(C\).
Recall that \(f_{\text{low}}=\sum_{V\in\mathcal{V}}f_{V}*\eta^{\vee}\). By the definition of \(\eta\), we see that \(\eta^{\vee}\) is an \(L^{1}\)-normalized bump function essentially supported in \(B^{2(n-1)}(0,K\delta)\) that decays rapidly outside of it. Let \(\chi(X)\) be a positive function equal to \(1\) on \(B^{n-1}(0,K\delta)\) and decaying rapidly outside \(B^{n-1}(0,K\delta)\). We have
\[|\eta^{\vee}(X,Y)|\lesssim(K\delta)^{-2(n-1)}\chi(X)\chi(Y). \tag{76}\]
Together with (72), we have
\[\begin{split}|f_{\text{low}}(X,Y)|&\lesssim\sum_{V \in\mathcal{V}}\sum_{T\in\mathbb{T}_{V}}\psi_{\mathbf{T}}*\eta^{\vee}(X,Y)\\ &\lesssim(K\delta)^{-2(n-1)}\sum_{V\in\mathcal{V}}\sum_{T\in \mathbb{T}_{V}}\psi_{R_{0,\mathbf{T}}}*\chi(X)\psi_{R_{1,\mathbf{T}}}*\chi(Y) \end{split} \tag{77}\]
Let \(KR_{0,\mathbf{T}}\) be a tube of dimensions \(\underbrace{K\delta\times\cdots\times K\delta}_{k-1\text{ times}}\times \underbrace{1\times\cdots\times 1}_{n-k\text{ times}}\), which is the \(K\)-dilation of the short edges of \(R_{0,\mathbf{T}}\). Define \(KR_{1,\mathbf{T}}\) similarly. Let \(K\mathbf{T}=KR_{0,\mathbf{T}}\times KR_{1,\mathbf{T}}\). We see that \(\psi_{R_{0,\mathbf{T}}}*\chi(X)\) is morally a bump function at \(KR_{0,\mathbf{T}}\) with weight \((K\delta)^{n-1}K^{-(k-1)}\). Let us just ignore the rapidly decaying tail and write
\[\psi_{R_{0,\mathbf{T}}}*\chi(X)\lesssim(K\delta)^{n-1}K^{-(k-1)}1_{KR_{0, \mathbf{T}}}(X). \tag{78}\]
Similarly, we have
\[\psi_{R_{1,\mathbf{T}}}*\chi(Y)\lesssim(K\delta)^{n-1}K^{-(k-1)}1_{KR_{1, \mathbf{T}}}(Y). \tag{79}\]
Plugging back to (77), we obtain
\[|f_{\text{low}}(X,Y)|\lesssim K^{-2(k-1)}\sum_{V\in\mathcal{V}}\sum_{T\in \mathbb{T}_{V}}1_{K\mathbf{T}}(X,Y). \tag{80}\]
Fix \(\ell=\ell(X,Y)\in\mathbf{H}\), let \(W_{K\delta}\) be the \(K\delta\)-slab that is orthogonal to \(V\) at \(P_{V}(\ell_{K\delta})\). By the \(s\)-dimensional condition for \(\mathbb{T}_{V}\), we have
\[\sum_{T\in\mathbb{T}_{V}}1_{K\mathbf{T}}(X,Y)\lesssim\#\{T\in\mathbb{T}_{V}:T \subset W_{K\delta}\}\lesssim K^{s}. \tag{81}\]
Plugging (81) into (80), we get
\[|f_{\text{low}}(X,Y)|\lesssim K^{s-2(k-1)}\#\mathcal{V}. \tag{82}\]
Noting that \(s<2(k-1)\), we may choose \(K\sim(\log\delta^{-1})^{O(1)}\) for large \(O(1)\) so that (75) holds.
Next, we want to regroup the slabs in \(\mathbb{T}_{V}\). Note that each \(T\in\mathbb{T}_{V}\) is an \((n-k+1)\)-dimensional \(\delta\)-slab; by the transversality assumption, its intersections with \(\mathbb{R}^{n-1}\times\{0\}\) and \(\mathbb{R}^{n-1}\times\{1\}\) are two congruent \((n-k)\)-dimensional \(\delta\)-slabs. We define \(\mathbb{G}\) to be a maximal \(\delta\)-separated subset of \(G(n-k,n-1)\). We are going to define \(\mathbb{T}_{V}^{W}\), where \(W\) ranges over \(\mathbb{G}\). For each \(T\in\mathbb{T}_{V}\), we put \(T\) into \(\mathbb{T}_{V}^{W}\) if \(T\cap(\mathbb{R}^{n-1}\times\{0\})\) is parallel to \(W\) up to \(\delta\)-error. We obtain a partition
\[\mathbb{T}_{V}=\bigsqcup_{W\in\mathbb{G}}\mathbb{T}_{V}^{W}. \tag{83}\]
We have the following observation. For \(T_{1}\in\mathbb{T}_{V_{1}}^{W},T_{2}\in\mathbb{T}_{V_{2}}^{W}\), we have that if \(T_{1}\) and \(T_{2}\) are not comparable, then \(\mathbf{T}_{1}\cap\mathbf{T}_{2}=\emptyset\). The reason is that if \(T_{1}\) and \(T_{2}\) are not comparable, then either \(T_{1}\cap(\mathbb{R}^{n-1}\times\{0\})\) and \(T_{2}\cap(\mathbb{R}^{n-1}\times\{0\})\) are disjoint or \(T_{1}\cap(\mathbb{R}^{n-1}\times\{1\})\) and \(T_{2}\cap(\mathbb{R}^{n-1}\times\{1\})\) are disjoint, which means \(\mathbf{T}_{1}\cap\mathbf{T}_{2}=\emptyset\).
For \(T\in\mathbb{T}_{V}^{W}\), we have
\[\text{supp}(\widehat{\psi_{\mathbf{T}}})\subset W_{\delta}^{*}\times W_{\delta }^{*}. \tag{84}\]
Here, \(W_{\delta}^{*}\) is the dual rectangle of \(W_{\delta}\) in \(\mathbb{R}^{n-1}\). Therefore, we see that \(W_{\delta}^{*}\) has dimensions \(\underbrace{\delta^{-1}\times\cdots\times\delta^{-1}}_{k-1\text{ times}}\times \underbrace{1\times\cdots\times 1}_{n-k\text{ times}}\).
Now, we have
\[\int_{\mathbf{H}_{\delta}}(\sum_{V\in\mathcal{V}}\sum_{T\in\mathbb{T}_{V}}1_{ \mathbf{T}})^{2}\lesssim\int_{\mathbf{H}_{\delta}}(\sum_{V\in\mathcal{V}}\sum_ {T\in\mathbb{T}_{V}}\psi_{\mathbf{T}})^{2}\lesssim\int_{\mathbf{H}_{\delta}}| f_{\text{high}}|^{2}\lesssim\int_{\widetilde{A}(1,n)}|f_{\text{high}}|^{2}. \tag{85}\]
By Plancherel, it is further bounded by
\[\int_{\widetilde{A}(1,n)}|\sum_{V\in\mathcal{V}}\sum_{W\in\mathbb{G}}\widehat {f_{V}^{W}}(1-\eta)|^{2}=\int_{\widetilde{A}(1,n)}|\sum_{W\in\mathbb{G}}\sum_ {V\in\mathcal{V}}\widehat{f_{V}^{W}}(1-\eta)|^{2}. \tag{86}\]
We will estimate the overlap of \(\{\text{supp}(\sum_{V\in\mathcal{V}}\widehat{f}_{V}^{W}(1-\eta))\}_{W\in \mathbb{G}}\). We notice that
\[\text{supp}(\sum_{V\in\mathcal{V}}\widehat{f_{V}^{W}}(1-\eta))\subset\left(W_ {\delta}^{*}\times W_{\delta}^{*}\right)\setminus B^{2(n-1)}(0,(K\delta)^{-1}).\]
We pick \((\Xi_{1},\Xi_{2})\in\widetilde{A}(1,n)\) in the frequency space. We assume \((\Xi_{1},\Xi_{2})\) lies in some \(\text{supp}(\sum_{V\in\mathcal{V}}\widehat{f_{V}^{W}}(1-\eta))\). Therefore, we can assume without loss of generality that \(\Xi_{1}\notin B^{n-1}(0,\frac{1}{2}(K\delta)^{-1})\). For any \(W\in\mathbb{G}\), if \((\Xi_{1},\Xi_{2})\in\text{supp}(\sum_{V\in\mathcal{V}}\widehat{f_{V}^{W}}(1- \eta))\), then \(\Xi_{1}\in W_{\delta}^{*}\). Therefore, the overlap of \(\{\text{supp}(\sum_{V\in\mathcal{V}}\widehat{f_{V}^{W}}(1-\eta))\}_{W\in \mathbb{G}}\) at \((\Xi_{1},\Xi_{2})\) is bounded by
\[\sum_{W\in\mathbb{G}}1_{W_{\delta}^{*}}(\Xi_{1}), \tag{87}\]
Since \(\Xi_{1}\notin B^{n-1}(0,\frac{1}{2}(K\delta)^{-1})\), we can further bound (87) by the overlapping number of
\[\left\{W_{\delta}^{*}\setminus B^{n-1}(0,\frac{1}{2}(K\delta)^{-1})\right\}_{W \in\mathbb{G}}. \tag{88}\]
If we do a dilation by the factor \(\delta\), then \(W_{\delta}^{*}\) becomes \((W^{\perp})_{\delta}\). So we just need to bound the overlapping number of
\[\left\{(W^{\perp})_{\delta}\setminus B^{n-1}(0,\frac{1}{2}K^{-1})\right\}_{W \in\mathbb{G}}. \tag{89}\]
We observe that when \(W\) ranges over \(\mathbb{G}\), \(W^{\perp}\) will range over \(\delta\)-separated subset of \(G(k-1,n-1)\). By [1, Lemma 18], we see that the overlapping number of (89) is bounded by
\[K^{O(1)}\delta^{-\dim(G(k-2,n-2))} \tag{90}\]
The RHS of (86) is bounded by
\[\begin{split}& K^{O(1)}\delta^{-\dim(G(k-2,n-2))}\int_{\widetilde{A} (1,n)}\sum_{W\in\mathbb{G}}|\sum_{V\in\mathcal{V}}\widehat{f_{V}^{W}}(1-\eta)| ^{2}\\ \leq& K^{O(1)}\delta^{-\dim(G(k-2,n-2))}\int_{ \widetilde{A}(1,n)}\sum_{W\in\mathbb{G}}|\sum_{V\in\mathcal{V}}\widehat{f_{V}^ {W}}|^{2}\\ =& K^{O(1)}\delta^{-\dim(G(k-2,n-2))}\sum_{W\in \mathbb{G}}\int_{\widetilde{A}(1,n)}|\sum_{V\in\mathcal{V}}f_{V}^{W}|^{2}\end{split} \tag{91}\]
The last step is by Plancherel, returning to the physical space.
Now, for fixed \(W\in\mathbb{G}\), we estimate
\[\int_{\widetilde{A}(1,n)}|\sum_{V\in\mathcal{V}}f_{V}^{W}|^{2}=\int_{ \widetilde{A}(1,n)}|\sum_{V\in\mathcal{V}}\sum_{T\in\mathbb{T}_{V}^{W}}\psi_{ \mathbf{T}}|^{2}. \tag{92}\]
We may ignore the rapidly decaying tail of \(\psi_{\mathbf{T}}\) and think of it as \(1_{\mathbf{T}}\). To estimate the RHS of (92), we will estimate the overlapping number of \(\{\sum_{T\in\mathbb{T}_{V}^{W}}1_{\mathbf{T}}\}_{V\in\mathcal{V}}\).
By an observation we addressed in the paragraph below (83), for any two tubes \(T_{1}\in\mathbb{T}_{V_{1}}^{W},T_{2}\in\mathbb{T}_{V_{2}}^{W}\), we have either \(T_{1}\) and \(T_{2}\) are comparable, or \(\mathbf{T}_{1}\) and \(\mathbf{T}_{2}\) are disjoint. Our task is as follows. For a given \(T\), how many \(V\in\mathcal{V}\) can there be so that \(T\) is comparable to some \(T^{\prime}\in\mathbb{T}_{V}^{W}\)? Noting that if \(T\) is comparable to some element in \(\mathbb{T}_{V}^{W}\), then \(T\) must be morally orthogonal to \(V\) at some \(\delta\)-tube. Finding such \(V\) is equivalent to finding \(\delta\)-separated lines in \(G(1,T)\). Therefore, for fixed \(T\), the number of such \(V\in\mathcal{V}\) is \(\lesssim\delta^{-(\dim T-1)}=\delta^{-(n-k)}\). As a result, we obtain
\[\int_{\widetilde{A}(1,n)}|\sum_{V\in\mathcal{V}}\sum_{T\in\mathbb{T}_{V}^{W}} 1_{\mathbf{T}}|^{2}\lesssim\delta^{-(n-k)}\int_{\widetilde{A}(1,n)}\sum_{V \in\mathcal{V}}\sum_{T\in\mathbb{T}_{V}^{W}}1_{\mathbf{T}}. \tag{93}\]
Combining (85),(86),(91),(92) and (93), we get the upper bound
\[\int_{\mathbf{H}_{\delta}}(\sum_{V\in\mathcal{V}}\sum_{T\in\mathbb{T}_{V}}1_{\mathbf{T}})^{2}\lesssim K^{O(1)}\delta^{-(k-2)(n-k)}\delta^{-(n-k)}\int_{\widetilde{A}(1,n)}\sum_{V\in\mathcal{V}}\sum_{T\in\mathbb{T}_{V}}1_{\mathbf{T}}. \tag{94}\]
We notice that for any \((n-k+1)\)-dimensional slab \(S\), the dimension of
\[\{\ell\in\widetilde{A}(1,n):\ell\cap B^{n}(0,1)\subset S\}\]
is \(2(n-k)\). If \(S\) is the core of the \(\delta\)-slab \(T\), or in other words \(T=S_{\delta}\), then
\[\mathbf{T}=\{\ell\in\widetilde{A}(1,n):\ell\cap B^{n}(0,1)\subset T\}\]
is roughly the \(\delta\)-neighborhood of \(\{\ell\in\widetilde{A}(1,n):\ell\cap B^{n}(0,1)\subset S\}\) in \(\widetilde{A}(1,n)\), which has measure \(\sim\delta^{2(n-1)-2(n-k)}\). We also note that \(K\sim(\log\delta^{-1})^{O(1)}\). We can bound the RHS of (94) by
\[\lesssim_{\varepsilon}\delta^{-\varepsilon}\delta^{-(k-2)(n-k)}\delta^{-(n-k) }\delta^{-s-t}\delta^{2(n-1)-2(n-k)} \tag{95}\]
In summary, we obtain
\[\int_{\mathbf{H}_{\delta}}(\sum_{V\in\mathcal{V}}\sum_{T\in\mathbb{T}_{V}}1_{ \mathbf{T}})^{2}\lesssim_{\varepsilon}\delta^{-\varepsilon}\delta^{-(k-2)(n-k )}\delta^{-(n-k)}\delta^{-s-t}\delta^{2(n-1)-2(n-k)}. \tag{96}\]
Comparing with the lower bound
\[\int_{\mathbf{H}_{\delta}}(\sum_{V\in\mathcal{V}}\sum_{T\in\mathbb{T}_{V}}1_{ \mathbf{T}})^{2}\gtrsim\delta^{2(n-1)}\#\mathbf{H}\delta^{-2t}\sim\delta^{2(n-1 )}\delta^{-a}\delta^{-2t}, \tag{97}\]
we obtain
\[t\leq k(n-k)+s-a+(n-k). \tag{98}\]
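For the reader's convenience, the exponent count behind (98) is as follows: since \(0<\delta<1\), comparing the exponents in (97) and (96) gives

\[2(n-1)-a-2t\geq-\varepsilon-(k-2)(n-k)-(n-k)-s-t+2(n-1)-2(n-k),\]

which rearranges to \(t\leq(k-2)(n-k)+3(n-k)+s-a+\varepsilon=k(n-k)+(n-k)+s-a+\varepsilon\). Since \(\varepsilon>0\) may be taken arbitrarily small, (98) follows.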
## 6. The projection problem for \(A(l,n)\)
It is quite natural to further think about the same projection problem for \(A(l,n)\). We formally state the problem.
Fix integers \(0<l<k<n\). Let \(V\in G(k,n)\). Note that for \(L\in A(l,n)\), we have \(P_{V}(L)\in A(j,V)\) for some \(0\leq j\leq l\). We can define
\[\pi_{l,V}:A(l,n)\to\bigsqcup_{j=0}^{l}A(j,V)\]
\[L\mapsto P_{V}(L).\]
We can ask the same Marstrand-type projection problem. For \(0<a<(l+1)(n-l)=\dim(A(l,n))\), what is the optimal number \(s(n,k,l,a)\) such that the following is true? Let \(\mathbf{A}\subset A(l,n)\) with \(\dim(\mathbf{A})=a\). Then we have
\[\dim(\pi_{l,V}(\mathbf{A}))=s(n,k,l,a),\text{ for a.e. }V\in G(k,n). \tag{99}\]
For simplicity, we assume \(n,k,l\) are fixed. So we write \(\pi_{V}\) for \(\pi_{l,V}\).
**Definition 30**.: _Fix \(0\leq l<k<n\). For any \(0<a<\dim(A(l,n))\), define_
\[S(a):=\inf_{\mathbf{A}\subset A(l,n),\dim(\mathbf{A})=a}\operatorname*{ess\, sup}_{V\in G(k,n)}\dim(\pi_{V}(\mathbf{A})). \tag{100}\]
_Here, we require \(\mathbf{A}\) to be a Borel set to avoid some measurability issue. We also remark that \(S(a)=S_{l,k,n}(a)\) should also depend on \(l,k,n\), but we just omit them from the notation for simplicity._
A reasonable conjecture would be:
**Conjecture 31**.: _For \(j=0,1,\ldots,l\), the value of \(S(a)\) is given by the following._
\[\begin{split} S(a)=a-(l-j)(n-k)\quad a\in[(l-j)(n-l),(l-j)(n-l)+ k-l].\\ S(a)=(l-j+1)(k-l)\quad a\in[(l-j)(n-l)+k-l,(l-j+1)(n-l)].\end{split} \tag{101}\]
We are able to show the upper bound of \(S(a)\) by constructing examples as follows.
**Proposition 32**.: _For \(j=0,1,\ldots,l\), we have the following upper bounds of \(S(a)\)._
\[\begin{split} S(a)\leq a-(l-j)(n-k)\quad a\in[(l-j)(n-l),(l-j)(n -l)+k-l].\\ S(a)\leq(l-j+1)(k-l)\quad a\in[(l-j)(n-l)+k-l,(l-j+1)(n-l)]. \end{split} \tag{102}\]
Proof.: Fix \(j\in\{0,\ldots,l\}\). Let \(V_{j}=\mathbb{R}^{j}\times\{0\}^{n-j}\), \(V_{l}=\mathbb{R}^{l}\times\{0\}^{n-l}\), \(V_{k}=\mathbb{R}^{k}\times\{0\}^{n-k}\). We have \(V_{j}\subset V_{l}\subset V_{k}\subset\mathbb{R}^{n}\). We introduce a new notation. For \(j<l\) and \(V\in G(j,n)\), define
\[\operatorname{Bush}(l,V):=\{W\in G(l,n):W\supset V\}, \tag{103}\]
which is the set of \(l\)-subspaces that contain \(V\). We call \(V\) the _stem_ of the bush. It is not hard to see that for \(V\in G(j,n)\),
\[\dim(\operatorname{Bush}(l,V))=(l-j)(n-l). \tag{104}\]
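Indeed (a standard count, included for clarity), an element \(W\in\operatorname{Bush}(l,V)\) is determined by the \((l-j)\)-dimensional subspace \(W/V\) of the \((n-j)\)-dimensional quotient \(\mathbb{R}^{n}/V\), so

\[\dim(\operatorname{Bush}(l,V))=\dim G(l-j,n-j)=(l-j)\big{(}(n-j)-(l-j)\big{)}=(l-j)(n-l).\]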
We first consider the case \(a=(l-j)(n-l)+b\) where \(0\leq b\leq k-l\). We choose \(B\subset V_{l}^{\perp}\), so that \(\dim(B)=b\). Note that this is allowable since \(\dim(V_{l}^{\perp})=n-l\geq b\). Next, we choose
\[\mathbf{A}=\bigcup_{v\in B}\operatorname{Bush}(l,V_{j}+v). \tag{105}\]
One way to think about \(\mathbf{A}\) is as follows. We first locate the set of stems \(\{V_{j}+v\}_{v\in B}\), and then enrich each stem to become a bush \(\operatorname{Bush}(l,V_{j}+v)\). \(\mathbf{A}\) is the union of these bushes.
We compute the dimension of \(\mathbf{A}\). The idea is that we have a \(b\)-dimensional set of bushes and each bush has dimension \((l-j)(n-l)\), so we have
\[\dim\mathbf{A}\leq\dim B+\dim(\operatorname{Bush}(l,V_{j}))=(l-j)(n-l)+b.\]
The equality holds if there is not too much overlap between different bushes. We handle it in the following way. Consider
\[\operatorname{Bush}^{\prime}(l,V_{j}+v):=\{W\in\operatorname{Bush}(l,V_{j}+v): W\cap V_{l}^{\perp}=\{0\}\}, \tag{106}\]
which are the \(l\)-planes in the bush that are transverse to \(V_{l}^{\perp}\). It is not hard to see
\[\dim(\operatorname{Bush}^{\prime}(l,V_{j}+v))=\dim(\operatorname{Bush}(l,V_{j }+v))=(l-j)(n-l).\]
We also have that for \(v_{1}\neq v_{2}\in B\),
\[\operatorname{Bush}^{\prime}(l,V_{j}+v_{1})\cap\operatorname{Bush}^{\prime}(l,V_{j}+v_{2})=\emptyset. \tag{107}\]
We prove it. Suppose there exists \(W\in\operatorname{Bush}^{\prime}(l,V_{j}+v_{1})\cap\operatorname{Bush}^{ \prime}(l,V_{j}+v_{2})\). Then \(W\supset V_{j}+v_{1},V_{j}+v_{2}\) which implies \(W\supset v_{1}-v_{2}\). However \(v_{1}-v_{2}\in V_{l}^{\perp}\) which contradicts \(W\cap V_{l}^{\perp}=\{0\}\).
Now we have
\[\mathbf{A}\supset\bigsqcup_{v\in B}\operatorname{Bush}^{\prime}(l,V_{j}+v), \tag{108}\]
so \(\dim(\mathbf{A})\geq\dim B+\dim(\operatorname{Bush}^{\prime}(l,V_{j}+v))=(l-j )(n-l)+b\). Therefore, we have
\[\dim(\mathbf{A})=(l-j)(n-l)+b=a. \tag{109}\]
Next, we show that for generic \(V\in G(k,n)\),
\[\dim(\pi_{V}(\mathbf{A}))\leq(l-j)(k-l)+b=a-(l-j)(n-k). \tag{110}\]
We need the following notation. For subspaces \(W_{1}\subset W_{2}\) with \(\dim W_{1}\leq i\leq\dim W_{2}\), define
\[\operatorname{Bush}(i,W_{1},W_{2}):=\{W\in G(i,W_{2}):W\supset W_{1}\}. \tag{111}\]
One can check that \(\operatorname{Bush}(l,W_{1},\mathbb{R}^{n})=\operatorname{Bush}(l,W_{1})\) and \(\operatorname{Bush}(l,\{0\},W_{2})=G(l,W_{2})\).
We prove (110). If \(V\in G(k,n)\) satisfies \(V_{j}\cap V^{\perp}=\{0\}\) (or in other words, \(\pi_{V}(V_{j})\) is \(j\)-dimensional), then for each bush \(\operatorname{Bush}(l,V_{j}+v)\) in \(\mathbf{A}\), we have
\[\pi_{V}(\operatorname{Bush}(l,V_{j}+v))=\operatorname{Bush}(l,\pi_{V}(V_{j}+v ),V)\sqcup\bigsqcup_{j\leq i\leq l-1}\operatorname{Bush}(i,\pi_{V}(V_{j}+v),V). \tag{112}\]
In other words, \(\pi_{V}\) projects the bush \(\operatorname{Bush}(l,V_{j}+v)\) to the set of planes of dimension at most \(l\) in \(V\) that contain \(\pi_{V}(V_{j}+v)\). Since the first term on the right hand side of (112) dominates the Hausdorff dimension, we have
\[\dim(\pi_{V}(\operatorname{Bush}(l,V_{j}+v)))=\dim(\operatorname{Bush}(l, \mathbb{R}^{j},\mathbb{R}^{k}))=(l-j)(k-l). \tag{113}\]
Therefore,
\[\dim(\pi_{V}(\mathbf{A}))\leq\dim B+(l-j)(k-l)=b+(l-j)(k-l), \tag{114}\]
which finishes the proof of (110), and hence the first part of (102).
We prove the second part of (102). We just need to show it for \(a=(l-j+1)(n-l)\). The idea is similar to (105), but here we choose \(B=V_{l}^{\perp}\). We construct
\[\mathbf{A}=\bigcup_{v\in V_{l}^{\perp}}\mathrm{Bush}(l,V_{j}+v). \tag{115}\]
By the same reasoning, we have
\[\dim(\mathbf{A})=\dim(V_{l}^{\perp})+(l-j)(n-l)=(l-j+1)(n-l).\]
On the other hand, for generic \(V\in G(k,n)\), \(\pi_{V}(\mathbf{A})\) consists of a \((k-l)\)-dimensional set of bushes, since the stems of these projected bushes are parallel \(l\)-planes in \(V\). Also, the dimension of each bush is \((l-j)(k-l)\). Therefore, we have
\[\dim(\pi_{V}(\mathbf{A}))\leq(k-l)+(l-j)(k-l)=(l-j+1)(k-l). \tag{116}\]
This finishes the second part of (102).
However, with the current method in the paper, we are not able to show the lower bound of \(S(a)\) for the whole range of \(a\). We can also prove Falconer-type and Kaufman-type exceptional set estimates and obtain some partial results. We state them without proof.
**Theorem 33**.: _For \(j=0\) or \(j=l\), we have the following lower bounds of \(S(a)\)._
\[S(a)\geq a-(l-j)(n-k)\quad a\in[(l-j)(n-l),(l-j)(n-l)+k-l]. \tag{117}\] \[S(a)\geq(l-j+1)(k-l)\quad a\in[(l-j)(n-l)+k-l,(l-j+1)(n-l)]. \tag{118}\]
\(j=0\) corresponds to the Falconer-type estimate, while \(j=l\) corresponds to the Kaufman-type estimate.
|
2305.13484 | Flover: A Temporal Fusion Framework for Efficient Autoregressive Model
Parallel Inference | Autoregressive models, despite their commendable performance in a myriad of
generative tasks, face challenges stemming from their inherently sequential
structure. Inference on these models, by design, harnesses a temporal
dependency, where the current token's probability distribution is conditioned
on preceding tokens. This inherent characteristic severely impedes
computational efficiency during inference as a typical inference request can
require more than thousands of tokens, where generating each token requires a
load of entire model weights, making the inference more memory-bound. The large
overhead becomes profound in real deployment where requests arrive randomly,
necessitating various generation lengths. Existing solutions, such as dynamic
batching and concurrent instances, introduce significant response delays and
bandwidth contention, falling short of achieving optimal latency and
throughput. To address these shortcomings, we propose Flover -- a temporal
fusion framework for efficiently inferring multiple requests in parallel. We
deconstruct the general generation pipeline into pre-processing and token
generation, and equip the framework with a dedicated work scheduler for fusing
the generation process temporally across all requests. By orchestrating the
token-level parallelism, Flover exhibits optimal hardware efficiency and
significantly spares the system resources. By further employing a fast buffer
reordering algorithm that allows memory eviction of finished tasks, it brings
over 11x inference speedup on GPT and 16x on LLAMA compared to the cutting-edge
solutions provided by NVIDIA FasterTransformer. Crucially, by leveraging the
advanced tensor parallel technique, Flover proves efficacious across diverse
computational landscapes, from single-GPU setups to distributed scenarios,
thereby offering robust performance optimization that adapts to variable use
cases. | Jinghan Yao, Nawras Alnaasan, Tian Chen, Aamir Shafi, Hari Subramoni, Dhabaleswar K. Panda | 2023-05-22T20:58:09Z | http://arxiv.org/abs/2305.13484v3 | # Flover: A Temporal Fusion Framework for Efficient Autoregressive Model Parallel Inference
###### Abstract
In the rapidly evolving field of deep learning, the performance of model inference has become a pivotal aspect as models become more complex and are deployed in diverse applications. Among these, autoregressive models stand out due to their state-of-the-art performance in numerous generative tasks. These models, by design, harness a temporal dependency structure, where the current token's probability distribution is conditioned on preceding tokens. This inherently sequential characteristic, however, adheres to the Markov Chain assumption and lacks temporal parallelism, which poses unique challenges. Particularly in industrial contexts where inference requests, following a Poisson time distribution, necessitate diverse response lengths, this absence of parallelism is more profound. Existing solutions, such as dynamic batching and concurrent model instances, nevertheless, come with severe overheads and a lack of flexibility; these coarse-grained methods fall short of achieving optimal latency and throughput. To address these shortcomings, we propose Flover -- a temporal fusion framework for efficient inference in autoregressive models, which eliminates the need for heuristic settings and applies to a wide range of inference scenarios. By providing more fine-grained parallelism on the temporality of requests and employing an efficient memory shuffle algorithm, Flover achieves up to 11x faster inference on GPT models compared to the cutting-edge solutions provided by NVIDIA Triton FasterTransformer. Crucially, by leveraging the advanced tensor parallel technique, Flover proves efficacious across diverse computational landscapes, from single-GPU setups to multi-node scenarios, thereby offering robust performance optimization that transcends hardware boundaries.
Autoregressive model, Inference frameworks, Temporal dependencies, Distributed inference
## I Introduction
Large-scale artificial intelligence (AI) models, especially autoregressive ones, are helping make significant strides in several important areas such as Natural Language Processing (NLP), time-series forecasting, and signal processing. Autoregressive models, including notable Large Language Models (LLMs) like the Generative Pretrained Transformer (GPT) series [2, 3, 4, 18, 19, 25], stand out for their ability to predict successive outputs based on preceding ones and the entire input sequence. This tendency to form temporal dependencies among outputs is particularly pronounced in autoregressive models.
The training of these autoregressive models is a computationally demanding process due to the sheer volume of parameters involved, the extensive sequence lengths, and the requirement of techniques such as beam search and top-k sampling. However, it's important to note that training is largely a one-time effort, often done in-house before the model is made available to the public. A technique known as sequence masking for parallelization has proved instrumental in mitigating this challenge. By leveraging the available ground truth for all output sequences in the training dataset, sequence masking enables the simultaneous processing of different parts of an input sequence, thereby considerably accelerating the training process.
While the optimization of the training phase is crucial, the real-time user experience predominantly hinges on the efficiency of the inference phase. This phase, however, encounters unique challenges due to the strict temporal dependency, a characteristic ingrained by the principles of the Markov chain [13]. This dependency necessitates that each output is generated sequentially, based on its predecessors, which precludes the use of sequence masking for parallelization due to the absence of known ground truth during the inference phase. This temporal data dependency significantly curtails potential parallelism, thus presenting substantial challenges for the efficient execution of the inference process. Therefore, while the training phase can be expedited via sequence masking, optimizing the inference phase, which directly impacts user experience, requires a more tailored approach.
### _Problem Statement_
With the rapid advancement of AI, inference servers routinely grapple with the processing of multiple concurrent inference requests from autoregressive models. These models,
bound by strict temporal dependencies, add a layer of complexity in maintaining high throughput and low latency--a critical requirement for any real-time, user-facing application. The intrinsically sequential nature of these models inherently restricts opportunities for parallel execution during the inference phase, further compounding the challenge.
Current methodologies such as dynamic batching and concurrent model instances, employed by inference frameworks like Microsoft DeepSpeed [1, 20] and NVIDIA Triton Inference Server [6], have demonstrated effectiveness in optimizing non-autoregressive models. However, these methodologies grapple with complexities when confronted with the unique sequential dependencies of autoregressive models. As a result, a pressing need in today's AI landscape is the development of robust strategies capable of parallelizing these temporally dependent inference requests. This would effectively enhance system efficiency, improve throughput, and reduce response time, ultimately leading to a better user experience and wider applicability of these advanced AI models in real-world scenarios.
### _Motivation_
The inherent constraints on parallelism during the inference phase of autoregressive models pose significant performance bottlenecks. These are particularly prominent in real-time applications and scenarios where models must be deployed on resource-limited devices. The issue becomes more pronounced in the context of large-scale autoregressive models, where the sheer volume of data and computations involved exacerbates the challenge. Addressing these challenges is not a matter of academic interest alone. The efficiency of the inference process directly impacts user experience, determining the responsiveness of AI systems in real-world applications. Consequently, there is an urgent need to enhance the efficiency of the inference process in autoregressive models, a necessity recognized by the AI community. This focus is paving the way for the next wave of advancements in AI, aimed at making these powerful models more accessible and efficient in real-world applications. The urgency of this problem and the potential for significant improvements in AI system performance underline the motivation for this work.
### _Contributions_
In this work, we propose **Flover**, a temporal fusion framework tailored to the context of inference in autoregressive models. The main contribution of Flover is to promptly process incoming requests, eliminating the need for batching or time window allocation, while not triggering the launch of redundant model instances. Flover maintains only one main computing stream throughout the lifecycle of inference, largely reducing the overhead of numerous separate kernel calls and of scheduling redundant collective communicators.
The paper makes the following contributions:
1. We introduce a novel _temporal fusion framework_ for propelling autoregressive model inference by leveraging the temporal structure of generative inference tasks, delivering superior and more fine-grained parallelism than all current solutions.
2. We thoroughly analyze multiple real inference scenarios and compare our solution with the cutting-edge NVIDIA Triton FasterTransformer backend [6, 17] in terms of latency and throughput using the GPT-J [25] 6B model.
3. Our framework delivers over **3.5x** speedup when requests arrive with constant time intervals; up to **11x** speedup when requests' arrival conforms to the Poisson process; and over **6.8x** speedup when requests largely vary in their sequence lengths.
4. We design an efficient memory shuffle algorithm that can significantly improve computing efficiency and reduce communication message sizes.
5. To the best of our knowledge, Flover is a breakthrough in the workflow of autoregressive model inference. It is not restricted by hardware resources, delivering the above performance gains on single-GPU inference, and it seamlessly works with the advanced tensor parallel [22] technique to accelerate distributed inference.
For the rest of this paper, we will first provide the necessary background on the paradigm of autoregressive models and their temporal dependency properties. We will then compare existing solutions for accelerating inference and identify their drawbacks when applied to autoregressive models. Then we will demonstrate how Flover overcomes these issues and why it is superior for handling complex inference scenarios. In the experiment part, we conduct thorough ablations on single-GPU cases and extend to distributed scenarios where we set the tensor parallel size up to eight to show how Flover is compatible with these advanced parallel techniques. The code will be available at [https://github.com/YJHMITWEB/Flover](https://github.com/YJHMITWEB/Flover).
## II Background
### _Temporal dependency_
Temporal dependency is a fundamental concept in data science and computer science, wherein the value or state of a certain data point or variable is influenced by the values or states of other data points at prior time steps. This principle is predominantly observed in time-series data, sequential data, or any dataset where the sequence of observations is significant. One of the primary mathematical constructs that captures temporal dependencies is the Markov Chain [13]. The presence of temporal dependencies introduces significant challenges when attempting to parallelize computations. This is because the order and sequence of events matter, and an output at time \(t_{i}\) can only be computed after the output at time \(t_{i-1}\) is available. This inherent sequentiality prevents us from using many traditional parallelization strategies that assume computations can be performed independently.
### _Non-autoregressive v.s. Autoregressive models_
Deep learning architectures encompass a diverse array of models, each with its unique characteristics and applicability. Predominantly, these models can be broadly categorized into
non-autoregressive and autoregressive types, distinguished by their distinct operational mechanisms.
Non-autoregressive models, such as ResNet [10], Inception [23], Vision Transformer [5, 10, 12, 26] for image classification, YOLO [21], FCOS [24] for object detection, and BERT [8] for language understanding, are feed-forward in design, processing each input independently through a series of transformations. For instance, classification models compute class probabilities, while object detection models predict bounding boxes and class probabilities for detected objects in an image. This design implies that an input undergoes a series of transformations to produce an output, and the absence of temporal dependencies within these models means that each input is processed autonomously, without requiring retention or reference to any preceding input.
In contrast, autoregressive models constitute a distinct class of deep learning models, differentiated by their inherent temporal dependencies. Unlike their non-autoregressive counterparts, the output at each step within these models is influenced by the preceding steps. This trait makes them particularly suitable for tasks such as natural language processing and time series analysis, where the sequential order of data points is of paramount importance. However, the sequential nature of these models introduces unique challenges pertaining to latency and computational resource utilization during inference, which are the primary focus of this work.
## III Challenges and Limitations of Existing Approaches
In the quest for efficient inference, general solutions such as **dynamic batching** and **concurrent model instances** as shown in Fig 1 have been integrated into frameworks like Microsoft DeepSpeed [1, 20] and NVIDIA Triton Inference Server [6].
**Dynamic batching** allows the server to wait within a time window \(\tau\), which is pre-defined according to the estimated volume of requests. Requests that arrive within the \(i_{th}\) time window \(\tau_{i}\) will be packed together along the batch dimension. When the time window is reached or the maximum requests are presented, the packed batch \(b_{i}\) will be passed into the inference model as a whole for more efficient processing. Since in inference scenarios, a single request usually has a much smaller batch size compared to training, packing requests into a larger batch will lead to higher GPU utilization and throughput. However, determining the time window is heuristic and inflexible. For example, the first request that arrives at the beginning of a time window will have to wait for the whole window until it can be processed; this could lead to severe latency overhead and also prevent possible overlap of computation. Even worse, as shown in Fig 1, request 3 arrives at 510 ms, thus it has to wait until the currently running batch finishes. In autoregressive models, this will significantly increase the response time.
**Concurrent model instances** allows the immediate launching of a new inference instance once a request arrives, and the instance will only infer this request. Specifically, the inference server first loads the model weight into the GPU memory. Then, for each request it receives, a new thread will be spawned by the server and it will create a new instance of the inference model. As more and more requests arrive, the server will continuously spawn new threads to handle each of them separately. Notice that all instances will share the same model weight that was pre-loaded in the global memory, so that the overall memory consumption is still reasonable. However,
Fig. 1: Workflow comparison on dynamic batching, concurrent model instances, and our Temporal fusion. Time stamps on the line give an example of different arrival times of requests. For dynamic batching, we assume the time window is 500ms, though this may vary in real cases. In this example, each inference request asks for 300 iterations.
Fig. 2: Using NVIDIA Triton Perf_analyzer to evaluate the efficiency of concurrent model instances, where per request uses a dedicated model instance. Blue line denotes the ideal latency with no overhead.
this method can introduce severe overhead because each model instance consumes a massive amount of memory bandwidth during computing; when multiple instances run concurrently, they compete for the same resources, draining the bandwidth and creating resource contention, which leads to severe performance degradation. This is shown in Fig 2 (due to limited support, we only show the trend using a simple Inception [23] network) and in our experiments later.
In the context of autoregressive models, the considerable model size and the hundreds or even thousands of required inference iterations intensify the inherent drawbacks of dynamic batching and concurrent model instances. The waiting time for dynamic batching amplifies, while the resource contention for concurrent instances escalates, thereby heightening latency and reducing resource efficiency. These impediments accumulate over the course of many iterations, significantly hampering the overall efficiency of the inference process.
**Insights** from above are two-fold. First, since the time window is an empirical concept that lacks flexibility and introduces latency overhead, the ideal inference framework should be able to proceed with the incoming request instantaneously. Second, only one model instance should be created therefore it can utilize all the memory bandwidth when loading model weights from global memory, and this model instance will perform parallel computation on all requests.
## IV Preliminaries
To schematically demonstrate our method, let's first define what a request is in autoregressive model inference. Consider the GPT [2, 3, 4, 18, 19, 25] models, a request \(R_{i}\) has the following domains:
* Batch size: A positive integer \(n\), e.g. 1
* Input words: \(n\) lists of words, e.g. ['How', 'can', 'AI', 'help', 'humans', '?']
* Max output length: A positive integer, e.g. 300
The above request indicates that, for the question "How can AI help humans?", the inference server is allowed to generate a response of at most 300 words. For a specific autoregressive model that runs on the inference server, different requests have various input words, and the inference model will generate each answer word by word. According to the model specification, the inference process might terminate early if it outputs an end word, such as "$", denoting the completion of the answer. Otherwise, if it reaches the maximum length (e.g. 300), the inference process is forced to stop.
Next, we will analyze two real inference scenarios, where requests' arrivals follow a constant time interval \(\tau\) or the Poisson process [11], characterized by independence and stationarity. The memorylessness property of the Poisson process [11] aligns with the nature of independent request arrivals, while the burstiness and sparsity observed in deep learning systems can be accommodated within this paradigm. If the arrival of requests conforms to the Poisson process \(P(k)=\frac{e^{-\lambda}\lambda^{k}}{k!}\), then arrivals occur randomly and independently over time. \(\lambda\) denotes the expected number of arrivals that occur in a unit interval of time, and \(P(k)\) represents the probability of \(k\) requests arriving within a unit time interval. The time interval \(x\) between two arrivals can then be modeled by the exponential distribution \(f(x)=\lambda e^{-\lambda x},x\geq 0\).
Utilizing both paradigms enables us to gain insights into the request arrival patterns, facilitating efficient resource allocation and capacity planning within our design.
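To make this model concrete, the following minimal sketch (our own illustration, not part of Flover) simulates request arrivals by summing i.i.d. exponential gaps. Note that the experiments in Section VI quote \(\lambda\) as the expected interval between requests, so the rate handed to the distribution below is its reciprocal; the function name, seed, and defaults are assumptions.

```
#include <iostream>
#include <random>
#include <vector>

// Sketch: simulate request arrival times (in ms) as a Poisson process by
// summing i.i.d. exponential inter-arrival gaps with mean `mean_ms`.
std::vector<double> sample_arrival_times(int num_requests, double mean_ms,
                                         unsigned seed = 42) {
    std::mt19937 gen(seed);
    // std::exponential_distribution is parameterized by the rate 1/mean.
    std::exponential_distribution<double> gap(1.0 / mean_ms);
    std::vector<double> arrivals;
    double t = 0.0;
    for (int i = 0; i < num_requests; ++i) {
        t += gap(gen);           // exponential gap => memoryless arrivals
        arrivals.push_back(t);   // absolute arrival time of request i
    }
    return arrivals;
}

int main() {
    // e.g. 32 requests with a 500 ms expected gap, as in the experiments.
    for (double t : sample_arrival_times(32, 500.0))
        std::cout << t << " ms\n";
}
```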
## V Framework design
With all the insights we have, we propose Flover, a temporal fusion framework for propelling inference on autoregressive models. First, we make the following clarifications. For every request, we consider it to have five phases, namely: 1) being received by the inference server; 2) being pre-processed; 3) being ready for computing; 4) running and generating; 5) finishing and evicting from the server. We refer to these phases as the lifecycle of the request.
Fig 1 shows the abstract workflow of Flover, and Fig 3 further shows more details. First note that Fig 3 (a) shows
Fig. 3: Schematic illustration of the proposed Temporal Fusion Framework for auto-regressive models. The horizontal axis is the timeline, the vertical axis depicts the lifecycle of every inference request. To better present the overall process, we assume that request 0 reaches a max output length of 300 words. The request queue is a FIFO queue, gray blocks denote requests that have already been popped, and the dark block represents request that is currently in the queue. Iter i: Currently running \(i_{th}\) iteration in the inference stream. req k: Currently generating output for \(k_{th}\) request.
the autoregressive process. Then, we start with the upper-left part of the figure. When Flover is launched, it first spawns a dedicated thread \(T_{rq}\) for receiving requests \(R_{i},i\in\mathbb{N}\) and placing them into the request queue \(Q_{r}\).
### _Request pre-processing_
As discussed, inference requests arrive randomly; therefore, in this phase, Flover allows request-specific pre-processing threads to be created dynamically and instantaneously once \(Q_{r}\) is not empty. Each pre-processing thread \(T^{i}_{pp}\) handles one request \(R_{i}\) popped from \(Q_{r}\). During pre-processing, \(T^{i}_{pp}\) constructs the necessary input and output data structures from the original request \(R_{i}\). For example, in the GPT families [2, 3, 4, 18, 19, 25], pre-processing includes passing the input tokens of \(R_{i}\) through the model once to create a context \(C_{i}\) for later inference. Compared to the inference process, which runs the model repeatedly, pre-processing is lightweight and can therefore be handled by multithreading. Finally, once \(R_{i}\) is done preparing, \(T^{i}_{pp}\) adds its runtime information \(I_{i}\) to the ready-for-fusion queue \(Q_{f}\). \(I_{i}\) contains the memory_offset, tensor_size, device_type with respect to every input, output, and intermediate tensor of \(R_{i}\), and it also contains variables like max_output_length, current_iteration, etc., which describe the runtime information. The current_iteration field is set to zero here, denoting that the main inference stream has not touched this request yet.
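Before moving on, a minimal sketch of what the runtime record \(I_{i}\) might look like is given below; the field names mirror those just listed, while the struct layout and concrete types are our own assumptions rather than Flover's actual definitions.

```
#include <cstddef>
#include <string>
#include <vector>

// Sketch of one tensor entry inside the runtime record I_i.
struct TensorInfo {
    std::size_t memory_offset;  // byte offset within the fused GPU buffer
    std::size_t tensor_size;    // buffer size in bytes
    std::string device_type;    // e.g. "gpu" or "cpu"
};

// Sketch of the per-request runtime information I_i pushed into Q_f.
struct RequestInfo {
    std::vector<TensorInfo> tensors;  // input, output, intermediate tensors
    int max_output_length = 0;        // e.g. 300
    int current_iteration = 0;        // 0: not yet touched by the main stream
};
```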
### _Temporal fusion_
For autoregressive models, requests that run in the model may have different temporal steps. Consider request 0 and request 1 in Fig 3. When the first request is captured by the main inference stream, it will immediately start to generate output tokens. Meanwhile, request 1 arrives and is pre-processed. Notice that request 1 is ready for fusion when the main inference stream is still in the middle of processing iteration 2, thus in this circumstance, request 1 will wait until the current iteration finishes. As shown in the figure, at the beginning of iteration 3, request 1 is fused into the main inference stream, and in this iteration, the stream generates \(4^{th}\) output token for request 0 and \(1^{st}\) output token for request 1. Similarly, in iteration 4, the stream generates \(5^{th}\) output token for request 0 and \(2^{nd}\) output token for request 1, so on and so forth.
To put it in a more general form: in autoregressive models, Flover considers passing tensors through the model once, i.e., one iteration, as an atomic operation that cannot be interrupted. The reason is that for every request in the stream, one iteration always generates one new output token, regardless of gaps in their temporal steps. As shown in Fig 3 (a), consider the abstract model with n layers; an output token is valid only if the computation on the input tensor starts at layer 0 and finishes at layer n-1. Therefore, requests that are ready for fusion are postponed until the current iteration finishes. We also emphasize that the time spent waiting for the completion of an iteration might vary depending on the model specs, requests, and hardware; however, it is considered negligible, as each inference consists of hundreds or thousands of such iterations.
Fig 4 illustrates how this temporal fusion works on the GPU memory space. The temporal steps conform to the numbers in Fig 3. The pipeline of model execution contains various compute kernel calls as well as collective communication calls, as listed in Fig 4 (a). Commonly, both kinds of calls require the memory offset of tensors and their buffer size; therefore, when fusing new requests into the main inference stream, we need to make sure that their memory space is contiguous for every tensor involved in the stream. Thus, the temporal fusion process contains two operations: 1) Place new request memory adjacent to the current memory space; 2) Modify buffer_offset and buffer_size accordingly. Then, when computing kernels or collective operations are called, they operate on exactly the memory space we intend, without touching additional, unnecessary memory.
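On top of such records, the two fusion operations reduce to simple arithmetic on the active region. The following is a hedged sketch in which only the names buffer_offset and buffer_size come from Fig 4; everything else is illustrative.

```
#include <cstddef>

// Sketch: the contiguous region that kernel and collective calls operate on.
struct FusedBuffer {
    std::size_t buffer_offset = 0;  // start of the active region (bytes)
    std::size_t buffer_size   = 0;  // bytes currently covered by the stream
};

// 1) Place the new request's tensors adjacent to the current region;
// 2) grow buffer_size so the next kernel/collective call covers them.
void fuse_request(FusedBuffer& buf, std::size_t new_tensor_bytes) {
    buf.buffer_size += new_tensor_bytes;
}

// In the idealized FIFO case discussed next, when the oldest request
// finishes, the region is shrunk from the front by advancing buffer_offset.
void evict_front(FusedBuffer& buf, std::size_t finished_tensor_bytes) {
    buf.buffer_offset += finished_tensor_bytes;
    buf.buffer_size   -= finished_tensor_bytes;
}
```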
### _Memory shuffler_
We have discussed in the preliminaries that the arrival of requests is random. However, if the inference of every request always reached the maximum output length, e.g. 300 words, before evicting, then memory management would be as simple as illustrated in Fig 4. We would only need to increase buffer_size when a new request arrives and increment buffer_offset when a request finishes, and the memory space would be guaranteed to be contiguous (assuming there is enough memory to monotonically increase
Fig. 4: Illustration on how request fusion works on the memory level. We make sure that tensors of different requests are located adjacent to each other, forming a contiguous memory space.
buffer_offset). The reason is that since all requests require 300 iterations, the whole inference pipeline can basically be seen as a FIFO queue, where the request that arrives first also evicts from memory first. However, such an ideal assumption does not hold for complicated real inference scenarios.
As we discussed before, for an autoregressive model, inference requests are likely to differ in max output lengths. Some requests only need a few output tokens, whereas others might require thousands. More commonly, even for an inference server that has already set a max output length for all requests, the inference might output an ending token, such as "$", before it reaches the length limitation. In this case, continuing to generate new tokens for this request wastes a lot of computing power and adds extra latency, as any tokens following the "$" are invalid. Thus, it is clear that when a request sees an ending token "$" or reaches the length limitation, it should immediately evict from memory. Fig 5 depicts such a situation, where requests 5 and 7 finish at iteration 458; after they evict, how do we manage the memory space?
If we simply keep buffer_size and buffer_offset the same, those evicted regions become detrimental to the inference pipeline, as both computing kernels and collective communication can only process contiguous memory buffers. Thus, we need an efficient algorithm to shuffle the memory by moving all valid buffers together to form a new contiguous memory space. The problem now becomes how to minimize the amount of memory that needs to be shuffled, and therefore not introduce too much overhead, as the inference server blocks the following iterations until memory is properly managed.
To abstract the problem: given an array of 0s and 1s, where 0 denotes empty memory space and 1 denotes valid memory, as shown in Fig 6 (a), we need an algorithm that groups all the 1s together while moving as few elements as possible. Here we use a sliding window algorithm with time complexity \(O(n)\) to achieve it.
Since an ideal shuffle results in a contiguous memory region of size \(n\) if there are \(n\) 1s in the array, we only need to locate where this memory region of size \(n\) should lie, and we can then copy in the 1s outside of this region. Algorithm 1 shows how to find the offset of this memory region. Fig 6 (b) illustrates the shuffled memory region and the corresponding shuffle strategy. Note that our algorithm guarantees that the total amount of memory movement is minimized, but it might disorder the memory offsets of requests. Therefore, each request running in the inference model also tracks the GPU memory offsets of all its tensors.
```
Require: arr, a vector of integers
 1: total_cost ← 0
 2: non_zero ← 0
 3: for i ← 0 to |arr| - 1 do
 4:     if arr[i] ≠ 0 then
 5:         non_zero ← non_zero + 1
 6:         total_cost ← total_cost + arr[i]
 7:     end if
 8: end for
 9: min_cost ← ∞
10: window_cost ← 0
11: for i ← 0 to non_zero - 1 do
12:     window_cost ← window_cost + arr[i]
13: end for
14: min_cost ← min(min_cost, total_cost - window_cost)
15: mem_offset ← 0
16: for i ← non_zero to |arr| - 1 do
17:     window_cost ← window_cost + arr[i] - arr[i - non_zero]
18:     current_cost ← total_cost - window_cost
19:     if current_cost < min_cost then
20:         min_cost ← current_cost
21:         mem_offset ← i - non_zero + 1
22:     end if
23: end for
24: return mem_offset
```
**Algorithm 1** Find Shuffled Memory Region
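For reference, a direct C++ transcription of Algorithm 1 might look as follows (our own sketch; the variable names follow the pseudocode, and the empty-array guard is an addition).

```
#include <cstddef>
#include <cstdint>
#include <vector>

// Find the offset of the length-non_zero window that minimizes the total
// amount of memory moved into it (Algorithm 1). arr[i] holds the size of
// the live buffer at slot i, or 0 if that slot has been evicted.
std::size_t find_shuffled_memory_region(const std::vector<std::int64_t>& arr) {
    std::int64_t total_cost = 0;
    std::size_t non_zero = 0;
    for (std::int64_t v : arr) {
        if (v != 0) { ++non_zero; total_cost += v; }
    }
    if (non_zero == 0) return 0;  // nothing to shuffle (added guard)

    // Cost of a window = total memory lying outside it, which must move in.
    std::int64_t window_cost = 0;
    for (std::size_t i = 0; i < non_zero; ++i) window_cost += arr[i];
    std::int64_t min_cost = total_cost - window_cost;
    std::size_t mem_offset = 0;

    for (std::size_t i = non_zero; i < arr.size(); ++i) {
        window_cost += arr[i] - arr[i - non_zero];  // slide window by one slot
        std::int64_t current_cost = total_cost - window_cost;
        if (current_cost < min_cost) {
            min_cost = current_cost;
            mem_offset = i - non_zero + 1;
        }
    }
    return mem_offset;
}
```

On a 0/1 occupancy array, this picks the densest window whose length equals the number of live slots, so only the blocks outside that window need to move.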
### _Collective communication_
For distributed inference where tensor parallel [22] is enabled, each GPU will only hold a shard of the model weights.
Fig. 5: Eviction of requests will result in orphaned memory, which introduces additional computation overhead as well as wastes memory bandwidth.
Fig. 6: Array in (a) represents the GPU memory space. 1 denotes the memory region of currently running requests; (b) shows the optimal memory shuffle strategy given the layout, where we only need to move 3 pieces of memory to form a new contiguous memory region.
For example, given a fully connected layer \(l\in\mathbb{R}^{m\times n}\) and two GPUs where the tensor parallel size is set to 2, they will hold half of the layer weight, \(l^{0}\in\mathbb{R}^{m\times n/2}\) and \(l^{1}\in\mathbb{R}^{m\times n/2}\), respectively. Due to this, allgather and allreduce operations are crucial after every model layer in each iteration. For example, in Fig 3 (a) we show a model with \(n\) layers; when tensor parallelism is enabled, there will be several collective communication calls, as each layer may include multiple collective operations. As discussed in the sections above, in the solution of concurrent model instances, each instance has its own dedicated communicator for performing collective operations. In Flover, however, we only need one such communicator in the main inference stream, which handles all communication for all running requests with single collective calls across all GPUs.
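As a hedged illustration of this point (not Flover's actual code), a single in-place NCCL call over the fused contiguous buffer can serve every running request at once; `fused`, `count`, `comm`, and `stream` are assumed to be set up elsewhere.

```
#include <cstddef>
#include <cuda_runtime.h>
#include <nccl.h>

// Sketch: one allreduce over the single fused buffer covers all requests
// currently in the main inference stream, replacing the per-instance
// communicators used by concurrent model instances.
void fused_allreduce(float* fused, std::size_t count,
                     ncclComm_t comm, cudaStream_t stream) {
    // In-place sum across all tensor-parallel ranks.
    ncclAllReduce(fused, fused, count, ncclFloat, ncclSum, comm, stream);
}
```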
## VI Experiments
### _Setup_
As we emphasized, in both single-GPU cases and distributed scenarios where other advanced parallel strategies like tensor parallel [22] are already deployed, Flover can largely propel autoregressive model inference with its unique and efficient workflow. Therefore, we conduct ablation experiments on the single-GPU case to study how Flover improves inference efficiency at a fine-grained level, and we then step into multi-GPU scenarios where Flover works with the tensor parallel technique to deliver extraordinary performance on clusters.
**Hardware:** We conduct all experiments on NVIDIA A100 40GB GPUs with AMD EPYC 7713 64-core processors. Each computing node has two GPUs connected via PCI Express. Across nodes, we use the Mellanox InfiniBand HDR200 interconnect. All collective operations are performed by the NVIDIA Collective Communications Library [16] (NCCL).
**Software:** We implement our Flover framework based on the NVIDIA Triton FasterTransformer [17] C++ codebase, which is one of the most widely used Triton [6] backends and large language model (LLM) solutions. For the following experiments, we use one of the largest language models supported -- GPT-J [25] 6B. It was created by EleutherAI, a community-driven organization that aims to promote open-source AI research. GPT-J [25] has 6 billion parameters and was trained on The Pile [9], an 825GB dataset from curated sources
Fig. 7: Overall latency comparisons on processing different numbers of requests. Here every batch in each request asks for generating 512 tokens. A single request with batch size 1 takes about **5800ms** on the inference server. Here FasterTransformer [17] deploys concurrent model instances to handle multi requests.
(e.g. Wikipedia, arXiv, GitHub, StackExchange, PubMed, etc.), making it suitable for single-GPU or edge inference and easily expandable to distributed clusters.
### _Temporal fusion with constant time interval_
In this section, we start by analyzing how efficient Flover is when using temporal fusion to process multiple requests in parallel. As discussed, the real-world arrival of requests is considered a Poisson process, where the time interval between two requests is a random variable from the exponential distribution. However, for simplicity, in this part we will use constant time intervals to study the parallel efficiency, as this is also adopted by some inference frameworks.
Consider such a request \(R\), containing an inference task with batch_size=1 and max_output_length=512; it takes the inference server about \(T_{r}\) to finish. Let us denote the time interval between requests \(R_{0}\) and \(R_{1}\) as \(\tau\). If \(\tau\ll T_{r}\), then most of the time, \(R_{0}\) and \(R_{1}\) are temporally overlapped in the inference server. If \(\tau\gtrsim T_{r}\), then the requests are essentially processed sequentially. Therefore, theoretically, we define:
\[r_{p}=\begin{cases}\frac{T_{r}-\tau}{T_{r}+\tau},&T_{r}>\tau\\ 0,&T_{r}\leq\tau\end{cases} \tag{1}\]
to represent the temporally overlapped portion of two requests; notice that \(r_{p}\in[0,1)\). In practice, overlapping two requests might affect \(T_{r}\); we nevertheless keep this definition as it suffices for our analysis. Note that in the following, \(r_{p}\) is always computed between two consecutive requests, also for ease of analysis. Fig 7 shows the latency of FasterTransformer and our method under three realistic scenarios. In Fig 7 (a), we set the time interval between requests to 500ms, giving \(r_{p}\approx 84.6\%\), meaning that most requests are temporally overlapped during inference. For inferring 2 requests, Flover is **1.7x** faster than FasterTransformer [17]. As we increase the number of temporally overlapped requests to 8, the performance gain grows to **3.4x**. Fig 7 (b) shows the scenario where each request arrives 2500ms after the previous one. In this case, we have \(r_{p}\approx 41.2\%\) temporal overlap. As Flover is designed to benefit temporally overlapped requests, the speedup is now **1.3x** and **1.7x** for 2 and 8 requests respectively. We also conducted an extra experiment, shown in Fig 7 (c), where the time interval between requests is 5000ms and accordingly \(r_{p}\approx 9.09\%\). Since the next request arrives when the previous one has almost finished inference, there is very little overlap for Flover to exploit, and the overall pipeline is almost sequential. It is also noteworthy that for each time interval, we vary the batch size of each request from 1 to 4 to see how batch size affects inference, as inference usually does not use batch sizes as large as in training. Within this range, however, we found it has little impact on overall latency thanks to the hardware's capacity for parallel execution.
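As a sanity check on Eq. 1, the short sketch below evaluates \(r_{p}\) for the three constant intervals used in Fig 7; plugging in a nominal \(T_{r}\) of 6000ms (close to the measured ~5800ms single-request latency) reproduces the quoted overlap percentages.

```python
def overlap_ratio(T_r: float, tau: float) -> float:
    # Eq. 1: temporally overlapped portion of two consecutive requests
    return (T_r - tau) / (T_r + tau) if T_r > tau else 0.0

for tau in (500.0, 2500.0, 5000.0):
    print(tau, f"{overlap_ratio(6000.0, tau):.1%}")  # 84.6%, 41.2%, 9.1%
```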
### _Temporal fusion with Poisson process_
As we discussed, the arrival times of inference requests are not fixed or predictable in a strict sense. Instead of adhering to a constant time window or a constant interval between arrivals, the process can be modeled as a Poisson process [11], in which the exponential distribution models the varying time intervals between the arrivals of requests. To set up the experiment for this part, let \(\tau_{i}\) denote the time interval between \(R_{i-1}\) and \(R_{i}\); then \(\tau_{1},\tau_{2},\ldots,\tau_{n}\) is a sequence of independent and identically distributed (i.i.d.) random variables from the exponential distribution with finite mean \(\lambda\). According to the Central Limit Theorem [14] (CLT), as we increase the number of samples \(n\), the sample mean \(\bar{\tau}\) estimates \(\lambda\) more accurately. Therefore, we set each request with batch_size=1 while increasing the total number of requests up to 32, which maximizes the memory utilization of the hardware. Each request again has a 512-output-token limit. Bars in Fig 8 compare the total inference latency on 32 requests using FasterTransformer [17] and Flover respectively, for \(\lambda\) spanning \([20ms,5000ms]\). The yellow line reports the average number of overlapped requests over the whole inference, which is inversely proportional to \(\lambda\). When \(\lambda=20\)ms, almost all requests are processed in parallel by the inference server, while when \(\lambda=5000\)ms, on average only 1 or 2 requests temporally overlap with
Fig. 8: Total latency of inferring 32 requests following the Poisson process. Time intervals between requests are randomly sampled from the exponential distribution with different \(\lambda\). A single request takes 5800ms on the inference server. The right-side table further shows, for Poisson processes with different \(\lambda\), the total number of iterations for inferring 32 requests, the request overlap, and the speedup over FasterTransformer. A single request requires 512 iterations to generate all output tokens.
each other. The table in Fig 8 provides more detailed statistics on the Poisson process. \(Overlap\) is the average number of temporally overlapped requests divided by the total number of requests. Total Iters. counts from the first request's first output token to the last request's end token. Given that one request requires 512 iterations for inference, the larger the overlap, the more performance gain Flover can provide, as it is able to optimize most computing and communication during the inference. Also noteworthy is that with concurrent model instances, the time interval does not dominate the overall latency until it reaches 4000ms. We attribute this to operating multiple instances, which introduces too much overhead for the inference server, as stated in previous sections, resulting in severe performance degradation.
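The setting can be mimicked with a simple simulation, sketched below: inter-arrival times are drawn from an exponential distribution with mean \(\lambda\), each request is treated as occupying a fixed window of \(T_{r}\), and the average number of temporally overlapped requests is counted. This is a simplified model rather than the framework's actual scheduler.

```python
import random

def avg_overlap(n=32, lam_ms=500.0, T_r=5800.0, seed=0):
    rng = random.Random(seed)
    starts, t = [], 0.0
    for _ in range(n):
        starts.append(t)
        t += rng.expovariate(1.0 / lam_ms)   # mean inter-arrival time = lam_ms
    # count, for each request, how many others are in flight at the same time
    overlaps = [
        sum(1 for s2 in starts if s2 != s1 and s2 < s1 + T_r and s1 < s2 + T_r)
        for s1 in starts
    ]
    return sum(overlaps) / n

print(avg_overlap(lam_ms=20.0))    # near n-1: almost fully parallel
print(avg_overlap(lam_ms=5000.0))  # close to 1-2 overlapped requests
```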
### _Memory shuffle for non-uniform requests_
We have so far analyzed different arrival patterns of requests, e.g. constant and random intervals. In real-world scenarios, however, requests from various users may also vary drastically in their total number of iterations, which is another random variable. The distribution of the total number of iterations (i.e., the length of the generated sequences) before an end-of-sequence (EOS) token appears in a sequence generated by an autoregressive model like GPT [25, 18, 2, 4, 19] largely depends on the specifics of the model and its training data. If the model has been trained on a dataset where text sequences typically have a certain length, it will likely generate sequences of similar length on similar data. Moreover, the generation process in autoregressive models inherently includes a degree of randomness, which causes variability in the length of the generated sequences and makes it hard to fit a simple distribution. Techniques such as beam search, top-k sampling, or temperature adjustment used during generation can also affect the length of the output sequences. Given these factors, to study how different frameworks perform in the most uncertain, worst-case scenarios, we adopt a uniform distribution \(U_{l}(a,b)\) to model and sample each request's total number of iterations, where all values are equally likely.
In this experiment, we vary \(a\) and \(b\) correspondingly to mimic the use of Flover across various autoregressive models. As stated in VI-C, we set the number of requests to 32 to approach the real distribution and reduce variance. In Fig 9 (a), we compare our method with the baseline FasterTransformer, which uses concurrent model instances to infer requests. Note that Flover without memory shuffle refers to the naive solution shown in Fig 5, which performs no memory shuffle operations and leaves finished requests' buffers inside the contiguous memory space. It is clear that when memory shuffle is enabled after requests evict from the compute stream, Flover gains additional performance during inference. This is because memory shuffle reconstructs the buffer so that evicted requests no longer take part in the computation. Also noteworthy is that for \(U_{l}\) on the interval \([a,b]\), the standard deviation is \(\sigma=\sqrt{\frac{(b-a)^{2}}{12}}\). Therefore, as we increase the upper bound of \(U_{l}\), requests tend to have more varied numbers of iterations, which means there are more orphaned buffers as requests finish and evict from the stream. Compared to FasterTransformer, Flover with memory shuffle delivers a **6.8x** speedup in overall inference latency.
To better study the capability of memory shuffle, we conduct a thorough experiment by fixing the lower bound at 128 while expanding the upper bound to 1792, forming much more diverse requests. As shown in Fig 9 (b), by dynamically reconstructing the buffer space, memory shuffle brings a further **2x** speedup compared to vanilla Flover, and the gap widens as the average number of iterations per request increases.
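A minimal Python sketch of the memory shuffle idea follows. It models the contiguous buffer as a list of per-request slots and compacts live requests to the front whenever some finish, so evicted slots no longer take part in subsequent compute or communication; the real implementation performs the equivalent moves on GPU key/value buffers, which this toy model does not capture.

```python
def memory_shuffle(slots, alive):
    """Compact live request buffers to the front of the contiguous region.

    slots: list of per-request buffers laid out contiguously
    alive: parallel list of booleans; False marks an evicted request
    Returns the compacted slots and the new active count.
    """
    live = [buf for buf, ok in zip(slots, alive) if ok]
    return live + [None] * (len(slots) - len(live)), len(live)

slots = ["r0", "r1", "r2", "r3"]
alive = [True, False, True, False]      # r1 and r3 finished early
slots, n_active = memory_shuffle(slots, alive)
# only slots[:n_active] now participate in each iteration's kernels
print(slots, n_active)                  # ['r0', 'r2', None, None] 2
```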
### _Distributed inference_
To scale inference across multiple GPUs, the most obvious solution is data parallelism [7]. This method is straightforward, as it only requires creating model replicas on multiple GPUs; since inference involves no weight updates, there is no communication among these replicas, so it is easy to implement and compatible with other parallel techniques. The other advanced parallel strategy is pipeline
Fig. 9: (a) Total latency of inferring 32 requests with a random number of iterations. Time intervals between requests are fixed at 20ms. 128-256 denotes that every request's total iterations follow a uniform distribution with lower bound 128 and upper bound 256. (b) Flover w/o and w/ memory shuffle, compared by reducing to 16 requests but pushing the upper bound further to 1792 while fixing the lower bound at 128. The purple line on the right denotes the relative speedups.
parallel [15]. This technique distributes the model's layers across multiple GPUs. While it is effective in conserving memory by slicing the input batch and allowing simultaneous computation across different parts of the model, it is inherently a simpler and more coarse-grained solution for autoregressive model inference than Flover, in which requests are further fused at the temporal iteration level.
To the best of our knowledge, in multi-GPU scenarios, tensor parallel [22] provides parallelism that is orthogonal to Flover's temporal fusion. This orthogonality means that when working together, they can boost inference along different dimensions. Compared to concurrent model instances, where each instance has its dedicated communicator to perform collective operations, e.g. allreduce, allgather, broadcast, Flover keeps the overall communication simple by using only one such communicator throughout the computing stream, and each collective call handles all on-the-fly requests, further reducing the inference overhead. For clarity in the following, tensor parallel size denotes the number of GPUs.
In Fig 10, we conduct thorough experiments under different request settings. In general, within each iteration of the GPT-J [25] 6B model that we use, there will be two Allreduce operations and one Allgather operation across all GPUs involved.
**GPUs' interconnection:** First, to study how different inter-GPU connections affect collective operations with and without memory shuffle, we set the tensor parallel size to 2 and compare the overall latency in both inter-node and intra-node cases. In this setting, the GPT-J [25] 6B model is split into two shards. We use 16 requests and set the upper bound of the output length to 3600, as this reaches the maximum capacity of GPU memory. In "TP 2, L3600, Intra", memory shuffle brings about a 1.51x speedup, whereas in "TP 2, L3600, Inter", the boost grows to 1.64x. This further demonstrates the growing importance of memory shuffle as the cost of communication increases, since memory shuffle removes unnecessary buffers from communication.
**Concurrency of requests:** Next, we study the scalability of Flover by increasing the total number of requests. On two GPUs, we set the upper bound of output length sampling to 1000. In handling 16, 32, and 48 requests, memory shuffle yields 1.46x, 1.60x, and 1.67x speedups in latency. This increasing trend in performance gains arises because, with more requests, orphaned memory left by evicted requests occurs more frequently, so the effectiveness of memory shuffle becomes more salient. A similar trend is observed on four GPUs, where the upper bound of output length sampling is set at 2400.
**Output length of requests:** Finally, we study how memory shuffle becomes crucial for generating longer sequences. With the tensor parallel size at 4, we fix the total number of requests at 16 while increasing the upper bound of output length sampling from 2400 to 3840, 5120, and 7200 respectively. Note that as the average output length increases, the vanilla version without memory shuffle wastes more time and resources computing and communicating orphaned memory, thereby introducing additional overhead. As shown in Fig 10, memory shuffle speeds up the overall inference by 2.34x, 2.97x, 3.13x, and 3.30x as the average output length increases, compared to vanilla Flover without memory shuffle. We also conducted an additional experiment on eight GPUs with tensor parallel [22], where we found that for the model we use, running on 8 GPUs amplifies the synchronization overhead, and the effect of memory shuffle is less significant due to the limited number of requests.
By conducting the above experiments, we solidly demonstrate
Fig. 10: Testing Flover in distributed inference scenarios where requests arrive at a fixed time interval of 20 ms, with the max output length sampled from a uniform distribution with a lower bound of 128. TP 2/4/8 denotes that the model runs with a tensor parallel size of 2, 4, or 8 respectively. L1000/2400/3600/3840/5120/16000 denotes that the upper bound of the max output length sampling is 1000, 2400, 3600, 3840, 5120, or 16000 respectively. Inter/Intra denotes whether the tensor parallelism spans inter-node or intra-node GPUs. 8/16/32/48 R denotes that the number of requests is 8, 16, 32, or 48 respectively.
the efficacy of Flover and the memory shuffle algorithm in handling multiple realistic inference scenarios for autoregressive models, and further present how Flover is compatible with the advanced tensor parallel [22] technique to propel large-scale inference on distributed clusters.
## VII Conclusions
We have proposed Flover, a novel temporal fusion framework for efficient autoregressive model inference across various industrial and commercial scenarios. Unlike existing solutions that either require delayed batching of requests or launch multiple model instances, which lack flexibility and incur severe overhead in response time, Flover innovatively leverages the temporal parallelism of autoregressive models, providing instantaneous inference on incoming requests while seamlessly fusing new requests into preceding ones regardless of their temporal gaps. By employing an efficient memory shuffle algorithm, our solution enhances hardware utilization and substantially reduces computing and communication overhead, guaranteeing a highly efficient and performant inference framework. Coalesced synergistically with the advanced tensor parallel technique, Flover achieves optimal management in both single-GPU and distributed inference scenarios, ensuring robustness and scalability across diverse autoregressive model inference landscapes. We hope this work sparks further research and innovation, fostering new methods and techniques that build upon this foundation.
|
2301.01577 | Comparison of Shock-Boundary Layer Interactions in Adiabatic and
Isothermal Supersonic Turbine Cascades | Wall-resolved large eddy simulations are employed to investigate the
shock-boundary layer interactions (SBLIs) in a supersonic turbine cascade. An
analysis of the suction side separation bubbles forming due to the SBLIs is
presented for adiabatic and isothermal (cooled) walls. Flow snapshots indicate
that the separation bubble contracts and expands in a similar fashion for both
thermal boundary conditions. However, the skin-friction coefficient
distributions reveal a downstream displacement of the separation region when
cooling is applied. The separation bubble is also smaller for this setup
compared to the adiabatic one. A steeper pressure rise is observed for the
isothermal wall downstream of the incident oblique shock, and this occurs
because the incident shock wave gets closer to the blade surface when cooling
is applied. The Reynolds stresses are computed to investigate the effects of
wall temperature on the turbulence activity. While the levels of the tangential
stresses are similar for the cases analyzed, those for the wall-normal
component are higher for the cooled wall. | H. Lui, T. R. Ricciardi, W. R. Wolf, Carlos Junqueira-Junior | 2023-01-04T12:51:59Z | http://arxiv.org/abs/2301.01577v1 | Comparison of Shock-Boundary Layer Interactions in Adiabatic and Isothermal Supersonic Turbine Cascades
###### Abstract
Wall-resolved large eddy simulations are employed to investigate the shock-boundary layer interactions (SBLIs) in a supersonic turbine cascade. An analysis of the suction side separation bubbles forming due to the SBLIs is presented for adiabatic and isothermal (cooled) walls. Flow snapshots indicate that the separation bubble contracts and expands in a similar fashion for both thermal boundary conditions. However, the skin-friction coefficient distributions reveal a downstream displacement of the separation region when cooling is applied. The separation bubble is also smaller for this setup compared to the adiabatic one. A steeper pressure rise is observed for the isothermal wall downstream of the incident oblique shock, and this occurs because the incident shock wave gets closer to the blade surface when cooling is applied. The Reynolds stresses are computed to investigate the effects of wall temperature on the turbulence activity. While the levels of the tangential stresses are similar for the cases analyzed, those for the wall-normal component are higher for the cooled wall.
## 1 Introduction
Supersonic fluid machinery is applied in high-speed propulsion and power generation systems due to its high power density [1]. In supersonic turbines, inlet shock waves form and interact with the boundary layers of neighboring blades. The shock-boundary layer interactions (SBLIs) can increase aerodynamic drag due to flow separation and induce higher heat transfer rates to the blade surface. They can also be a source of flow unsteadiness, where multiple frequencies are excited due to the motion of the incident and reflected shock waves and the breathing of the separation bubble, besides the incoming turbulent boundary layer. Typically, the shock wave motion leads to strong pressure fluctuations that can compromise the system's structural integrity [2, 3, 4, 5].
Most studies of SBLIs have considered adiabatic wall conditions and, thus, the effects of surface heat transfer are not fully explored. Schulein [6] used non-intrusive techniques to perform heat transfer and skin-friction measurements in the impingement of an incident oblique shock wave on a flat plate with isothermal wall conditions. Their results show that within the separation region, the heat flux increases in the streamwise direction, while the skin-friction decreases. Jaunet et al. [7] investigated experimentally the impact of the wall temperature on a shock-induced boundary layer separation. They observed that the interaction length considerably increases when the wall temperature is raised. Bernardini et al. [8] and Volpiani et al. [9] carried out direct numerical simulations (DNS) to investigate the wall temperature effects on the physics of SBLIs. Results revealed that wall cooling significantly reduces the size of the separation bubble and interaction scales, while the opposite behavior is noticed in the case of wall heating.
In the present work, a high-order overset compressible large eddy simulation (LES) methodology is employed to investigate the flow in a supersonic turbine cascade with two different wall thermal boundary conditions: an adiabatic wall and a cooled wall, where the wall to inlet temperature ratio is set as \(T_{w}/T_{\infty}=0.75\). First, the numerical methodology is described, including the grid details and flow configurations. Spanwise and time averaged pressure and skin-friction coefficients, as well as the mean flow fields, are presented to assess the effect of cooling on the size and shape of the separation bubble. Then, flow snapshots are analyzed to investigate the features of the separation bubbles and the shear layer dynamics at different instants of the SBLI. Finally, the effects of the wall thermal boundary conditions on the turbulence activity are analyzed by assessing the Reynolds stress distributions.
## II Numerical Methodology
The present wall-resolved large eddy simulations solve the compressible Navier-Stokes equations in a curvilinear coordinate system. The fluid is assumed to be a calorically perfect gas, where the molecular viscosity \(\mu\) is considered to depend on the local temperature through the nondimensional Sutherland's law. The spatial discretization of the governing equations is performed using a sixth-order accurate finite-difference compact scheme [10] implemented on a staggered grid. A sixth-order compact interpolation scheme is also used to obtain flow quantities on the different nodes of the staggered grid configuration.
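For reference, a short sketch of the nondimensional Sutherland's law in the form assumed here, with temperature scaled by the inlet value and \(S_{\mu}/T_{\infty}=0.07182\) as given in Section III:

```python
def sutherland_mu(T, S_star=0.07182):
    """Nondimensional Sutherland's law: mu/mu_inf as a function of T/T_inf."""
    return T**1.5 * (1.0 + S_star) / (T + S_star)

print(sutherland_mu(1.0))   # 1.0 at the inlet temperature
print(sutherland_mu(0.75))  # reduced viscosity at the cooled wall
```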
Two grids are employed in the present simulations: one is a body-fitted O-grid block which surrounds the airfoil and the other is an H-grid block used to enforce the pitchwise periodicity of the cascade. In the O-grid, the time integration of the equations is carried out by the implicit second-order scheme of Beam and Warming [11]. This method overcomes the stiffness problem arising from the wall-resolving boundary layer mesh. In the background H-grid block, a third-order Runge-Kutta scheme is used for time advancement of the governing equations. A fourth-order Hermite interpolation scheme [12, 13] is used to exchange information between grid blocks in an overlapping zone. Further details about the numerical procedure can be found in [13].
Due to the non-dissipative characteristics of the compact finite-difference schemes, numerical instabilities may arise from mesh non-uniformity, interpolation between the overset grids, and boundary conditions. To preserve the stability of the numerical simulations, the high-wavenumber compact filter presented by Lele [14] is applied at each time step in flow regions far away from solid boundaries. A shock capturing scheme is also employed to capture the shock waves forming in the present flows. In order to introduce minimal numerical dissipation in the vicinity of the shocks, without damping the small scales of turbulence, the localized artificial diffusivity (LAD) method [15] is employed to compute the artificial bulk viscosity and thermal conductivity. The approach LAD-D2-0 proposed by Kawai et al. [16] is employed here with no artificial shear viscosity. In order to trip the boundary layers to turbulence, we apply a body forcing on the RHS of the Navier-Stokes equations, as described by Sansica [17]. Here, an unsteady actuation with a random spanwise treatment is assumed, and the amplitudes of the disturbances are chosen experimentally to guarantee a bypass transition with minimal flow disturbance. More details of the numerical procedure can be found in [5].
## III Flow and Mesh Configurations
This section shows details of the flow configuration studied and describes the computational grid used in the LES calculations. Figure 1 (a) presents the geometrical parameters and flow conditions. The inlet Mach number is set as \(M\) = 2.0 and the Reynolds number based on the inlet velocity \(U_{\infty}\) and axial blade chord is \(Re\) = 200,000. The ratio of specific heats is chosen as \(\gamma\) = 1.31, the Prandtl number is \(Pr=0.747\) and the ratio of the Sutherland constant over inlet temperature is set as \(S_{\mu}/T_{\infty}\) = 0.07182. These conditions are chosen based on previous studies [4, 5, 18].
Figure 1 (b) displays a schematic of the overset grid employed in the LES along with the implemented boundary conditions. The O-grid block has \(1200\times 280\times 144\) points and is embedded in the background Cartesian grid block of size \(960\times 280\times 72\). Therefore, the grid has approximately \(68,000,000\) points. Depending on the case, adiabatic or isothermal boundary conditions are applied along the blade surface. For the latter, the wall to inlet temperature ratio is \(T_{\rm w}/T_{\infty}=0.75\), representing a cooled wall. Supersonic inflow boundary conditions are used to set the inlet conditions. For the outflow, a boundary condition based on the Navier-Stokes characteristic boundary condition (NSCBC) [19] is employed. A damping sponge is also applied near the inflow and outflow boundaries to minimize reflections of disturbances [10, 20]. Periodic boundary conditions are used in the \(y\)-direction of the background grid, according to Fig.
Figure 1: Schematics of (a) flow configuration and geometrical parameters, and (b) computational domain skipping every 5 grid points.
1 (a), in order to simulate a linear cascade of blades; periodic boundary conditions are also applied in the spanwise direction to enforce a statistically homogeneous flow along the span.
For the adiabatic wall case, the grid resolution in terms of wall units is kept in the range given by \(6<\Delta s^{+}<25\), \(0.1<\Delta n^{+}<0.3\), and \(3<\Delta z^{+}<9\), where \(s\), \(n\) and \(z\) represent the streamwise, wall-normal and spanwise flow coordinates. For the isothermal wall simulation, the near-wall grid spacing ranges from \(15<\Delta s^{+}<60\), \(0.2<\Delta n^{+}<0.6\), and \(6<\Delta z^{+}<19\). These numbers are computed for regions where the boundary layers are fully developed and in equilibrium, away from the tripping and recirculation regions. It is worthwhile to mention that the same computational grid is used for both cases, but higher values in terms of wall units are obtained for the isothermal wall case due to an inherent reduction of the viscous length scales caused by cooling.
The simulation is initialized with a uniform flow, and statistics are computed after the initial transients are discarded. In the simulations, a variable time step is computed based on an inviscid CFL parameter of 0.8. The body-force tripping is applied at \(0.22<x/c_{ax}<0.27\) for the suction side, and at \(0.10<x/c_{ax}<0.15\) for the pressure side. The wall-normal height of the body-force region is \(\delta=0.001c_{ax}\) and the actuation changes every \(\Delta t\approx 0.003\) in a spanwise-random fashion.
## 4 Results
This section presents results obtained by the LES computed for adiabatic and isothermal (cooled) wall boundary conditions. Flow quantities are collected for 4 flow-through times, based on the inlet velocity and blade axial chord. Figure 2 shows iso-surfaces of \(Q\)-criterion colored by the \(u\)-velocity component together with a background view of the density gradient magnitude, \(|\nabla\rho|\). The top and bottom rows present results for the adiabatic and cooled wall cases, respectively.
In Figs. 2 (a) and (c), we can observe the complex shock structure across the turbine passage. The detached oblique shock waves generated at the airfoil leading edges interact with the boundary layers of the neighboring blades and are reflected across the cascade. On the pressure side, the incident shock wave becomes normal to the wall and, then, a Mach reflection is formed, while an oblique shock reflection is generated on the suction side. To highlight the effect
Figure 2: Iso-surfaces of \(Q\)-criterion colored by \(u\)-velocity component for the adiabatic (top) and cooled (bottom) wall cases. The background plane displays the shock waves by visualizing the density gradient magnitude \(|\nabla\rho|\).
of cooling on the SBLI, a detailed view of the flow field can be seen in Figs. 2 (b) and (d), where one can observe differences between the lengths of the separation bubbles, especially on the suction side. For the cooled wall, a smaller recirculation region is noticed.
The mean skin-friction coefficient distribution \(c_{f}=\frac{\tau_{w}}{0.5\rho_{\infty}U_{\infty}^{2}}\) is provided in Fig. 3(a) for the blade suction side. This plot shows the presence of a separation bubble characterized by locations where \(c_{f}<0\), which is delimited by a horizontal dashed line. The effect of cooling on the size of the recirculation region is evident. For the isothermal case, one can observe a downstream displacement of the separation region compared to the adiabatic wall setup. On the other hand, the reattachment locations are similar for both cases. Hence, the cooled wall exhibits a smaller separation bubble. For the adiabatic wall case, the time-averaged characteristic length of the separation bubble is \(\langle L_{SB}\rangle=0.16c_{ax}\) and it is observed along \(0.70<x/c_{ax}<0.86\). For the cooled wall, \(\langle L_{SB}\rangle=0.10c_{ax}\) and it is formed on \(0.75<x/c_{ax}<0.85\).
Fig. 4: Time-averaged contours of normalized \(u\)-velocity (top) and temperature (bottom) for the adiabatic (left) and cooled (right) wall cases. The black lines display the shock waves visualized by pressure gradient magnitude. The black dashed lines show the sonic line.
Fig. 3: Mean skin-friction and pressure coefficient distributions for the adiabatic (black) and cooled (blue) wall cases. The distributions are shown only along the suction side.
After a small negative skin-friction coefficient plateau, a similar recovery is observed downstream of the reattachment location for both cases.
Figure 3 (b) plots the mean pressure coefficient \(c_{p}=\frac{p-p_{\infty}}{0.5\rho_{\infty}U_{\infty}^{2}}\) along the airfoil chord. For both the adiabatic and cooled wall cases, it is possible to note two pressure rises: the first occurs near the separation point due to the compression waves formed upstream of the separation bubble, and the second takes place near the reattachment location as a result of the incident shock impingement and the turbulence amplification mechanism [21]. For the cooled wall setup, a steeper variation of \(c_{p}\) is observed, especially for the second pressure rise.
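Both coefficients can be evaluated from the averaged wall data as in the sketch below, where tau_w and p_w are illustrative arrays of mean wall shear stress and pressure along the suction side, and rho_inf, U_inf, p_inf are the inlet reference values.

```python
import numpy as np

def wall_coefficients(tau_w, p_w, rho_inf, U_inf, p_inf):
    q_inf = 0.5 * rho_inf * U_inf**2   # inlet dynamic pressure
    cf = tau_w / q_inf                  # skin-friction coefficient
    cp = (p_w - p_inf) / q_inf          # pressure coefficient
    return cf, cp
```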
To highlight the influence of the wall thermal boundary conditions on the size and shape of the separation bubbles, the mean (spanwise and time averaged) \(u\)-velocity contours are presented in Figs. 4 (a) and (b), for the adiabatic and isothermal cases, respectively. Here, the velocity component is normalized by the inlet speed of sound. These figures reinforce the findings observed in the friction coefficient distributions. The main effect of wall cooling is to reduce the viscous length scales near the wall [8, 9], which in turn affects the shock penetration, as shown in Figs. 4 (a) and (b). One can see that the impinging shock penetrates deeper into the boundary layer for the cooled wall case due to the displacement of the sonic line (displayed as a dashed line) towards the wall. This effect is responsible for the steeper variation in the pressure coefficient observed in Fig. 3(b). One can also see that, for the cooled wall, the incident shock reaches further downstream compared to the adiabatic case.
Figures 4 (c) and (d) show the mean temperature fields for the adiabatic and cooled wall boundary conditions, respectively. The values are normalized by the inlet temperature. For the former case, one can observe that a region of maximum temperature occurs within the separation bubble. On the other hand, when cooling is applied, higher temperature values are observed in the free shear layer, downstream of the bubble. For the adiabatic wall, friction from the shear stresses near the wall and around the bubble is converted into heat, which is transferred along the boundary layer and inside the bubble. This causes the near-wall flow to reach higher temperatures. In the isothermal case, however, heat from the flow is transferred to the blade, which has a lower temperature than the surrounding flow. For the cooled wall case, the maximum temperature values are observed along the free shear layer, behind the bubble, due to strong shearing effects that cause aerodynamic heating.
The temporal evolution of the separation bubble length \(L_{SB}\) is shown in Fig. 5 for the adiabatic and isothermal walls. The instantaneous length of the bubble is defined as the distance between the instantaneous reattachment and separation locations. One can observe that the separation region undergoes a contraction/expansion motion for both cases, and the excursions from the mean appear similar for both. A spectral analysis of this signal should provide further information on the frequency scales related to the bubble motion; such an analysis will be conducted in future work, once longer signals are collected for statistical convergence of the lower frequencies of interest.
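One simple way to extract the instantaneous bubble length from each \(c_{f}\) snapshot, assuming a single separation bubble on the suction side, is to take the distance between the first and last chordwise stations with negative skin friction:

```python
import numpy as np

def bubble_length(x, cf):
    """Distance between instantaneous separation and reattachment points."""
    neg = np.flatnonzero(cf < 0.0)   # stations inside the recirculation region
    if neg.size == 0:
        return 0.0                    # attached flow at this instant
    return x[neg[-1]] - x[neg[0]]
```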
To highlight the 2D structure of the suction side separation bubble and shear layer at different time instants, snapshots of \(z\)-vorticity are displayed in Fig. 6 for both thermal boundary conditions. These snapshots correspond to the instants indicated by the letters "a-d" in Fig. 5. The region enclosed by the green line shows the separation region and the black lines display the impinging shocks. In addition, the mean separation and reattachment positions are indicated by the orange and cyan squares, respectively. For both cases, when the bubble suffers a contraction, the instantaneous separation (reattachment) point moves downstream (upstream) with respect to its mean value, as can be visualized in Figs. 6(a) and (c). On the other hand, when the bubble undergoes an expansion, one can observe the upstream (downstream) movement of the instantaneous separation (reattachment) point with respect to its mean position. This indicates that the bubble has
Figure 5: Temporal variation of the suction side separation bubble length \(L_{SB}\) for the adiabatic (top) and cooled (bottom) wall cases.
a breathing pattern, but its central position does not have large excursions from the mean. Figure 6 also shows that the shear layer downstream of the bubble is more diffused for the adiabatic case, while more concentrated vorticity values are observed when cooling is applied. These findings corroborate the maximum temperature values observed in Figs. 4(c) and (d). For example, in the adiabatic case, the shear layer around the bubble creates a zone of intense heating.
The effects of the thermal boundary conditions on the turbulence properties are investigated via the tangential and wall-normal Reynolds stresses, \(\langle u_{t}u_{t}\rangle\) and \(\langle u_{n}u_{n}\rangle\) respectively, and the turbulent kinetic energy (TKE), presented in Fig. 7. In this figure, the top and bottom rows display results for the adiabatic and isothermal walls, respectively. In Figs. 7 (a) and (d), it can be seen that the highest fluctuations of \(\langle u_{t}u_{t}\rangle\) occur just upstream of the shock-bubble interaction for both cases, with similar fluctuation values. The amplification of \(\langle u_{t}u_{t}\rangle\) is associated with the development of the shear layer [21]. The peak values of \(\langle u_{n}u_{n}\rangle\) are found along the free shear layer downstream of the bubble. The magnitude of \(\langle u_{n}u_{n}\rangle\) decreases when cooling is applied. In Figs. 7 (c) and (f), one can observe that the turbulent kinetic energy combines the trends observed in the \(\langle u_{t}u_{t}\rangle\) and \(\langle u_{n}u_{n}\rangle\) components. In addition, before the SBLI, we notice a downstream displacement of the location of maximum turbulence amplification for the cooled wall case. This occurs due to the deeper shock penetration discussed previously.
## 5 Conclusions
Wall-resolved large eddy simulations are employed to investigate thermal effects in a supersonic turbine cascade. Simulations are performed for adiabatic and isothermal boundary conditions, where in the latter case the blade is cooled. For the present flow configurations, oblique shock waves are generated at the leading edges of the airfoils, and they interact with the boundary layers of the neighboring blades. A study of the shock-boundary layer interactions is presented for the blade suction side, where an incident oblique shock reflects on the wall leading to the formation of a separation bubble.
The impact of the thermal boundary conditions on the separation bubbles is investigated. The distributions of mean skin friction show that the separation bubble is considerably smaller for the cooled wall than for the adiabatic case. Pressure coefficient distributions show that a steeper pressure rise occurs downstream of the incident shock wave for the cooled wall. In this case, cooling induces the formation of a thinner boundary layer, and the sonic line forms closer to the wall. Results in terms of mean velocity contours reveal that the more pronounced pressure rise occurs due to the
Figure 6: Spanwise \(z\)-vorticity contours at different time instants for the adiabatic (top) and cooled (bottom) wall cases. The green line delimits the bubble while the black line shows the incident shock wave.
deeper penetration of the incident shock for the isothermal (cooled) wall. Maximum temperature values are observed along the bubble for the adiabatic case and along the free shear layer for the cooled wall. In the former case, aerodynamic heating is transferred to the bubble by the surrounding shear layer. In the latter case, intense shearing along the free shear layer, behind the bubble, leads to high temperatures.
An analysis of the instantaneous separation and reattachment locations demonstrates that the separation bubbles exhibit a breathing pattern of contractions and expansions. During contractions, the instantaneous separation point moves downstream while the reattachment point moves upstream; the opposite is observed during expansions. The tangential Reynolds stress distributions reach maximum values just upstream of the shock-bubble interactions and are similar for the adiabatic and isothermal walls. However, due to the deeper shock penetration for the isothermal wall, the peaks appear further downstream along the blade chord. The wall-normal Reynolds stresses reach maximum amplitudes downstream of the SBLI and are more pronounced for the adiabatic wall. In future work, further analysis of the SBLI dynamics will be provided for both the suction and pressure side boundary layers.
## Acknowledgments
The authors acknowledge the financial support received from Fundacao de Amparo a Pesquisa do Estado de Sao Paulo, FAPESP, under grants No. 2013/08293-7, 2019/26196-5 and 2021/06448-0. The authors also thank Conselho Nacional de Desenvolvimento Cientifico e Tecnologico, CNPq, for supporting this research under grants No. 407842/2018-7 and 308017/2021-8. This work was granted access to the HPC resources of IDRIS under the allocation 2021-A0112A12067 made by GENCI.
|
2308.15009 | Double Public Key Signing Function Oracle Attack on EdDSA Software
Implementations | EdDSA is a standardised elliptic curve digital signature scheme introduced to
overcome some of the issues prevalent in the more established ECDSA standard.
Due to the EdDSA standard specifying that the EdDSA signature be deterministic,
if the signing function were to be used as a public key signing oracle for the
attacker, the unforgeability notion of security of the scheme can be broken.
This paper describes an attack against some of the most popular EdDSA
implementations, which results in an adversary recovering the private key used
during signing. With this recovered secret key, an adversary can sign arbitrary
messages that would be seen as valid by the EdDSA verification function. A list
of libraries with vulnerable APIs at the time of publication is provided.
Furthermore, this paper provides two suggestions for securing EdDSA signing
APIs against this vulnerability while it additionally discusses failed attempts
to solve the issue. | Sam Grierson, Konstantinos Chalkias, William J Buchanan, Leandros Maglaras | 2023-08-29T04:15:33Z | http://arxiv.org/abs/2308.15009v2 | # Double Public Key Signing Function Oracle Attack on EdDSA Software Implementations
###### Abstract
EdDSA is a standardised elliptic curve digital signature scheme introduced to overcome some of the issues prevalent in the more established ECDSA standard. Due to the EdDSA standard specifying that the EdDSA signature be deterministic, if the signing function were to be used as a public key signing oracle for the attacker, the unforgeability notion of security of the scheme can be broken. This paper describes an attack against some of the most popular EdDSA implementations, which results in an adversary recovering the private key used during signing. With this recovered secret key, an adversary can sign arbitrary messages that would be seen as valid by the EdDSA verification function. A list of libraries with vulnerable APIs at the time of publication is provided. Furthermore, this paper provides two suggestions for securing EdDSA signing APIs against this vulnerability while it additionally discusses failed attempts to solve the issue.
## I Introduction
Since it was first proposed independently in the late '80s by Koblitz [1] and Miller [2], Elliptic Curve Cryptography (ECC) has become the preferred choice for constructing classical public-key cryptosystems. The critical advantage of ECC is its capability to construct public key cryptosystems with smaller key sizes than its discrete logarithm-based counterparts. For example, the Digital Signature Algorithm (DSA) proposed by the National Institute of Standards and Technology (NIST) for their Digital Signature Standard (DSS) (attributed to Kravitz [3]) has an ECC counterpart, the Elliptic Curve Digital Signature Algorithm (ECDSA) [4], which boasts greater efficiency and smaller key sizes while achieving similar levels of security. However, ECDSA is not without its share of common pitfalls that implementations can suffer from. For example, key recovery attacks are enabled by poorly generated random values [5] and nonce reuse [6]. Lattice-based attacks, such as those using the Lenstra-Lenstra-Lovasz (LLL) method [7], have also been used to successfully recover information about private keys from weak ECDSA signatures [8, 9].
With the evident problems in ECDSA implementations and a loss of trust in NIST after the Snowden revelations, the cryptography community shifted towards a new cryptosystem based on Curve25519 proposed by Bernstein in 2006 [10]. In 2012, Bernstein _et al._[11] proposed using the Edwards variant of Curve25519 to construct a deterministic Schnorr-like [12] digital signature scheme. This scheme became the Edwards-curve Digital Signature Algorithm (EdDSA). One of the main advantages of EdDSA over other ECC signature schemes is how the scalar multiplication of points on the curve can be implemented without branching and lookups depending on a secret value [11]. Due to its many advantages over ECDSA [13], EdDSA quickly became widely implemented and was eventually standardised in both RFC 8032 [14] and NIST's own FIPS 186-5 [15].
This paper discloses a previously unreported vulnerability related to the implementation of EdDSA. This vulnerability is severe enough that adversaries can easily exploit it to extract the private key during the EdDSA signing process. The attack requires that an adversary use the signing function as an oracle that accepts arbitrary public keys as inputs. While the majority of applications that use EdDSA are unlikely to expose signing functions publicly to end users, or may mitigate the issue before the signing invocation, there are some applications in which private and public keys are managed in different ways, exposing an attack surface to adversaries. The details of this attack are given later in the paper, along with ways to mitigate it easily.
The rest of this paper is organised as follows. In the remainder of this section, work related to EdDSA and its vulnerabilities is outlined, and the contributions of this paper are specified. Section II provides the background on the EdDSA algorithm required for the rest of the paper. Section III describes the double public key signing function oracle attack and gives a list of libraries with EdDSA implementations vulnerable to the attack. In Section IV, possible countermeasures against the described vulnerability are given, and Section V concludes the paper.
### _Related Work_
Since the proposal of Ed25519 by Bernstein _et al._[11] in 2012 and the subsequent generalisation of the algorithm into EdDSA [13], there has been a significant amount of work detailing the formal security notions of EdDSA as well as attacks on both the algorithm itself and its implementations. Because its construction is heavily based on the Schnorr signature scheme [12], the security of the schemes proposed by Bernstein _et al._ in [11] and [16] rests on similar assumptions. More recently, Brendel _et al._[17] gave a comprehensive security analysis of Ed25519 based on its
implementation as per the RFC 8032 standard [14], in which they found that certain implementations guarantee stronger security than others. Furthermore, work by Chalkias _et al._[18] was done to formalise EdDSA implementations under the strictest notions of security.
There have also been some high-profile attacks against EdDSA. In 2017, an issue arose in an implementation of Ed25519 used by the Monero crypto-currency, which allowed users to get around double-spending preventions. This issue was mitigated by checking the order of the key using full scalar multiplication and arose due to the unique way in which Monero used Ed25519 [19]. Samwel _et al._[20] demonstrated that differential power analysis could be used against Ed25519's underlying hash function SHA-512. In particular, their work targeted the WolfSSL implementation and required 4000 EM traces to succeed. In an extension to this work, Weissbart _et al._[21] used machine learning techniques to reduce this attack to a single EM trace.
Another type of attack against EdDSA is a fault attack. Romallier and Pelissier [22] demonstrated that a single fault in the EdDSA signing process could be used to recover enough private key material for an attacker to sign arbitrary messages. Poddebniak _et al._[23] also studied fault attacks against deterministic digital signature schemes such as EdDSA, formalising requirements for protocols to be vulnerable to these types of attacks. Approaching the same problem slightly differently, Cao _et al._[24] constructed lattice-based attacks to recover private key information from deterministic digital signature schemes vulnerable to fault attacks.
This work presents an attack on the standard rather than on implementation-specific details found in EdDSA software or hardware. More specifically, the standards fail to specify the format of the key input into the EdDSA signing function. Due to the algorithmic details, if an adversary is able to use the signing function as an oracle expecting arbitrary public key inputs, then it is possible for them to trivially recover the full private key. To the best of the authors' knowledge, this issue was unreported until now.
### _Contributions_
The main contributions of this paper are as follows:
* A new attack against the EdDSA standards RFC 8032 [14] and FIPS 186-5 DSS [15] is presented. It is shown that unless the necessary precautions are taken, an adversary can perform full private key recovery if given oracle access to the EdDSA signing function.
* A list of potentially unsafe EdDSA libraries is given. At the time of writing, there are 45 libraries impacted by this vulnerability. Misuse of these libraries can result in private key exposure. Currently, 8 of the 45 impacted libraries have implemented fixes to the issues after notification.
* Finally, two countermeasures against this type of attack are given. These countermeasures are simple changes to the vulnerable EdDSA software implementations found in many libraries. Both changes require only a small amount of additional overhead in the signing function.
## II Edwards-curve Digital Signature Algorithm
EdDSA is a digital signature algorithm similar to ECDSA [4], proposed by Bernstein _et al._[11]. RFC 8032 [14] defines EdDSA for two twisted Edwards curves, Ed25519 (based on curve25519 proposed by Bernstein [10]) and Ed448; however, EdDSA may be instantiated over other curves. For a fixed field \(k\), a twisted Edwards curve with coefficients \(a,d\in k\) is the curve:
\[E_{a,d}:ax^{2}+y^{2}=1+dx^{2}y^{2}\]
where \(a\) and \(d\) are non-zero elements. For example, Ed25519 (curve25519) is defined over \(\mathbb{F}_{p}\) where \(p=2^{255}-19\). Similarly, Ed448 is defined over \(\mathbb{F}_{p}\) where \(p=2^{448}-2^{224}-1\) and offers a 224-bit security level compared to the 128-bit security of Ed25519.
The sum of two points in the Ed25519 and Ed448 curves is represented by the following addition rule:
\[(x_{1},y_{1})+(x_{2},y_{2})=\left(\frac{x_{1}y_{2}+y_{1}x_{2}}{1+dx_{1}x_{2}y _{1}y_{2}},\frac{y_{1}y_{2}-ax_{1}x_{2}}{1-dx_{1}x_{2}y_{1}y_{2}}\right)\]
where the point \((0,1)\) is the neutral element. The same rule is also used for point doubling. Adding a point \(P\) to itself \(n\) times corresponds to multiplication by a scalar, denoted \(n\cdot P\).
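The addition rule can be written down directly. The sketch below instantiates it for Ed25519's parameters (\(a=-1\), \(d=-121665/121666\) over \(\mathbb{F}_{p}\) with \(p=2^{255}-19\)) in affine coordinates, purely for illustration; production implementations use projective coordinates and constant-time arithmetic.

```python
p = 2**255 - 19
a = p - 1                               # a = -1 mod p
d = (-121665 * pow(121666, -1, p)) % p  # d = -121665/121666 mod p

def edwards_add(P, Q):
    (x1, y1), (x2, y2) = P, Q
    t = d * x1 * x2 * y1 * y2 % p
    x3 = (x1 * y2 + y1 * x2) * pow(1 + t, -1, p) % p
    y3 = (y1 * y2 - a * x1 * x2) * pow(1 - t, -1, p) % p
    return (x3, y3)

O = (0, 1)                              # neutral element
assert edwards_add(O, O) == O           # requires Python 3.8+ for pow(x, -1, p)
```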
EdDSA uses a Fiat-Shamir transformed Schnorr-like identification protocol to generate a cryptographic signature based on elliptic curve point addition. The EdDSA protocol is standardised for the Ed25519 and Ed448 curves; the parameters for both curves are given in Table I. Note that the actual number of points on the curve is \(|E_{a,d}|=c\cdot\ell\). The presence of the cofactor \(c\) in the order of the curve makes it harder to use in applications where prime-order groups are required for cryptographic proofs.
The elliptic curve group \(E_{a,d}\) is isomorphic to \(\mathbb{Z}_{\ell}\times\mathbb{Z}_{c}\), where a base point \(G\in E_{a,d}\) generates a subgroup of order \(\ell\) and a small torsion point \(T_{c}\in E_{a,d}\) generates the subgroup of order \(c\). Any point \(P\in E_{a,d}\) can be uniquely represented as the linear combination of \(G\) and \(T_{c}\) with \(P=g\cdot G+t\cdot T_{c}\) where \(0\leq g<\ell\) and \(0\leq t<c\). In this case, the discrete log of \(P\) base \(G\) is \(g\). \(P\) is of small order if \(g=0\), of mixed order if \(t\neq 0\) and \(g\neq 0\), and of order \(\ell\) if \(g\neq 0\) and \(t=0\).
\begin{table}
\begin{tabular}{l l l} \hline
**Parameter** & **Ed25519** & **Ed448** \\ \hline Field Modulus (\(p\)) & \(2^{255}-19\) & \(2^{448}-2^{224}-1\) \\ Key Bits (\(b\)) & \(256\) & \(456\) \\ Hash Function (H) & SHA-512 & SHAKE256 \\ Cofactor (\(c\)) & 8 & 4 \\ Coefficient (\(d\)) & \(-121665/121666\) & \(-39081\) \\ Coefficient (\(a\)) & \(-1\) & 1 \\ Base Point (\(G\)) & \((x,y)\in\mathbb{F}_{p}^{2}\) (see [25]) & \((x,y)\in\mathbb{F}_{p}^{2}\) (see [25]) \\ Curve Order (\(\ell\)) & see [25] & see [25] \\ \hline \end{tabular}
\end{table} TABLE I: Parameters for Ed25519 and Ed448 (RFC 8032 [14])
### _EdDSA Signing_
As defined in the RFC 8032 standard [14], EdDSA uses a \(b\)-bit private key \(sk\) and a hash function H that produces a \(2b\)-bit output. An integer \(s\) is generated by taking the hash of the secret key, \(\text{H}(sk)=(h_{0},h_{1},\ldots,h_{2b-1})\), and computing \(s=2^{b-2}+\sum_{3\leq i\leq b-3}2^{i}h_{i}\). The public key \(pk\) is then computed by taking the curve's base point \(G\), as defined in the public parameters, and computing \(pk=s\cdot G\).
```
Input: \(m\), H, \(sk\), \(G\), and \(pk\); Output: the signature \((R,S)\)
1: \(h\) := H(\(sk\))
2: \(s\) := \(2^{b-2}+\sum_{3\leq i\leq b-3}2^{i}h_{i}\)
3: \(r\) := H(\(h_{b},\ldots,h_{2b-1}\mid\mid m\)) \(\pmod{\ell}\)
4: \(R\) := \(r\cdot G\)
5: \(S\) := \(r+\text{H}(R\mid\mid pk\mid\mid m)\cdot s\pmod{\ell}\)
6: return \((R,S)\)
```
**Algorithm 1** EdDSA Signing
The signature \((R,S)\) of a message \(m\in\{0,1\}^{*}\) is computed according to Algorithm 1. A significant difference between EdDSA and ECDSA is that the generated signature is deterministic in EdDSA; in other words, for a given message, any signature computed using the same key pair and public parameters will always be the same.
Both signatures and keys can be encoded for space-efficient transmission. According to the RFC 8032 standard [14], an element of the scalar field \((\bmod\ell)\) is encoded as a 256-bit little-endian string. If the scalar is reduced \(\bmod\ell\), the encoding is considered canonical; otherwise, it is non-canonical. A point \(P=(x,y)\in E_{a,d}\) is also encoded as a 256-bit string, with 255 bits devoted to the encoding of \(y\) in little-endian format and a single bit devoted to encoding the sign of \(x\). Given a serialisation of \(P\), the \(x\) coordinate is restored _via_ \(x:=\pm\sqrt{(y^{2}-1)/(dy^{2}+1)}\). If the \(y\) coordinate is reduced mod \(p\), the encoding is canonical; otherwise, it is non-canonical.
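A sketch of the point encoding described above is given below; recovering \(x\) from \(y\) via the square root is omitted.

```python
def encode_point(x, y):
    # 255 bits of y in little-endian order; the top bit carries the sign of x
    return (y | ((x & 1) << 255)).to_bytes(32, "little")

def decode_point(b):
    n = int.from_bytes(b, "little")
    y, x_sign = n & ((1 << 255) - 1), n >> 255
    # x is then restored as +/- sqrt((y^2 - 1)/(d*y^2 + 1)) using x_sign
    return y, x_sign
```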
### _EdDSA Signature Verification_
The EdDSA verification algorithm given in Algorithm 2 generally conforms to both the RFC 8032 standard [14] and the NIST FIPS 186-5 standard [15], while also providing the strongest notion of security defined by Brendel _et al._[17] and Chalkias _et al._[18]: Strong UnForgeability under Chosen Message Attacks (SUF-CMA) with Strongly Binding Signatures (SBS). This means that efficient adversaries can neither output valid signatures on new messages nor find a new signature for old messages. Furthermore, messages are bound to the public key, a property shown to be lacking in the RFC 8032 variant of EdDSA [17].
The verification algorithm given in Algorithm 2 performs several checks to ensure the scheme's security against various attacks. First, the generic check that the public key \(pk\) and the point \(R\) from the signature \((R,S)\) are valid points on the curve \(E_{a,d}\) is performed. The algorithm then ensures that the scalar value \(S\) is one of the values \(0,\ldots,\ell-1\), rejecting anything larger, since \(S^{\prime}:=S+n\cdot\ell\) would also satisfy the verification equation for \(n\in\mathbb{Z}\). Checking the value of \(S\) ensures that the scheme satisfies the requirements for SUF-CMA [17]. The algorithm then rejects any non-canonical encodings of \(pk\) and \(R\). Rejecting non-canonical encodings is required by both RFC 8032 [14] and FIPS 186-5 [15].
```
Input: \(m\), \((R,S)\), H, \(G\), and \(pk\); Output: \(b\in\{0,1\}\)
1: if \(pk\notin E_{a,d}\) or \(R\notin E_{a,d}\) then
2:   return \(0\)
3: end if
4: if \(S\notin\{0,\ldots,\ell-1\}\) or \(|pk|\geq\ell\) or \(|R|\geq\ell\) then
5:   return \(0\)
6: end if
7: if \(pk=t\cdot T_{c}\) for some \(t\in\{0,\ldots,c-1\}\) and some \(T_{c}\in E_{a,d}\) then
8:   return \(0\)
9: end if
10: if \(c\cdot S\cdot G=c\cdot R+c\cdot\text{H}(R\mid\mid pk\mid\mid m)\cdot pk\) then
11:   return \(1\)
12: end if
13: return \(0\)
```
**Algorithm 2** EdDSA Verification
The final check ensures that the public key \(pk\) used to sign the message is not one of a small set of small-order points on the curve \(E_{a,d}\). This check is not part of any standard and rarely appears in practical implementations. This additional check aims to ensure public keys are strongly bound to the signature to achieve the SBS security notion. This is because if \(pk\) is a \(c\)-torsion point, an adversary can choose any value \(S\) for their signature such that \(S\cdot G=R\), and the resulting signature verifies under any message. Bernstein _et al._[11] identified this vulnerability in their work but regarded it as unproblematic. However, some specific cases have arisen where checking for small-order keys becomes important, specifically when building specialised protocols [17]. While this may seem a cumbersome addition to the verification process, the number of small-order public keys is quite small, and they can be pre-computed and stored for fast verification [18].
Verification of an EdDSA signature can be done either cofactored or cofactorless. The verification described by Algorithm 2 is cofactored. A cofactored implementation is required to reduce \(\text{H}(R\mid\mid pk\mid\mid m)\) to the range \([0,\ell)\) before multiplication by \(pk\). Not doing so may cause implementations to disagree on the validity of signatures generated with mixed-order public keys. When performing cofactored verification, multiplication by \(c\) should be performed as a separate scalar-by-point multiplication. Failing to do so can result in \(c\cdot\text{H}(R\mid\mid pk\mid\mid m)\bmod\ell\) not being divisible by \(c\) and, thus, not clearing the low-order component in \(pk\) if it exists. While Bernstein _et al._[11] originally proposed to use
cofactorless verification, EdDSA standards recommend using the cofactored verification algorithm [14, 15].
## III Double Public Key Signing Function Oracle Attack
The discovered vulnerability takes the form of an oracle attack. The oracle uses the signing function of a deterministic signature scheme with a fixed secret key and message parameters to compute a signature given an arbitrary public key. If given access to this type of oracle, an adversary can use it to recover the secret key by submitting two different public keys. In this section, an attack methodology is described, and a list of affected libraries and their current status on fixing the issue is given.
### _Attacking EdDSA_
According to both the RFC 8032 [14] and FIPS 186-5 [15] standards, EdDSA signatures are deterministic. This means that for the same message \(m\in\{0,1\}^{*}\) input to a signing function with public key \(pk\) and secret key \(sk\), a unique signature \((R,S)\) is generated. An important detail of the signing function given in Algorithm 1 is that the signer's public key is used in the deterministic computation of the scalar value \(S\) but not the point on the curve \(R\) in the signature \((R,S)\). The implication of this is that if an adversary was able to use the signing function as an oracle that expects arbitrary public key inputs, they could compute two signatures \((R,S)\) and \((R,S^{\prime})\) corresponding to the same \(m\).
Assuming access to a signing oracle \(\mathcal{O}_{\mathsf{sign}_{sk,m}}\) with fixed parameters \(m\) and \(sk\), the adversary would perform the following steps to recover \(sk\):
**Step 1:** The adversary queries the oracle with two public keys \(pk\) and \(pk^{\prime}\). The public keys need not be paired to the fixed secret key \(sk\). The resulting signatures share the same \(R\) value and differ on the \(S\) values.
1. Compute \(pk:=s\cdot G\) and \(pk^{\prime}:=s^{\prime}\cdot G\) where \(s\neq s^{\prime}\). Note that \(pk\) and \(pk^{\prime}\) should satisfy the requirements of the EdDSA scheme.
2. Query \(\mathcal{O}_{\mathsf{sign}_{sk,m}}\) with \(pk\) and \(pk^{\prime}\) and receive the two signatures \(\sigma\) and \(\sigma^{\prime}\).
3. Check that for \(\sigma=(R,S)\) and \(\sigma^{\prime}=(R^{\prime},S^{\prime})\) the values \(R=R^{\prime}\).
**Step 2:** With the two signatures \(\sigma\) and \(\sigma^{\prime}\) corresponding to \(pk\) and \(pk^{\prime}\), the adversary can now attempt to recover \(sk\). When signing a message, the signing algorithm computes the \(S\) value as \(S=r+\text{H}(R\mid\mid pk\mid\mid m)\cdot s\pmod{\ell}\). Because \(r\) is derived from \(sk\), which is the same for both signatures, and \(R\), \(pk\), and \(m\) are all known to the adversary, they can use this to compute \(s\).
1. The adversary computes \(e:=\text{H}(R\mid\mid pk\mid\mid m)\) and \(e^{\prime}:=\text{H}(R\mid\mid pk^{\prime}\mid\mid m)\).
2. Because \(S=r+e\cdot s\pmod{\ell}\) and \(S^{\prime}=r+e^{\prime}\cdot s\pmod{\ell}\), subtracting \(S^{\prime}\) from \(S\) gives the following
\[S-S^{\prime} =(r+e\cdot s)-(r+e^{\prime}\cdot s) \pmod{\ell}\] \[=e\cdot s-e^{\prime}\cdot s \pmod{\ell}\] \[=s\cdot(e-e^{\prime}) \pmod{\ell}\]
3. Dividing \(S-S^{\prime}\) by \(e-e^{\prime}\) recovers the value
\[s=(S-S^{\prime})(e-e^{\prime})^{-1}\pmod{\ell}\]
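As an aside (our illustration, not from the original text), the recovery step is plain modular arithmetic once the two signatures are in hand. The following minimal C sketch demonstrates it with toy parameters (a small prime group order \(\ell\) and made-up values for \(r\), \(s\), \(e\), and \(e^{\prime}\), rather than real Ed25519 quantities):

```
#include <stdio.h>
#include <stdint.h>

/* Modular inverse via the extended Euclidean algorithm (ell prime). */
static int64_t inv_mod(int64_t a, int64_t ell) {
    int64_t t = 0, new_t = 1, r = ell, new_r = a % ell;
    while (new_r != 0) {
        int64_t q = r / new_r, tmp;
        tmp = t - q * new_t; t = new_t; new_t = tmp;
        tmp = r - q * new_r; r = new_r; new_r = tmp;
    }
    return ((t % ell) + ell) % ell;
}

int main(void) {
    const int64_t ell = 1019;        /* toy group order, not the real Ed25519 ell */
    const int64_t s = 417, r = 293;  /* signer's secrets, unknown to the attacker */
    const int64_t e = 775, e2 = 128; /* e = H(R||pk||m) and e' = H(R||pk'||m)     */

    /* What the oracle returns: two S values for the same (sk, m), different pk. */
    int64_t S1 = (r + e * s) % ell;
    int64_t S2 = (r + e2 * s) % ell;

    /* Attacker side: s = (S - S') * (e - e')^(-1) mod ell.                      */
    int64_t dS = ((S1 - S2) % ell + ell) % ell;
    int64_t dE = ((e - e2) % ell + ell) % ell;
    int64_t recovered = (dS * inv_mod(dE, ell)) % ell;

    printf("recovered s = %lld (actual s = %lld)\n",
           (long long)recovered, (long long)s);
    return 0;
}
```

Running this prints the signer's secret scalar \(s\), mirroring the algebra above; against a vulnerable library the attacker would obtain \(S\) and \(S^{\prime}\) from the exposed signing API instead of computing them locally.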
After completing step 2, the adversary has access to the secret integer \(s\), which can be used to arbitrarily compute values of \(S\). Even if \(s\) is known, it remains impossible to compute the \(r\) value for a new message, since the values \(h_{b},\ldots,h_{2b-1}\) are unknown to the adversary. However, selecting any random value of \(r\) and computing a new signature \(\sigma=(R,S)\) for any message \(m\) still satisfies
\[c\cdot S\cdot G =c\cdot(r+\text{H}(R\mid\mid pk\mid\mid m)\cdot s)\cdot G\] \[=c\cdot R+c\cdot\text{H}(R\mid\mid pk\mid\mid m)\cdot s\cdot G\] \[=c\cdot R+c\cdot\text{H}(R\mid\mid pk\mid\mid m)\cdot pk\]
meaning the verification equation still holds. Therefore, it is still possible to sign arbitrary messages, effectively breaking the SUF-CMA and SBS security notions that EdDSA guarantees.
### _Vulnerable Libraries_
There is a huge number of software implementations of EdDSA across many different languages. To give an idea of how common this vulnerability can be, a table of vulnerable libraries can be seen in Table II. Most of these libraries are taken from the IANIX list of "Things that use Ed25519" [26]. These libraries have been notified of the issues, and their current status of fixing the vulnerability at the time of publication is included in the table.¹
Footnote 1: A comprehensive list of libraries and requested fixes can be found here: [https://github.com/MystenLabs/ed25519-unsafe-libs](https://github.com/MystenLabs/ed25519-unsafe-libs)
## IV Countermeasures
Fortunately, due to the nature of the oracle attack, the majority of applications with dependencies on the libraries listed in Table II are probably safe, because they do not publicly expose the affected signing functions. That said, due to the nature of these libraries, a user can inadvertently expose the attack surface when building their application, as was the case with Monero in 2017 [19]. Therefore, it is recommended that implementers of the EdDSA standard follow one of two methods to prevent this attack: correctly storing the public key along with the secret key, or re-deriving the public key from the secret key each time the signing function is invoked.
```
void ed25519_sign(unsigned char *signature,
                  unsigned char *message, size_t message_len,
                  unsigned char *public_key,   /* public key, passed separately */
                  unsigned char *private_key); /* private key                   */
```
Listing 1: OpenGNB Ed25519 signing C interface (key arguments reconstructed from the description below).
The double public key signing function oracle attack occurs primarily due to insecure APIs around the signing function of the deterministic signature scheme. For example, the Ed25519 signing function taken from the OpenGNB library, shown in Listing 1, has two separate arguments for the public and private keys. If an application using this library exposes this API publicly or mishandles the management of the keys, it could expose itself to the attack. The solution is to redesign the API to ensure that the secret and public key pair are always tied together. It is common practice in many libraries to only accept a secret key in the signing function. For example, the Ed25519 signing function taken from the Libsodium library, shown in Listing 2, accepts only the signature, message, and secret key as arguments. However, the public key is still required by the signing algorithm. There are two solutions to this.
```
int crypto_sign(unsigned char *sm, unsigned long long *smlen_p,
                unsigned char const *m, unsigned long long mlen,
                unsigned char const *sk);
```
Listing 2: Libsodium public-key signature C interface.
### _Correct Key Storage_
The simplest solution is to ensure the public and private keys are stored together and accept them as a single argument to the signing function. This is also slightly more efficient computationally than the other option. Both the public and private keys for EdDSA are 32 or 57 bytes, for Ed25519 and Ed448 respectively. The solution found in the majority of libraries is to generate the public-private keypair and store the secret key as a 64-byte string: the first 32 bytes are the private key, with the remaining 32 bytes being the public key. The main downside of this is that the private key is now 64 or 114 bytes for Ed25519 and Ed448. However, this increased storage space should be acceptable in all but the most extreme cases.
```
void ed25519_keypair(unsigned char *pk, unsigned char *sk) {
    unsigned char seed[32];
    randombytes(seed, 32);
    sha512(sk, seed, 32);
    gen_pk(pk, sk);
    memmove(sk, seed, 32);
    memmove(sk + 32, pk, 32);
}
```
Listing 3: An example of safe Ed25519 key generation and storage.
In Listing 3, an example of safe key generation and storage is given, based on the Libsodium Ed25519 implementation. In this example, the \(seed\) byte array is stored as the first 32 bytes of the private key \(sk\). This means that users of the library can retrieve the initial random seed used to generate the public and private key pair. When invoking the signing function, the secret scalar would be derived by taking the first 32 bytes and hashing them with the SHA-512 hash function, and the public key would be read from the remaining 32 bytes.
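For illustration only, a sign-time counterpart to Listing 3 could unpack the stored 64-byte key as follows. This is a sketch reusing the hypothetical helper names of Listing 3 (`sha512`, the seed-then-key layout); it is not code taken from any of the listed libraries.

```
void ed25519_sign(unsigned char *sig,
                  unsigned char *m, size_t mlen, unsigned char *sk) {
    unsigned char s[64];       /* secret scalar material               */
    unsigned char pk[32];      /* public key                           */
    sha512(s, sk, 32);         /* re-derive the scalar from the seed   */
    memmove(pk, sk + 32, 32);  /* read the stored public key directly  */
    /* ... the rest of the signing algorithm uses s and pk ... */
}
```

Compared with the re-derivation approach of Listing 4 below, the public key is simply copied from storage rather than recomputed, trading 32 extra bytes of key material for one saved scalar-by-point multiplication.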
### _Public Key Re-derivation_
The other option is to re-derive the public key on every invocation of the signing function. Obviously, this consumes significantly more CPU cycles in the long term than storing the public key alongside the private key, as suggested in Section IV-A. However, the additional space requirements are no longer necessary, which may be more suitable for use cases with extreme memory restrictions. This solution is far less common in software implementations of EdDSA.
```
void ed25519_sign(unsigned char *sig,
                  unsigned char *m, size_t mlen, unsigned char *sk) {
    unsigned char s[64];
    unsigned char pk[32];
    sha512(s, sk, 32);
    gen_pk(pk, s);
    ...
}
```
Listing 4: An example of signing with public key re-derivation.
In Listing 4, an example of an Ed25519 signing function with key re-derivation is given. In this example, the secret key \(sk\) is expected to be a 32-byte \(seed\) array, much like that from Listing 3. The secret scalar is regenerated using the SHA-512 hash function and passed to the public key generation function, which performs the point multiplication. The rest of the Ed25519 signing function would then be implemented as per the standard.
## V Conclusion
In this work, an attack against the EdDSA standard is presented. Due to the deterministic nature of EdDSA signatures, an adversary with access to a signing function that accepts arbitrary public keys can recover the secret signing value by submitting as few as two different public keys. The adversary can sign arbitrary messages using this signing value, breaking the unforgeability security notion of digital signature schemes. The attack arises primarily from software implementation APIs that give an adversary the opportunity to submit multiple keys to the signing function, creating different signatures for the same message and private key. This attack presents a real threat if applications expose these APIs publicly or fail to manage public-private key pairs correctly. A list of libraries that implement the Ed25519 standard and are vulnerable to this attack is given. Additionally, two countermeasures are proposed to prevent the attack.
## References
* [1] N. Koblitz, "Elliptic curve cryptosystems," _Mathematics of Computation_, vol. 48, pp. 203-209, 1987.
* [2] V. S. Miller, "Use of elliptic curves in cryptography," in _Advances in Cryptology - CRYPTO '85 Proceedings_, H. C. Williams, Ed. Berlin, Heidelberg: Springer Berlin Heidelberg, 1986, pp. 417-426.
* [3] D. W. Kravitz, "Digital signature algorithm," May 1993, U.S. Patent US5231668A.
* [4] D. Johnson, A. Menezes, and S. Vanstone, "The elliptic curve digital signature algorithm (ECDSA)," _International Journal of Information Security_, vol. 1, no. 1, pp. 36-63, Aug 2001. [Online]. Available: [https://doi.org/10.1007/s102070100002](https://doi.org/10.1007/s102070100002)
* [5] P. Q. Nguyen and I. E. Shparlinski, "The insecurity of the elliptic curve digital signature algorithm with partially known nonces," _Designs, Codes and Cryptography_, vol. 30, no. 2, pp. 201-217, Sep 2003. [Online]. Available: [https://doi.org/10.1023/4025436905711](https://doi.org/10.1023/4025436905711)
* [6] M. Brengel and C. Rossow, "Identifying key leakage of bitcoin users," in _Research in Attacks, Intrusions, and Defenses_, M. Bailey, T. Holz, M. Stamatogiannakis, and S. Ioannidis, Eds. Cham: Springer International Publishing, 2018, pp. 623-643.
* [7] A. K. Lenstra, H. W. Lenstra, and L. Lovasz, "Factoring polynomials with rational coefficients," _Mathematische Annalen_, vol. 261, no. 4, pp. 515-534, Dec 1982. [Online]. Available: [https://doi.org/10.1007/BF01457454](https://doi.org/10.1007/BF01457454)
* [8] D. Poulakis, "Some lattice attacks on DSA and ECDSA," _Applicable Algebra in Engineering, Communication and Computing_, vol. 22, no. 5, pp. 347-358, Dec 2011. [Online]. Available: [https://doi.org/10.1007/s00200-011-0154-4](https://doi.org/10.1007/s00200-011-0154-4)
* [9] J. Breitner and N. Heninger, "Biased nonce sense: Lattice attacks against weak ECDSA signatures in cryptocurrencies," in _Financial Cryptography and Data Security_, I. Goldberg and T. Moore, Eds. Cham: Springer International Publishing, 2019, pp. 3-20.
* PKC 2006_, M. Yung, Y. Dodis, A. Kiayias, and T. Malkin, Eds. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006, pp. 207-228.
* [11] D. J. Bernstein, N. Duif, T. Lange, P. Schwabe, and B.-Y. Yang, "High-speed high-security signatures," _Journal of Cryptographic Engineering_, vol. 2, no. 2, pp. 77-89, Sep 2012. [Online]. Available: [https://doi.org/10.1007/s13389-012-0027-1](https://doi.org/10.1007/s13389-012-0027-1)
* [12] C. P. Schnorr, "Efficient identification and signatures for smart cards," in _Advances in Cryptology - CRYPTO '89 Proceedings_, G. Brassard, Ed. New York, NY: Springer New York, 1990, pp. 239-252.
* [13] D. J. Bernstein, S. Josefsson, T. Lange, P. Schwabe, and B.-Y. Yang, "EdDSA for more curves," Cryptology ePrint Archive, Paper 2015/677, 2015. [Online]. Available: [https://eprint.iacr.org/2015/677](https://eprint.iacr.org/2015/677)
* [14] S. Josefsson and I. Liusvaara, "Edwards-curve digital signature algorithm (EdDSA)," Tech. Rep., jun 2017.
* [15] D. Moody, "Digital signature standard (DSS)," Tech. Rep., 2023.
* [16] D. J. Bernstein, S. Josefsson, T. Lange, P. Schwabe, and B.-Y. Yang, "EdDSA for more curves," Cryptology ePrint Archive, Paper 2015/677, 2015. [Online]. Available: [https://eprint.iacr.org/2015/677](https://eprint.iacr.org/2015/677)
* [17] J. Brendel, C. Cremers, D. Jackson, and M. Zhao, "The provable security of Ed25519: Theory and practice," in _2021 IEEE Symposium on Security and Privacy (SP)_, 2021, pp. 1659-1676.
* [18] K. Chalkias, F. Garillot, and V. Nikolaenko, "Taming the many EdDSAs," in _Security Standardisation Research_, T. van der Merwe, C. Mitchell, and M. Mehrnezhad, Eds. Cham: Springer International Publishing, 2020, pp. 67-90.
* [19] luigi1111 and Riccardo "fluffypony" Spagni, "Disclosure of a major bug in CryptoNote based currencies," May 2017. [Online]. Available: [https://www.getmonero.org/2017/05/17/disclosure-of-a-major-bug-in-cryptonote-based-currencies/](https://www.getmonero.org/2017/05/17/disclosure-of-a-major-bug-in-cryptonote-based-currencies/)
* [20] N. Samwel, L. Batina, G. Bertoni, J. Daemen, and R. Susella, "Breaking Ed25519 in WolfSSL," in _Topics in Cryptology - CT-RSA 2018_, N. P. Smart, Ed. Cham: Springer International Publishing, 2018, pp. 1-20.
* [21] L. Weissbart, S. Picek, and L. Batina, "One trace is all it takes: Machine learning-based side-channel attack on EdDSA," in _Security, Privacy, and Applied Cryptography Engineering_, S. Bhasin, A. Mendelson, and M. Nandi, Eds. Cham: Springer International Publishing, 2019, pp. 86-105.
* [22] Y. Romailler and S. Pelissier, "Practical fault attack against the Ed25519 and EdDSA signature schemes," in _2017 Workshop on Fault Diagnosis and Tolerance in Cryptography (FDTC)_, 2017, pp. 17-24.
* [23] D. Poddebniak, J. Somorovsky, S. Schinzel, M. Lochter, and P. Rosler, "Attacking deterministic signature schemes using fault attacks," in _2018 IEEE European Symposium on Security and Privacy (EuroS&P)_, 2018, pp. 338-352.
* CT-RSA 2022_, S. D. Galbraith, Ed. Cham: Springer International Publishing, 2022, pp. 169-195.
* [25] A. Langley, M. Hamburg, and S. Turner, "Elliptic curves for security," Tech. Rep., jan 2016.
* [26] IANIX, "Things that use Ed25519," Jun. 2023. [Online]. Available: [https://ianix.com/pub/ed25519-deployment.html](https://ianix.com/pub/ed25519-deployment.html) |
2301.06218 | Fixed points of the sum of divisors function on $F_2[x]$ | We work an analogue of a classical arithmetic problem over polynomials. More
precisely, we study the fixed points $F$ of the sum of divisors function
$\sigma : F_2[x] \mapsto F_2[x]$ (defined \emph{mutatis mutandi} like the usual
sum of divisors over the integers) of the form $F := A^2 \cdot S$, $S$
square-free, with $\omega(S) \leq 3$, coprime with $A$, for $A$ even, of
whatever degree, under some conditions. This gives a characterization of $5$ of
the $11$ known fixed points of $\sigma$ in $F_2[x]$ | Luis H. Gallardo | 2023-01-16T00:09:35Z | http://arxiv.org/abs/2301.06218v1 | # Fixed points of the sum of divisors function on \(\mathbb{F}_{2}[x]\)
###### Abstract
We work an analogue of a classical arithmetic problem over polynomials. More precisely, we study the fixed points \(F\) of the sum of divisors function \(\sigma:\mathbb{F}_{2}[x]\mapsto\mathbb{F}_{2}[x]\) (defined _mutatis mutandi_ like the usual sum of divisors over the integers) of the form \(F:=A^{2}\cdot S\), \(S\) square-free, with \(\omega(S)\leq 3\), coprime with \(A\), for \(A\) even, of whatever degree, under some conditions. This gives a characterization of 5 of the 11 known fixed points of \(\sigma\) in \(\mathbb{F}_{2}[x]\).
## 1 Introduction
We have all heard somewhere in our careers that there are few positive integers \(n\) with the property that the sum of all positive divisors of \(n\) is a multiple of \(n\). Let us write this sum as \(\sigma(n)\). Our claim then becomes the following: there are few solutions \(n\) of the following equation.
\[\frac{\sigma(n)}{n}\in\mathbb{N} \tag{1}\]
For example, when \(n\in\{6,120\}\) we have \(\frac{\sigma(6)}{6}=2\) and \(\frac{\sigma(120)}{120}=3\). In fact this happens since we have \(divisors(6)=\{1,2,3,6\}\) so that \(\sigma(6)=1+2+3+6=12\), and
\[divisors(120)=\{1,2,3,4,5,6,8,10,12,15,20,24,30,40,60,120\}\]
so that \(\sigma(120)=1+2+3+4+5+6+8+10+12+15+20+24+30+40+60+120=360\). Already here we see that we can compute \(\sigma(120)\) more efficiently as follows: Since \(120=2^{3}\cdot 3\cdot 5\) and \(\sigma(x\cdot y)=\sigma(x)\cdot\sigma(y)\) provided that \(x,y\) has no common factors, we can compute:
\[\sigma(120)=\sigma(8)\cdot\sigma(3)\cdot\sigma(5)=(1+2+4+8)\cdot(1+3)\cdot(1+5 )=360.\]
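To make the multiplicativity concrete, here is a small illustrative C sketch (ours, not part of the original text) that computes \(\sigma(n)\) by stripping prime-power factors and multiplying the geometric sums \(\sigma(p^{k})=1+p+\cdots+p^{k}\):

```
#include <stdio.h>

/* sigma(n) computed multiplicatively from the prime factorization of n. */
unsigned long sigma(unsigned long n) {
    unsigned long result = 1;
    for (unsigned long p = 2; p * p <= n; p++) {
        if (n % p == 0) {
            unsigned long term = 1, power = 1;
            while (n % p == 0) {   /* strip the full power of p       */
                n /= p;
                power *= p;
                term += power;     /* term = 1 + p + ... + p^k        */
            }
            result *= term;        /* sigma is multiplicative         */
        }
    }
    if (n > 1) result *= 1 + n;    /* leftover prime factor           */
    return result;
}

int main(void) {
    printf("sigma(6)   = %lu\n", sigma(6));    /* 12  */
    printf("sigma(120) = %lu\n", sigma(120));  /* 360 */
    return 0;
}
```

It prints \(\sigma(6)=12\) and \(\sigma(120)=360\), matching the computations above.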
In a nutshell, in the present paper we study some arithmetic properties of an analogue of the function \(n\mapsto\sigma(n)\), in which we replace \(n\) by a polynomial \(A(x)\) with coefficients \(0\) and \(1\) only, and compute with \(0,1\) as usual, except that the rule \(1+1=0\) replaces the usual rule \(1+1=2\). The field \(\mathbb{F}_{2}=\{0,1\}\), in which the coefficients of \(A(x)\) live, is the simplest of all finite fields.
For readers less familiar with finite fields, we recommend to look first at section 2 for a simple computation with binary polynomials. Then, to look at subsections 1.1, and 1.2 below. And, finally, come back to look at the rest of this Introduction.
For all readers, we added some information about our choice of the finite field \(\mathbb{F}_{2}\) for the coefficients of our polynomials (see subsections 1.1, and 1.2) at the end of this Introduction. We also added a few comments about the role played by some small degree irreducible binary polynomials as prime factors of our perfect polynomials. This comes from an observation of one of the referees.
The paper being a little technical, we hope the following considerations will be helpful for the reader.
We now introduce some definitions and notation to explain the original arithmetic problem over the integers that motivated the study of our variant over the binary polynomials in \(\mathbb{F}_{2}[x]\), and the link between them as well.
Let \(A\in\mathbb{F}_{2}[x]\) be an irreducible polynomial, then we say that \(A\) is _prime_. A polynomial \(M\in\mathbb{F}_{2}[x]\) is _Mersenne_ (an analogue of a Mersenne number: \(2^{n}-1\)) if \(M+1\) is a product of powers of \(x\) and powers of \(x+1\). We say that \(M+1\)_splits_. When a Mersenne polynomial \(M\) is irreducible, we say that \(M\) is a _Mersenne prime_. Given a binary polynomial \(B\), a binary polynomial \(A\) in the sub-ring \(\mathbb{F}_{2}[B]\) of \(\mathbb{F}_{2}[x]\) is _complete in \(B\)_[11], if all coefficients of \(A\) are equal to \(1\); when \(B=x\), we say simply that \(A\) is _complete_. A binary polynomial \(B\) is _odd_ if \(B(0)=B(1)=1\), otherwise \(B\) is _even_. More standard notation follows. We let \(\omega(P)\) denote the number of pairwise distinct prime factors of \(P\in\mathbb{F}_{q}[x]\). Likewise, we let \(v_{P}(A)\) denote the valuation of the prime \(P\) in the binary polynomial \(A\), i.e., the integer \(m\) such that \(P^{m}\mid A\) but \(P^{m+1}\nmid A\); we also write this as \(P^{m}||A\). Finally, we let \(\overline{\mathbb{F}_{2}}\) denote a fixed algebraic closure of \(\mathbb{F}_{2}\).
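As a small aside (our sketch, not part of the paper; it assumes the GCC/Clang builtin `__builtin_parity`), the odd/even test is easy to mechanize when a binary polynomial is stored as a bitmask with bit \(i\) holding the coefficient of \(x^{i}\): \(B(0)\) is the low bit, and \(B(1)\) is the parity of the number of nonzero coefficients.

```
#include <stdio.h>

/* Bitmask binary polynomial: bit i = coefficient of x^i. */
int eval_at_0(unsigned b) { return b & 1; }               /* constant coefficient   */
int eval_at_1(unsigned b) { return __builtin_parity(b); } /* parity of coeff. count */

int main(void) {
    unsigned q2 = 0x7;  /* x^2 + x + 1 */
    unsigned t1 = 0x6;  /* x^2 + x     */
    printf("x^2+x+1 odd: %d\n", eval_at_0(q2) && eval_at_1(q2)); /* prints 1 */
    printf("x^2+x   odd: %d\n", eval_at_0(t1) && eval_at_1(t1)); /* prints 0 */
    return 0;
}
```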
We recall that a binary _perfect_ polynomial \(A\) (see [11, 14, 16, 19, 26, 29, 31, 32, 33]) is defined by the equality \(\sigma(A)=A\), where \(\sigma(A)=\sum_{D|A}D\in\mathbb{F}_{2}[x]\) is the sum of all divisors of \(A\), including \(1\) and \(A\). For coprime binary polynomials \(X,Y\) one has, as over the integers \(\mathbb{Z}\), \(\sigma(XY)=\sigma(X)\sigma(Y)\). The \(\sigma\) function, which maps polynomials to polynomials, is more complex than the usual sum of divisors function \(\sigma_{1}\colon\mathbb{F}_{2}[x]\mapsto\mathbb{N}\) given by \(\sigma_{1}(A)=\sum_{D|A}2^{\deg(D)}\). For instance, some divisors \(D\) of \(A\) can sum up to \(0\), whereas a sum over \(D\) of \(2^{\deg D}\) is always \(>0\).
It is easy to check that \(0\) and \(1\) are perfect polynomials, and that for any non-negative integer \(n\), the polynomial \(T(n)=(x(x+1))^{2^{n}-1}\) is (_trivial_) perfect. There are
only \(11\) non-trivial (known) binary perfect polynomials (_sporadic_), and all of them are even (see list in Lemma 7). Some recent computations [14], show that new sporadic perfects must have degree exceeding \(200\).
Coming back to the integers, we observe that the binary perfect polynomials are a polynomial analogue of the multiperfect numbers over \(\mathbb{Z}\). A multiperfect number is a positive integer \(n\) such that
\[\sigma(n)/n\in\mathbb{Z}. \tag{2}\]
Of course, we know very little about these numbers. One sees, by easy degree considerations, that for \(A\in\mathbb{F}_{2}[x]\),
\[\sigma(A)/A\in\mathbb{F}_{2}[x] \tag{3}\]
is equivalent to \(A=\sigma(A)\). Thus, this explains our interest in the _fixed_ points of \(\sigma\) on \(\mathbb{F}_{2}[x]\).
Technically, observe that the following problem has attracted some interest (see [1, 2, 3, 9, 10, 12, 13, 35, 36, 38, 39, 40, 41]). Given an irreducible polynomial \(f\) over a finite field \(\mathbb{F}_{q}\) and a polynomial \(g(x)\) over the same field, how can one describe the _prime_ (irreducible) factors of \(f(g(x))\)?
We contribute (in a special case) to this problem in the present paper, since our study of the fixed points of \(\sigma\) implies that some relations exist between the prime factors \(P\) of the square-free polynomial \(S\) in Lemma 8 and the prime factors \(\Phi_{2}(P)=1+P\) of \(\sigma(S)\). Namely, we have
\[A=\sigma(A), \tag{4}\]
in which we take \(A\) of a special form:
\[A=B^{2}\cdot S=B^{2}\cdot\prod_{j=1}^{r}P_{j}=\sigma(A)=\sigma(B^{2})\cdot \prod_{j=1}^{r}(1+P_{j}). \tag{5}\]
Therefore, equation (5) gives some information about the prime factors of \(\Phi_{2}(P)=1+P\) when \(P\) is an odd prime divisor of \(S\). See [34] for related results obtained using the cyclotomic polynomial \(\Phi_{3}(P)=1+P+P^{2}\).
More generally, solving equation (4) is a non-trivial problem of polynomial factorization in \(\mathbb{F}_{2}[x]\). See Lidl, Niederreiter [37], and Swan [42] for known results about this problem.
The contribution of the present paper consists of giving a simple generalization of some properties of five of these \(11\) known sporadic perfect polynomials. These polynomials share a special property not shared by the other six sporadic perfect polynomials. More precisely (see Lemma 8), we characterize these \(5\) sporadic perfects \(A\) by some special properties of their factorization \(A=B^{2}\cdot S\), with \(B\) even, and \(S\) square-free, coprime with \(B\).
Observe that we do _not_ fix a bound on \(\omega(B)\) (so that potentially we consider many possible new even perfects (if any exist) \(A\) of degree \(\geq 200\) (see again [14])), nor on the degrees of prime factors \(P\) of \(S\). Moreover, \(P\) is not necessarily Mersenne (as was considered, e.g., in [26, 29, 31]). Thus, we are discarding in Theorem 1 many more _non_-perfect polynomials than in previous work (without a single computer computation).
Throughout the paper, the 1941 work of Canaday [11] (see Lemma 6 and Remark 5), is important.
Our main result is as follows:
**Theorem 1**.: _Let \(B\in\mathbb{F}_{2}[x]\) be an even polynomial. Assume that \(\gcd(B^{2},\sigma(B^{2}))=1\). Let \(A:=B^{2}P_{1}\cdots P_{r}\), with \(r\geq 1\) pairwise distinct odd prime \(P_{j}\) such that \(P_{j}\nmid B\). Assume that \(r\leq 3\). Then \(A\) perfect implies that_
\[A\in\{M_{5a},M_{5b},M_{16},M_{20a},M_{20b}\}, \tag{6}\]
_where_
\[M_{5a}:=x(x+1)^{2}(x^{2}+x+1),M_{5b}:=M_{5a}(x+1),\]
\[M_{16}:=x^{4}(x+1)^{4}(x^{4}+x^{3}+1)(x^{4}+x^{3}+x^{2}+x+1),\]
_and_
\[M_{20a}:=x^{4}(x+1)^{6}(x^{3}+x+1)(x^{3}+x^{2}+1)(x^{4}+x^{3}+x^{2}+x+1),\]
\[M_{20b}:=M_{20a}(x+1).\]
_Remark 2_.: For all five perfect polynomials considered in the theorem, one has the following two conditions.
\[B\text{ is even}, \tag{7}\]
and
\[\gcd(B^{2},\sigma(B^{2}))=1. \tag{8}\]
Moreover, observe the following.
_Remark 3_.: An even polynomial square \(B^{2}\) cannot be perfect [11, Theorem 14] so that \(B^{2}\neq\sigma(B^{2})\). This also follows from Lemma 6(a), since \(\sigma(B^{2})\) is odd. In Theorem 1 we need the stronger condition (8) on \(B\).
Furthermore, consider the following two remarks.
_Remark 4_.: By computations, it seems that for each degree \(d\) there are many polynomials \(B\) of degree \(d\) that satisfy conditions (7) and (8). More precisely, a quick computation of all even polynomials \(B\) up to degree \(21\) shows that more than \(68\) percent of them satisfy (8). Thus, our result applies to many polynomials \(A\), as in the statement of the theorem. Therefore, our result covers many new cases, in which we do not know whether the polynomial \(A\) of the theorem is perfect or not, without checking with the computer all the possible primes \(P_{j}\) that could divide \(A\). Unfortunately, we do not see how to use our result, or our proof of the result, to obtain new even perfect polynomials (if they exist) by computations.
Like one of the referees, we believe that conditions (7) and (8) are so strong that they should imply, regardless of the value of \(r\), the following. If \(B\) satisfies the conditions, then \(A=B^{2}P_{1}\cdots P_{r}\) should be one of the \(5\) sporadic polynomials in the conclusion of Theorem 1. This, if true, seems to be a non-trivial fact. We were just able to prove it under the conditions of our theorem.
_Remark 5_.: To make progress on the remaining cases not considered in the theorem (i.e., the cases in which \(r>3\)) it would be necessary to generalize the results of Canaday in Lemma 6. This alone is a non-trivial task. Moreover, even if this task could be done, we would not be able to deduce anything about a characterization of the six other known sporadic perfects. The reason is that these 6 polynomials are _not_ of the form \(B^{2}P_{1}\cdots P_{r}\) (see Lemma 8). Moreover, the 6 remaining known sporadic perfects do _not_ seem to share some other interesting common property. In other words, the more general problem of characterizing _all_ 11 sporadic perfects is highly non-trivial. After several years of work, we have (with Rahavandrainy) [20, 26, 29, 31, 32] merely obtained a characterization of all 11 sporadic perfects in a _very particular_ case. Namely, in the case in which every odd prime divisor \(P_{j}\) of an even perfect polynomial \(A\), is of the special form
\[P_{j}=x^{a_{j}}(x+1)^{b_{j}}+1\]
for some coprime exponents \(a_{j},b_{j}\) (i.e., each \(P_{j}\) is a Mersenne polynomial). Of course, prime divisors of \(A\) need _not_ be Mersenne polynomials.
Theorem 1 is a first (modest) step to study the new case in which we assume that the prime divisors \(P_{j}\) of an even perfect polynomial \(A\) are _not_ necessarily Mersenne polynomials.
Now, let us come back to the case \(r>3\) of our approach. We know that this approach works to characterize the 5 known sporadic perfects of the form \(B^{2}P_{1}\cdots P_{r}\), but it fails to characterize all known sporadic perfects.
However, we may add the following. Essentially, in the proof of the theorem we use properties of the prime factors of general (not necessarily prime) Mersenne polynomials \(M\), i.e., polynomials with the property that \(M+1\) has all its roots in \(\mathbb{F}_{2}\). Now consider binary polynomials \(M_{g}\) with the property that all roots of \(M_{g}+1\) belong to an appropriate non-trivial extension field of \(\mathbb{F}_{2}\) (e.g., belong to \(\mathbb{F}_{4}\)). We believe that understanding the factorization of these _general Mersenne_ polynomials \(M_{g}\) can help to make some progress in the case when \(r>3\). However, even a simple preliminary study of this special case appears to be a difficult non-trivial problem.
Finally, we discuss the following two matters suggested by a referee.
### Choice of \(\mathbb{F}_{2}\) as ground field for the coefficients of our polynomials
The first reason for the choice is that the ring \(\mathbb{F}_{2}[x]\) is considered the closest analogue to the ring of integers \(\mathbb{Z}\) for working on arithmetic problems.
The second (and more important) reason for the choice is the following. We have no analogue of Canaday's results [11] over \(\mathbb{F}_{2}[x]\) for other rings \(\mathbb{F}_{p}[x]\), for \(p\) an odd prime, nor for more general rings \(\mathbb{F}_{q}[x]\) with \(q\) a power of a prime. One reason for this is that the general problem of factorization into irreducible polynomials is much more complex when the characteristic of the ring is \(>2\). This remains true despite the existence of many papers on the subject (see [4, 5, 6, 7, 8, 15, 17, 18, 22, 21, 23, 24, 25, 27, 28, 30]).
### Role of small degree prime factors of even perfect polynomials in the present paper
First, observe that the irreducible polynomials of degree \(5\) or more of \(\mathbb{F}_{2}[x]\) do not play any role in the paper. Why? The simple reason is that the only known perfect polynomials over \(\mathbb{F}_{2}\) are all even and have irreducible factors of degrees \(1,2,3,4\) only (see Lemma 7). Of course, there may exist unknown binary perfect polynomials \(A\) with irreducible factors of any degree, but no such \(A\) is known with degree \(\leq 200\) (see [14]). Moreover, \(\omega(A)\geq 5\) (see [19, 20]). Furthermore, the main results used in the proof, namely the results in Lemma 6, have the following property. They reduce the study of irreducible factors of an even perfect polynomial of any degree to the study of small degree irreducible factors, all of degree less than \(5\).
Even perfect polynomials, by definition, should have at least one linear factor. Indeed, they are divisible by both linear factors \(x\) and \(x+1\). In particular, if they are divisible only by \(2\) irreducible factors they must be a product of a power of \(x\) by a power of \(x+1\). It is easy to prove that in fact the exponents must be equal, and of the form \(2^{n}-1\). Thus, these polynomials coincide with the trivial perfects \(T(n)\) (see also Section 2).
The linear factors \(x,x+1\) appear everywhere in the proof of the theorem. The reason is the following. For each odd irreducible factor \(P\) that exactly divides a binary even perfect \(A\) (i.e., such that \(P\) divides \(A\) but \(P^{2}\) does not divide \(A\)) we have that \(\sigma(P)=P+1\) also divides \(\sigma(A)=A\). Thus, by the definition of an odd polynomial, it is easy to see that \(P+1\) is even, so that \(x(x+1)\) divides \(P+1\).
## 2 A simple computation with binary polynomials
We will work with polynomials over the smallest finite field. Namely, \(\mathbb{F}_{2}=\{0,1\}\). First, let us observe that since the list of all divisors of \(x\) is \([x,1]\), one has \(\sigma(x)=x+1\). By translation \(x\mapsto x+1\), we deduce that \(\sigma(x+1)=(x+1)+1=x+(1+1)=x+0=x\). Now, the property, \(\sigma(AB)=\sigma(A)\sigma(B)\), provided that \(A,B\) are coprime, implies that
\[\sigma(x(x+1))=\sigma(x)\sigma(x+1)=(x+1)x=x(x+1). \tag{9}\]
We have then found the perfect polynomial with the smallest degree \(>0\), namely \(T(1)=x(x+1)\). We can write \(T(1)\) as follows: \(T(1)=x^{2^{1}-1}(x+1)^{2^{1}-1}\). Following the same lines of computation, one proves easily by induction that if \(T(n)=x^{2^{n}-1}(x+1)^{2^{n}-1}\) is perfect, then the same holds for \(T(n+1)\).
Thus, we have infinitely many even perfect polynomials (which we call _trivial_ perfect). Unfortunately, we cannot obtain more perfect polynomials with similar methods. The list of all known perfect polynomials (see Lemma 7) was obtained by computer computations. We believe that this list covers _all_ perfect polynomials. However, we are very far from having a proof (or a disproof) of this. The present paper explores a small part of this problem, using elementary methods, like the preceding computation. We have no choice; there are _no_ (known) more sophisticated methods to treat this problem.
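To let the reader experiment with such computations, the following self-contained C sketch (our illustration, not part of the original text) represents binary polynomials as bitmasks, computes \(\sigma\) by brute-force divisor enumeration, and confirms that \(T(1)\), \(T(2)\), and the sporadic polynomial \(M_{5a}\) of Theorem 1 are perfect:

```
#include <stdio.h>

/* Binary polynomials as bitmasks: bit i holds the coefficient of x^i.
   Addition in F_2[x] is XOR; multiplication is carry-less. */
typedef unsigned int poly;

int pdeg(poly a) { int d = -1; while (a) { d++; a >>= 1; } return d; }

poly pmul(poly a, poly b) {
    poly r = 0;
    while (b) { if (b & 1) r ^= a; a <<= 1; b >>= 1; }
    return r;
}

poly pmod(poly a, poly b) {            /* remainder of a divided by b */
    int db = pdeg(b);
    while (pdeg(a) >= db) a ^= b << (pdeg(a) - db);
    return a;
}

poly sigma(poly a) {                   /* XOR of all divisors of a */
    poly s = 0;
    for (poly d = 1; d <= a; d++)
        if (pmod(a, d) == 0) s ^= d;
    return s;
}

int main(void) {
    poly x = 2, x1 = 3;                          /* x and x+1           */
    poly T1 = pmul(x, x1);                       /* x(x+1)              */
    poly T2 = pmul(pmul(T1, T1), T1);            /* x^3 (x+1)^3         */
    poly M5a = pmul(pmul(x, pmul(x1, x1)), 7);   /* x(x+1)^2 (x^2+x+1)  */
    printf("T(1) perfect: %d\n", sigma(T1) == T1);
    printf("T(2) perfect: %d\n", sigma(T2) == T2);
    printf("M5a  perfect: %d\n", sigma(M5a) == M5a);
    return 0;
}
```

All three checks print \(1\), since any proper divisor of \(A\) has degree \(<\deg A\), so the loop bound \(d\leq a\) covers every divisor.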
## 3 Tools
The following lemma contains a simple (new) observation in part (a), and summarizes some useful results of Canaday [11] in parts (b) to (f).
**Lemma 6**.:
1. _Let_ \(P\) _be prime, and let_ \(n\) _be a positive integer. Then_ \(\sigma(P^{2n})\) _is odd. In particular,_ \(\sigma(C^{2})\) _is odd, for any binary polynomial_ \(C\)_._
2. _If_ \(A=x^{h-1}+x^{h-2}+\cdots+1\) _is a complete polynomial and_ \((x+1)^{r}\) _divides_ \(A\) _but_ \((x+1)^{r+1}\) _does not, then_ \(r=2^{n}-1\) _and_ \(A=(x+1)^{2^{n}-1}B^{2^{n}}\) _where_ \(B\) _is complete._
3. _The only complete and irreducible polynomials of the form_ \(x(x+1)^{\beta}+1\) _are_ \(x^{2}+x+1\) _and_ \(x^{4}+x^{3}+x^{2}+x+1\)_._
4. _The only complete_ \(A=x^{2m}+\cdots+1\) _whose irreducible factors are of the form_ \(x^{\alpha}(x+1)^{\beta}+1\) _are_ \(x^{2}+x+1,x^{4}+x^{3}+x^{2}+x+1,(x^{3}+x+1)(x^{3}+x^{2}+1)\)_._
5. _It is impossible to have_ \(\sigma(x^{2k})=\sigma(P^{2})\) _or, more generally,_ \(\sigma(Q^{2m})=\sigma(P^{2n})\) _for irreducible polynomials_ \(P,Q\in\mathbb{F}_{2}[x]\)_._
6. _The polynomial_ \(P=x(x+1)^{2^{m}-1}+1\) _is irreducible only for_ \(m=1\) _and_ \(m=2\)_._
Proof.: We prove (a). One sees that \(S:=\sigma(P^{2n})\) is a sum of \(2n+1\) nonzero monomials \(P^{k}\). If \(\deg(P)>1\), we have \(P(0)=P(1)=1\) since \(P\) is prime, thus \(S(0)=S(1)=1\). If \(P=x\) then \(S(0)=1\), and \(S(1)=2n+1=1\) in \(\mathbb{F}_{2}\). Similarly, if \(P=x+1\) then \(S(1)=1\), and \(S(0)=2n+1=1\) in \(\mathbb{F}_{2}\). Put \(C=\prod_{j}P_{j}^{n_{j}}\), for some primes \(P_{j}\); thus \(\sigma(C^{2})=\prod_{j}\sigma(P_{j}^{2n_{j}})\) is odd as a product of odd polynomials.
Part (b) is [11, Lemma 1]. Part (c) is [11, Corollary]. Part (d) is [11, Theorem 8]. Likewise, part (e) is [11, Lemma 14], and part (f) is [11, Lemma 2].
The list of all known [11] sporadic perfects follows. Gallardo and Rahavandrainy [19, 20] proved that the list contains all the sporadic perfects \(M\) with \(\omega(M)\leq 4\). The case \(\omega(M)=5\) has been open since 2009.
**Lemma 7**.: _With the primes_
\[Q_{2}:=x^{2}+x+1,\;Q_{3a}:=x^{3}+x+1,\;Q_{3b}:=x^{3}+x^{2}+1,\;Q_{4a}:=x^{4}+ x^{3}+1,\]
\[Q_{4b}:=x^{4}+x^{3}+x^{2}+x+1,Q_{4c}:=x^{4}+x+1;\]
_one has the \(11\) known sporadic perfects. Besides \(M_{20a}\) and \(M_{20b}\), they are the unique sporadic perfects with at most four distinct prime divisors._
\[M_{5a}:=x(x+1)^{2}\cdot Q_{2},M_{5b}:=(x+1)x^{2}\cdot Q_{2},M_{11a}:=x(x+1)^ {2}\cdot Q_{2}^{2}\cdot Q_{4c},\]
\[M_{11b}:=x^{2}(x+1)\cdot Q_{2}^{2}\cdot Q_{4c},M_{11c}:=x^{3}(x+1)^{4}\cdot Q _{4a},M_{11d}:=x^{4}(x+1)^{3}\cdot Q_{4b},\]
\[M_{15a}:=x^{3}(x+1)^{6}\cdot Q_{3a}\cdot Q_{3b},M_{15b}:=x^{6}(x+1)^{3}\cdot Q _{3a}\cdot Q_{3b},M_{16}:=x^{4}(x+1)^{4}\cdot Q_{4a}\cdot Q_{4b},\]
\[M_{20a}:=x^{4}(x+1)^{6}\cdot Q_{3a}\cdot Q_{3b}\cdot Q_{4b},M_{20b}:=x^{6}(x+ 1)^{4}\cdot Q_{3a}\cdot Q_{3b}\cdot Q_{4a}.\]
With the same notations of Lemma 7, the list of the five sporadic perfects of a special form follows.
**Lemma 8**.: _Besides \(M_{20a}\) and \(M_{20b}\) the following polynomials \(A\) are the only sporadic perfects with \(\omega(A)\leq 4\), of the form_
\[A:=B^{2}\cdot S, \tag{10}\]
_where \(B\) is the even polynomial of maximal degree such that \(B^{2}|A\), and \(S\) is a square-free polynomial coprime with \(B\), i.e., one has \(\gcd(B,S)=1\)._
\[M_{5a}=(x+1)^{2}\cdot x\cdot Q_{2},M_{5b}=x^{2}\cdot(x+1)\cdot Q_{2},M_{11a}= ((x+1)Q_{2})^{2}\cdot x\cdot Q_{4c},\]
\[M_{11b}=(xQ_{2})^{2}\cdot(x+1)\cdot Q_{4c},M_{16}=(x^{2}(x+1)^{2})^{2}\cdot Q_ {4a}\cdot Q_{4b}.\]
We easily check the following lemma. It is useful for the proof of the last part of the theorem.
**Lemma 9**.: _Let \(a=2^{n}k\) be an even number, where \(k\) is odd. For any binary polynomial \(A\), and positive integer \(r\), set \(S(A^{r}):=1+A+\cdots+A^{r}\). Then_
\[S(A^{a})+1=A\cdot(A+1)^{2^{n}-1}\cdot S(A^{k-1})^{2^{n}}.\]
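As a quick numerical sanity check (ours, reusing the bitmask representation sketched in Section 2), the identity of Lemma 9 can be verified for small cases, e.g., \(A=x^{2}+x+1\) and \(a=12=2^{2}\cdot 3\):

```
#include <stdio.h>

typedef unsigned long long poly;   /* bitmask binary polynomials */

poly pmul(poly a, poly b) {        /* carry-less multiplication  */
    poly r = 0;
    while (b) { if (b & 1) r ^= a; a <<= 1; b >>= 1; }
    return r;
}

poly ppow(poly a, unsigned e) {
    poly r = 1;
    while (e--) r = pmul(r, a);
    return r;
}

poly S(poly a, unsigned r) {       /* S(A^r) = 1 + A + ... + A^r */
    poly s = 0;
    for (unsigned i = 0; i <= r; i++) s ^= ppow(a, i);
    return s;
}

int main(void) {
    poly A = 7;                    /* A = x^2 + x + 1              */
    unsigned n = 2, k = 3;         /* a = 2^n * k = 12, with k odd */
    unsigned a = (1u << n) * k;
    poly lhs = S(A, a) ^ 1;        /* S(A^a) + 1                   */
    poly rhs = pmul(pmul(A, ppow(A ^ 1, (1u << n) - 1)),  /* A ^ 1 equals A + 1 over F_2 */
                    ppow(S(A, k - 1), 1u << n));
    printf("Lemma 9 holds for A = x^2+x+1, a = 12: %d\n", lhs == rhs);
    return 0;
}
```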
## 4 Proof of Theorem 1
Remember that \(r\) is the number of odd prime divisors of the even perfect polynomial \(A\). We consider the cases \(r=1\), \(r=2\), and \(r=3\). In each of them we will work on the equality
\[A=\sigma(A),\]
with both \(A\) and \(\sigma(A)\) explicitly factored as products of primes in \(\mathbb{F}_{2}[x]\). We apply our lemmas from Section 3 to prove the result in each of these cases. Essentially, our method consists of using the uniqueness of the factorization into primes in the ring \(\mathbb{F}_{2}[x]\).
We assume that \(r=1\). Thus, for some prime \(P_{1}\) one has
\[\sigma(B^{2})(P_{1}+1)=B^{2}P_{1}. \tag{11}\]
Since \(\gcd(B^{2},\sigma(B^{2}))=1\) and \(P_{1}\) is prime, (11) implies that \(\sigma(B^{2})=P_{1}\) and \(B^{2}=P_{1}+1\). Thus, \(P_{1}=(1+B)^{2}\), which is impossible since a square is not prime. Therefore, this case does not happen.
We assume that \(r=2\). For some primes \(P_{1},P_{2}\) we have
\[\sigma(B^{2})(P_{1}+1)(P_{2}+1)=B^{2}P_{1}P_{2}. \tag{12}\]
Equation (12) can also be written as
\[P_{1}P_{2}(B^{2}+\sigma(B^{2}))=(P_{1}+P_{2}+1)\sigma(B^{2}). \tag{13}\]
Since \(\gcd(\sigma(B^{2}),B^{2})=1\), (12) implies that \(\sigma(B^{2})\mid P_{1}P_{2}\).
Case 1. We can assume that \(\sigma(B^{2})=P_{1}\). Thus, \(\omega(B^{2})=1\). Therefore, \(A=B^{2}P_{1}P_{2}\) is an even perfect polynomial with \(\omega(A)=3\). This implies that \(A\in\{M_{5a},M_{5b}\}\), by Lemma 8 and Lemma 7.
Case 2. We have then
\[\sigma(B^{2})=P_{1}P_{2}. \tag{14}\]
Since \(B^{2}\) is an even square, (14) together with Lemma 6 (a) imply that both \(P_{1}\) and \(P_{2}\) are odd. As before, (14) implies that \(\omega(B^{2})\leq 2\), so that \(A\) is an even perfect polynomial with \(\omega(A)\leq 4\). By Lemma 8 and Lemma 7, the only possibility is \(A=M_{16}\), for which \(B=x^{2}(x+1)^{2},P_{1}=x^{4}+x^{3}+x^{2}+x+1,P_{2}=x^{4}+x^{3}+1\).
We assume now that \(r=3\). We have then
\[\sigma(B^{2})(P_{1}+1)(P_{2}+1)(P_{3}+1)=B^{2}P_{1}P_{2}P_{3}. \tag{15}\]
Case 1. We have \(\omega(\sigma(B^{2}))=1\), say \(\sigma(B^{2})=P_{1}\). Thus, as before, \(\omega(B^{2})=1\). This implies that \(\omega(A)=4\). By Lemma 8, this case does not happen.
Case 2. We have \(\omega(\sigma(B^{2}))=2\). If \(\omega(B)=1\), as before, there is no solution by Lemma 8. We assume then that \(\omega(B)=2\). One sees that \(\omega(A)=5\) now, thus we cannot deduce the result from Lemma 8 again. In fact, we do not know if \(M_{20a}\) and \(M_{20b}\) are the unique even perfects \(M\) with \(\omega(M)=5\).
We have, by Lemma 6(a), and without loss of generality, that for odd primes \(P_{1},P_{2}\), for primes \(R_{1}\neq R_{2}\), and for positive integers \(a_{1},a_{2}\) the following holds.
\[\sigma(B^{2})=P_{1}P_{2}\text{, and }B=R_{1}^{a_{1}}R_{2}^{a_{2}}. \tag{16}\]
Moreover, (15) becomes
\[(P_{1}+1)(P_{2}+1)(P_{3}+1)=B^{2}P_{3}. \tag{17}\]
Assume that \(P_{3}\) is even. If \(P_{3}=x\), since \(\gcd(P_{3},B)=1\), and \(B\) is even, we have that \(R_{1}=x+1\), and \(R_{2}\) is odd. Moreover, \(P_{1}\) and \(P_{2}\) are odd, hence comparing valuations in (17) gives \(v_{x}((P_{1}+1)(P_{2}+1)(P_{3}+1))\geq 2\), while \(v_{x}(B^{2}P_{3})=1\). Thus, \(P_{3}\neq x\). By the translation \(x\mapsto x+1\), \(P_{3}\neq x+1\). Therefore, \(\deg(P_{3})>1\). Since \(P_{1},P_{2},P_{3}\) are all odd, it follows from (17) that, say, \(R_{1}=x\) and \(R_{2}=x+1\), \(B\) is even, \(\gcd(B,P_{3})=1\), and \(\omega(B)=2\). It follows from (16) that we can take \(\sigma(x^{2a_{1}})=P_{1}\) and \(\sigma((x+1)^{2a_{2}})=P_{2}\), so that
\[P_{1}+1=x(1+x+\cdots+x^{2a_{1}-1}), \tag{18}\]
and
\[P_{2}+1=x(1+x+\cdots+x^{2a_{2}-1}). \tag{19}\]
From (18) and (19) we get \(v_{x}(P_{1}+1)=v_{x}(P_{2}+1)=1\). Since \(x,x+1\) and \(P_{3}\) are the only primes that divide \(B^{2}P_{3}\), we can assume that, say, \(P_{3}\mid P_{1}+1\) and \(P_{3}\nmid P_{2}+1\). Write, \(P_{1}+1=x^{c_{1}}(x+1)^{c_{2}}P_{3}\), \(P_{2}+1=x^{d_{1}}(x+1)^{d_{2}}\), and \(P_{3}+1=x^{e_{1}}(x+1)^{e_{2}}\). From (18) and (19) we get \(c_{1}=1\) and \(d_{1}=1\).
Since \(P_{1}=1+x(x+1)^{c_{2}}P_{3}\) we have from (18)
\[(P_{1}+1)/x=\sigma(x^{2a_{1}-1})=1+x+\cdots+x^{2a_{1}-1}=(x+1)^{c_{2}}P_{3}. \tag{20}\]
Thus, \((x+1)^{c_{2}}P_{3}\) is complete. It follows from Lemma 6(b) that \(c_{2}=2^{n}-1\) for some positive integer \(n\), since \(P_{1}\) is odd. In other words, for \(K\) complete, we have the following equality:
\[(P_{1}+1)/x=(x+1)^{2^{n}-1}K^{2^{n}}. \tag{21}\]
It follows from (21) and (20) that \(P_{3}=K^{2^{n}}\). Hence, \(n=0\). This is impossible. Therefore, Case 2 does not happen.
Case 3. We have \(\omega(\sigma(B^{2}))=3\). Thus, we consider again (15), i.e.,
\[\sigma(B^{2})(P_{1}+1)(P_{2}+1)(P_{3}+1)=B^{2}P_{1}P_{2}P_{3}. \tag{22}\]
Equation (22) implies immediately
\[\sigma(B^{2})=P_{1}P_{2}P_{3}, \tag{23}\]
and
\[(P_{1}+1)(P_{2}+1)(P_{3}+1)=B^{2}. \tag{24}\]
Since \(P_{1},P_{2}\) and \(P_{3}\) are all odd, (24) implies that \(x(x+1)\mid B\). In particular, \(\omega(B)\geq 2\). Since \(B^{2}\) is an even square, \(\sigma(B^{2})\) is odd, so that (23) implies that \(P_{1},P_{2}\) and \(P_{3}\) are all odd. Thus, (23) implies that \(\omega(B)=\omega(B^{2})<4\).
If \(\omega(B)=2\), one has \(B=x^{a}(x+1)^{b}\) for positive integers \(a,b\). Since \(P_{1}P_{2}P_{3}\) is square free, we can assume from (23) that, say
\[\sigma(x^{2a})=P_{3}\;,\;\mbox{and}\;\sigma((x+1)^{2b})=P_{1}P_{2}. \tag{25}\]
Moreover, since \(P_{3}\) is odd, for some positive integers \(c,d\) we have
\[1+P_{3}=x^{c}(x+1)^{d}. \tag{26}\]
From (25) and (26) we obtain \(c=1\), since \(1+P_{3}=x(1+\cdots+x^{2a-1})\). Putting \(K_{3}=(1+P_{3})/x\), one sees that
\[K_{3}=1+\cdots+x^{2a-1}=x^{c-1}(x+1)^{d}=(x+1)^{d}. \tag{27}\]
Equation (27) says that \(K_{3}\) is complete, thus, as before, Lemma 6(b) implies that for some positive integer \(n\) one has \(d=2^{n}-1\), and \(K_{3}=(x+1)^{2^{n}-1}C^{2^{n}}\), with \(C\) complete. This forces \(C=1\). Hence,
\[P_{3}=1+x(x+1)^{2^{n}-1}. \tag{28}\]
Since \(P_{3}\) is prime, Lemma 6(c) implies that
\[P_{3}\in\{x^{2}+x+1,x^{4}+x^{3}+x^{2}+x+1\}. \tag{29}\]
Assume that \(P_{3}=x^{2}+x+1\). Thus, from \(\sigma(x^{2a})=P_{3}\) we get \(a=1\). In other words, \(B=x(x+1)^{b}\). From (24) and (25) we obtain
\[(P_{1}+1)(P_{2}+1)=x(x+1)^{2b-1}. \tag{30}\]
Equation (30) is impossible since \(v_{x}((P_{1}+1)(P_{2}+1))\geq 2\), while \(v_{x}(x(x+1)^{2b-1})=1\). Thus \(P_{3}\neq x^{2}+x+1\). Assume then that we have \(P_{3}=x^{4}+x^{3}+x^{2}+x+1\). We
claim that \(A=M_{20b}\) (\(M_{20a}\) is obtained by the same method, switching \(x\) and \(x+1\)). In order to prove the claim, observe that \(P_{3}+1=x(x+1)^{3}\), thus (24) becomes
\[(P_{1}+1)(P_{2}+1)=x^{2a-1}(x+1)^{2b-3}. \tag{31}\]
From \(\sigma(x^{2a})=P_{3}\) we get \(a=2\). This, together with (25), gives \(B=x^{2}(x+1)^{b}\) and
\[(P_{1}+1)(P_{2}+1)=x^{3}(x+1)^{2b-3}. \tag{32}\]
In (32), we can take, with positive integers \(b_{1},b_{2}\), where \(b_{2}\) is odd since \(P_{2}\) is not a square, and \(b_{1}\) is even since \(b_{1}+b_{2}=2b-3\) is odd. Thus,
\[P_{1}+1=x(x+1)^{b_{1}},P_{2}+1=x^{2}(x+1)^{b_{2}}. \tag{33}\]
Since \(\sigma((x+1)^{2b})\) is complete in \(x+1\), and since \(P_{1},P_{2}\) are Mersenne one sees that (25) together with Lemma 6(d) implies that
\[\sigma((x+1)^{2b})\in\{x^{2}+x+1,x^{4}+x^{3}+1,(x^{3}+x+1)(x^{3}+x^{2}+1)\}. \tag{34}\]
But \(\omega(\sigma((x+1)^{2b}))=2\), since \(\sigma((x+1)^{2b})=P_{1}P_{2}\). Thus, the only possibility allowed by (34) is that \(\sigma((x+1)^{2b})=(x^{3}+x+1)(x^{3}+x^{2}+1)\). Therefore, \(P_{1}=x^{3}+x+1,P_{2}=x^{3}+x^{2}+1\), i.e., \(b=3\). Thus, \(B=x^{2}(x+1)^{3}\). In other words, we have
\[B^{2}P_{1}P_{2}P_{3}=M_{20b}. \tag{35}\]
This finishes the case in which \(\omega(B)=2\).
We claim that the remaining case, namely \(\omega(B)=3\) does not happen. To prove the claim, we assume that, on the contrary, \(B=R_{1}^{a_{1}}R_{2}^{a_{2}}R_{3}^{a_{3}}\) with some positive integers \(a_{1},a_{2},a_{3}\). Observe that the perfect polynomial \(A=B^{2}P_{1}P_{2}P_{3}\) has \(\omega(A)=6\) so that, as before, we cannot rely on Lemma 8 for the proof. But, we can, and do, assume that \(R_{1}=x\), \(R_{2}=x+1\), and that \(R_{3}\) is odd, since \(x(x+1)\mid B\) (see (24)). Thus, (23) becomes
\[\sigma(x^{2a_{1}})\sigma(x^{2a_{2}})\sigma(R_{3}^{2a_{3}})=P_{1}P_{2}P_{3}. \tag{36}\]
Since \(P_{1}P_{2}P_{3}\) is square-free, the three factors on the left-hand side of (36) are pairwise coprime, so that we can take
\[\sigma(x^{2a_{1}})=P_{1},\sigma(x^{2a_{2}})=P_{2},\sigma(R_{3}^{2a_{3}})=P_{3}. \tag{37}\]
Put, \(2a_{1}=2^{n_{1}}k_{1},2a_{2}=2^{n_{2}}k_{2},2a_{3}=2^{n_{3}}k_{3}\), for odd numbers \(k_{1},k_{2},k_{3}\). From Lemma 9 we get
\[P_{1}+1=\sigma(x^{2a_{1}})+1=x(x+1)^{2^{n_{1}}-1}(1+x+\cdots+x^{k_{1}-1})^{2^{ n_{1}}}. \tag{38}\]
\[P_{2}+1=\sigma(x^{2a_{2}})+1=x(x+1)^{2^{n_{2}}-1}(1+x+\cdots+x^{k_{2}-1})^{2^{ n_{2}}}. \tag{39}\]
\[P_{3}+1=\sigma(R_{3}^{2a_{3}})+1=R_{3}(R_{3}+1)^{2^{n_{3}}-1}(1+R_{3}+\cdots+R _{3}^{k_{3}-1})^{2^{n_{3}}}. \tag{40}\]
On the other hand, (24) implies
\[P_{1}+1=x^{u_{1}}(x+1)^{u_{2}}R_{3}^{u_{3}}, \tag{41}\]
\[P_{2}+1=x^{v_{1}}(x+1)^{v_{2}}R_{3}^{v_{3}}, \tag{42}\]
and
\[P_{3}+1=x^{w_{1}}(x+1)^{w_{2}}R_{3}^{w_{3}}. \tag{43}\]
Assume, first, that \(k_{1}=k_{2}=1\). Thus, from (41) and (42) we get
\[P_{1}+1=x(x+1)^{2^{n_{1}}-1}, \tag{44}\]
and
\[P_{2}+1=(x+1)x^{2^{n_{2}}-1}. \tag{45}\]
But from (40) and (43) we have \(w_{3}=1\). Thus, (44) and (45) imply that
\[v_{R_{3}}((P_{1}+1)(P_{2}+1)(P_{3}+1))=1. \tag{46}\]
Clearly, (46) contradicts (24). Thus, the case \(k_{1}=k_{2}=1\) does not happen.
We claim that the case \(k_{1}>1\) and \(k_{2}>1\) also does not happen.
Since from (24) \((P_{1}+1)(P_{2}+1)(P_{3}+1)=B^{2}\), and since \(\omega(B)=3\), one has, for all \(j\), \(2\leq\omega(P_{j}+1)\leq 3\). Moreover, one sees that \(\omega(P_{1}+1)=2\) is equivalent to \(k_{1}=1\), and \(\omega(P_{2}+1)=2\) is equivalent to \(k_{2}=1\). Thus, \(k_{1}>1\) and \(k_{2}>1\) forces \(R_{3}=1+x+\cdots+x^{k_{1}-1}\) and \(R_{3}=1+(x+1)+\cdots+(x+1)^{k_{2}-1}\). In other words, we have \(\sigma(x^{k_{1}-1})=\sigma((x+1)^{k_{2}-1})\). This is impossible by Lemma 6(e).
By the same argument, one sees that there remain only two possibilities, either Case A or Case B:
Case A. One has \(k_{1}=1\), \(k_{2}>1\), and \(R_{3}=1+(x+1)+\cdots+(x+1)^{k_{2}-1}\).

Case B. One has \(k_{1}>1\), \(k_{2}=1\), and \(R_{3}=1+x+\cdots+x^{k_{1}-1}\).
We now work Case A: We have \(2a_{1}=2^{n_{1}}\), with \(n_{1}\geq 1\). We have \(P_{1}=1+x(x+1)^{2^{n_{1}}-1}\). It follows from Lemma 6(f) that \(n_{1}\in\{1,2\}\), i.e., that \(a_{1}\in\{1,2\}\). Thus,
\[P_{1}\in\{x^{2}+x+1,x^{4}+x^{3}+x^{2}+x+1\}.\]
Case A1. Assume that \(P_{1}=x^{2}+x+1\). Thus, \(n_{1}=1=a_{1}\), so that \(B=x(x+1)^{a_{2}}R_{3}^{a_{3}}\). Thus, (24) becomes
\[(P_{2}+1)(P_{3}+1)=x(x+1)^{2a_{2}-1}R_{3}^{2a_{3}}. \tag{47}\]
We now recall that (37) implies \(\sigma(x^{2})=P_{1},\sigma((x+1)^{2a_{2}})=P_{2}\), and \(\sigma(R_{3}^{2a_{3}})=P_{3}\), with \(2a_{2}=2^{n_{2}}k_{2},2a_{3}=2^{n_{3}}k_{3}\), where \(k_{2}>1\) is odd and \(k_{3}\geq 1\) is odd.
But \(P_{2}\) and \(P_{3}\) are both odd, thus \(v_{x}((P_{2}+1)(P_{3}+1))\geq 2\), while (47) implies that \(v_{x}(x(x+1)^{2a_{2}-1}R_{3}^{2a_{3}})=1\). This is impossible. Thus, Case A1 does not happen.
Case A2. Assume that \(P_{1}=x^{4}+x^{3}+x^{2}+x+1\). Thus, \(a_{1}=2\), so that \(B=x^{2}(x+1)^{a_{2}}R_{3}^{a_{3}}\). Thus, after division of both sides by \(x(x+1)^{3}\), equation (24) becomes
\[x^{3}(x+1)^{2a_{2}-3}R_{3}^{2a_{3}}=(P_{2}+1)(P_{3}+1), \tag{48}\]
with \(a_{2}\geq 2\). Here, we have from (37), \(\sigma(x^{4})=P_{1},\sigma((x+1)^{2a_{2}})=P_{2}\), and \(\sigma(R_{3}^{2a_{3}})=P_{3}\). By (39) and (40) we have \(v_{R_{3}}(P_{2}+1)=2^{n_{2}}\) and \(v_{R_{3}}(P_{3}+1)=1\).
Thus, \(v_{R_{3}}((P_{2}+1)(P_{3}+1))=v_{R_{3}}(P_{2}+1)+v_{R_{3}}(P_{3}+1)=2^{n_{2}}+1\). On the other hand, from (48) we obtain \(v_{R_{3}}(x^{3}(x+1)^{2a_{2}-3}R_{3}^{2a_{3}})=2a_{3}\). Thus, \(2a_{3}=2^{n_{2}}+1\). This is impossible, thus Case A2 does not happen.
Thus, Case A does not happen.
Case B. We have now, \(k_{2}=1\) and \(k_{1}>1\). Thus, \(2a_{2}=2^{n_{2}}\), \(2a_{1}=2^{n_{1}}k_{1}\) and
\[P_{1}=1+x(x+1)^{2^{n_{1}}-1}(1+\cdots+x^{k_{1}-1})^{2^{n_{1}}}.\]
Since \(k_{2}=1\) one has
\[P_{2}=1+x^{2^{n_{2}}-1}(x+1). \tag{49}\]
Since \(P_{2}\) is prime, Lemma 6(f), (49), and switching \(x\) and \(x+1\) gives \(n_{2}\in\{1,2\}\). If \(n_{2}=1\) then \(a_{2}=1\) so that \(P_{2}=x^{2}+x+1\), while if \(n_{2}=2\) then \(a_{2}=2\) and \(P_{2}=x^{4}+x^{3}+1\).
Case B1. We have \(P_{2}+1=x(x+1)\). In particular, \(k_{2}=1\) and \(n_{2}=1\). More precisely, we have \(2a_{1}=2^{n_{1}}k_{1},2a_{2}=2^{n_{2}}k_{2}=2,2a_{3}=2^{n_{3}}k_{3}\).
We have also
\[P_{1}+1=x(x+1)^{2^{n_{1}}-1}(1+\cdots+x^{k_{1}-1})^{2^{n_{1}}},\]
and
\[P_{3}+1=R_{3}(R_{3}+1)^{2^{n_{3}}-1}(1+\cdots+R_{3}^{k_{3}-1})^{2^{n_{3}}}.\]
We have thus, by definition of \(B\)
\[B^{2}=x^{2^{n_{1}}k_{1}}(x+1)^{2}R_{3}^{2^{n_{3}}k_{3}}. \tag{50}\]
Divide now both sides of (24) by \(x(x+1)=P_{2}+1\) to get
\[(P_{1}+1)(P_{3}+1)=x^{2^{n_{1}}k_{1}}(x+1)R_{3}^{2^{n_{3}}k_{3}}. \tag{51}\]
Since \(P_{3}\) and \(P_{1}\) are odd primes (51) implies
\[2\leq v_{x+1}((P_{1}+1)(P_{3}+1))=v_{x+1}(x^{2^{n_{1}}k_{1}}(x+1)R_{3}^{2^{n_{3 }}k_{3}})=1. \tag{52}\]
Since (52) is impossible, we obtain that Case B1 does not happen.
Case B2. Here, \(P_{2}+1=x^{3}(x+1)\). In particular, \(k_{2}=1\) and \(n_{2}=2\). More precisely, we have \(2a_{2}=2^{n_{2}}k_{2}=4\). As before, we have by definition of \(B\)
\[B^{2}=x^{2a_{1}}(x+1)^{4}R_{3}^{2a_{3}}. \tag{53}\]
Divide now both sides of (24) by \(x^{3}(x+1)=P_{2}+1\) to get
\[x^{2a_{1}-3}(x+1)^{3}R_{3}^{2a_{3}}=(P_{1}+1)(P_{3}+1). \tag{54}\]
We have now
\[P_{1}+1=x(x+1)^{2^{n_{1}}-1}(1+\cdots+x^{k_{1}-1})^{2^{n_{1}}},\]
\[P_{3}+1=R_{3}(R_{3}+1)^{2^{n_{3}}-1}(1+\cdots+R_{3}^{k_{3}-1})^{2^{n_{3}}}.\]
Computing the valuation in \(R_{3}\) in both sides of (54) we obtain
\[2a_{3}=2^{n_{1}}+1. \tag{55}\]
Since (55) is impossible, we obtain that Case B2 does not happen. This finishes the proof that the case \(\omega(B)=3\) does not happen. Thus, we have proved the theorem.
2305.12942 | Partitioning zero-divisor graphs of finite commutative rings into global
defensive alliances | For a commutative ring $R$ with identity, the zero-divisor graph of $R$,
denoted $\Gamma(R)$, is the graph whose vertices are the non-zero zero divisors
of $R$ with two distinct vertices $x$ and $y$ are adjacent if and only if
$xy=0$. In this paper, we are interested in partitioning the vertex set of
$\Gamma(R)$ into global defensive alliances for a finite commutative ring $R$.
This problem has been well investigated in graph theory. Here we connected it
with the ring theoretical context. We characterize various commutative finite
rings for which the zero divisor graph is partitionable into global defensive
alliances. We also give several examples to illustrate the scopes and limits of
our results. | Driss Bennis, Brahim El Alaoui | 2023-05-22T11:43:42Z | http://arxiv.org/abs/2305.12942v1 | # Partitioning zero-divisor graphs of finite commutative rings into global defensive alliances
**Driss Bennis\({}^{1,a}\), Brahim El Alaoui\({}^{1,b}\)**
\({}^{1}\) Department of Mathematics, Faculty of Sciences, Mohammed V University in Rabat, Morocco.
\({}^{a}\) [email protected]; [email protected];
\({}^{b}\) [email protected]; [email protected]
**Abstract.**
For a commutative ring \(R\) with identity, the zero-divisor graph of \(R\), denoted \(\Gamma(R)\), is the graph whose vertices are the non-zero zero divisors of \(R\) with two distinct vertices \(x\) and \(y\) are adjacent if and only if \(xy=0\). In this paper, we are interested in partitioning the vertex set of \(\Gamma(R)\) into global defensive alliances for a finite commutative ring \(R\). This problem has been well investigated in graph theory. Here we connected it with the ring theoretical context. We characterize various commutative finite rings for which the zero divisor graph is partitionable into global defensive alliances. We also give several examples to illustrate the scopes and limits of our results.
**Key words and phrases:** Zero-divisor graph, defensive alliance, dominating set, partitioning a zero-divisor graph.
**2020 Mathematics Subject Classification :** 13M05, 05C25
## 1 Introduction
Within this paper, \(R\) will be a commutative ring with \(1\neq 0\), \(Z(R)\) its set of zero-divisors, and \(U(R)\) its set of units. Let \(x\) be an element of \(R\); the annihilator of \(x\) is defined as \(\mbox{Ann}(x):=\{y\in R|\ xy=0\}\). For an ideal \(I\) of \(R\), \(\sqrt{I}\) denotes the radical of \(I\).
An element \(x\) of \(R\) is called nilpotent if \(x^{n}=0\) for some positive integer \(n\). The set of all nilpotent elements is denoted \(\mathrm{Nil}(R):=\sqrt{0}\). A ring \(R\) is called reduced if \(\mathrm{Nil}(R)=\{0\}\). The ring \(\mathbb{Z}/n\mathbb{Z}\) of the residues modulo an integer \(n\) will be denoted by \(\mathbb{Z}_{n}\). For a subset \(X\) of \(R\), we denote \(X^{*}=X\setminus\{0\}\). For any real number \(r\), let \(\lceil r\rceil\) (resp., \(\lfloor r\rfloor\)) denote the ceiling of \(r\), that is, the least integer greater than or equal to \(r\) (resp., the floor of \(r\), that is, the greatest integer less than or equal to \(r\)).
We assume the reader has at least a basic familiarity with zero-divisor graph theory. For general background on zero-divisor graph theory, we refer the reader to [3, 4, 5, 6, 7, 8, 1, 9, 10]. The concept of the zero-divisor graph of a commutative ring was first introduced by Beck [10] to investigate the structure of commutative rings. For a given commutative ring \(R\), Beck's zero-divisor graph is a simple graph with vertex set all elements of \(R\), such that two distinct vertices \(x\) and \(y\) are adjacent if and only if \(xy=0\). Beck was mainly interested in colorings. In 1999, Anderson and Livingston defined a simplified version \(\Gamma(R)\) of Beck's zero-divisor graph by including only nonzero zero-divisors of \(R\) in the vertex set and leaving the definition of edges the same [4]. The reason for this simplification was to better capture the essence of the zero-divisor structure of the ring. Several properties of \(\Gamma(R)\) have been investigated, such as connectedness, diameter, girth, chromatic number, etc. [4, 1]. In addition, the isomorphism problem for such graphs has been solved for finite reduced rings [3]. Several authors have also investigated rings \(R\) whose graph \(\Gamma(R)\) belongs to a certain family of graphs, such as star graphs [1], complete graphs [4], complete \(r\)-partite graphs and planar graphs [2, 25].
This paper deals with defensive alliance notions of graphs. In a graph \(\Gamma\), a nonempty set of vertices \(S\) is called a defensive alliance if any vertex \(v\) in \(S\) has at least one more neighbor in \(S\) than it has in the complement of \(S\) (see Section 2 for more definitions of related notions). These notions were motivated by the study of alliances between different parties in a population. Since their introduction by Kristiansen, Hedetniemi, and Hedetniemi in [18, 19], alliances have attracted the attention of many authors. Recently, several authors have studied these notions in the context of zero-divisor graphs of finite commutative rings (see for instance [12, 13, 20]). In this paper, we focus our attention on the problem of partitioning zero-divisor graphs of certain kinds of finite commutative rings into global defensive alliances. This problem has been the subject of several studies for arbitrary graphs (see for instance [23, 24, 26]).
This paper is organized as follows:
In Section 2, we recall the notion of a global defensive alliance in graphs as well as some related notions.
In Section 3, we study when zero-divisor graphs of some kind of direct products of finite fields with finite local rings are partitionable into global defensive alliances, and we calculate the global defensive alliance partition number, \(\psi_{g}(\Gamma(R))\), for each one of them. We also give complete characterizations for partitioning zero-divisor graphs of finite rings with \(\gamma_{a}(\Gamma(R))=1,2\). Namely, we prove that a zero-divisor graph, \(\Gamma(R)\), of a finite ring \(R\) with \(\gamma_{a}(\Gamma(R))=1\) is partitionable into global defensive alliances if and only if \(R\) is isomorphic to one of the rings \(\mathbb{Z}_{2}\times\mathbb{Z}_{2}\), \(\mathbb{Z}_{9}\), \(\mathbb{Z}_{3}[X]/(X^{2})\) (see Theorem 3.19), and we prove that for a zero-divisor graph of a finite ring with \(\gamma_{a}(\Gamma(R))=2\), \(\Gamma(R)\) is partitionable into global defensive alliances if and only if \(R\) is isomorphic to one of the rings \(\mathbb{Z}_{2}\times\mathbb{Z}_{4}\), \(\mathbb{Z}_{2}\times\mathbb{Z}_{2}[X]/(X^{2})\), \(\mathbb{Z}_{3}\times\mathbb{Z}_{3}\), \(\mathbb{Z}_{3}\times\mathbb{F}_{4}\), \(\mathbb{Z}_{25}\), \(\mathbb{Z}_{5}[X]/(X^{2})\) and \(\mathbb{F}_{4}\times\mathbb{F}_{4}\) (see Theorem 3.21 and Corollary 3.22).
## 2 Preliminaries
In this section we deal with the alliance notion for graphs. We assume some familiarity with basic concepts in graph theory. For the convenience of the reader, we recall this notion together with some related ones, including the dominating set, a central concept in graph theory; indeed, many interesting properties related to it remain in the spotlight of current research (see for instance [14, 15, 16]).
Let \(\Gamma=(V,E)\) be a finite simple graph, that is, a finite graph without loops or multiple edges. Then, we will use the notation \(x-y\) to mean the edge between the two adjacent vertices \(x\) and \(y\).
Let \(x\in V\) be a vertex in \(\Gamma\), the open neighborhood of \(x\) is defined as \(N(x):=\{y\in V|\ x-y\in E\}\), and the closed neighborhood of \(x\) is defined by \(N[x]:=N(x)\cup\{x\}\). In general, for a nonempty subset \(S\subseteq V\), the open neighborhood of \(S\) is defined as \(N(S)=\cup_{x\in S}N(x)\) and its closed neighborhood by \(N[S]=N(S)\cup S\).
The degree of the vertex \(x\in V\), denoted by \(deg(x)\), is the cardinality of its open neighborhood. Namely, \(deg(x):=|N(x)|\). In general, for every nonempty subset \(S\subseteq V\) and every vertex \(x\in S\), we define the degree of \(x\) over \(S\) as \(deg_{S}(x):=|S\cap N(x)|\). So, \(deg_{V}(x)=deg(x)\).
A nonempty set \(S\subseteq V\) is a dominating set in \(\Gamma\) if for every vertex \(v\in\bar{S}\), \(deg_{S}(v)>0\). The domination number of \(\Gamma\), denoted \(\gamma(\Gamma)\), is the minimum cardinality of a dominating set in \(\Gamma\).
A non-empty set of vertices \(S\subseteq V\) is called a defensive alliance if for every \(x\in S\), \(|N[x]\cap S|\geq|N(x)\cap\bar{S}|\), in other words, \(deg_{S}(x)+1\geq deg_{\bar{S}}(x)\), where \(\bar{S}=V\setminus S\) (i.e., \(\bar{S}\) is the complement of \(S\) in \(V\)), or equivalently \(deg(x)+1\geq 2deg_{\bar{S}}(x)\).
A defensive alliance \(S\) is called strong if for every vertex \(x\in S\), \(|N[x]\cap S|>|N(x)\cap\bar{S}|\), in other words, \(deg_{S}(x)\geq deg_{\bar{S}}(x)\). In this case we say that every vertex in \(S\) is strongly defended. A defensive alliance \(S\) is global if it forms a dominating set.
The global defensive alliance partition number of \(\Gamma\), denoted \(\psi_{g}(\Gamma)\), is defined to be the maximum number of sets in a partition of \(V\) such that each set is a global defensive alliance. A graph \(\Gamma\) is partitionable into global defensive alliances if \(\psi_{g}(\Gamma)\geq 2\).
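For readers who want to experiment with these definitions, the following short Python sketch (ours, not part of the original development) checks the two defining conditions on the zero-divisor graph \(\Gamma(\mathbb{Z}_{n})\); the last lines verify the partition of \(\Gamma(\mathbb{Z}_{9})\) used in Example 3.2 below.

```python
# Illustration (ours, not part of the original text): check whether given
# vertex sets are global defensive alliances in Gamma(Z_n).

def zero_divisor_graph(n):
    """Adjacency sets of Gamma(Z_n): x ~ y iff x != y and x*y = 0 (mod n)."""
    verts = [x for x in range(1, n) if any(x * y % n == 0 for y in range(1, n))]
    return {x: {y for y in verts if y != x and x * y % n == 0} for x in verts}

def is_global_defensive_alliance(adj, S):
    S = set(S)
    dominating = all(adj[v] & S for v in adj if v not in S)   # deg_S(v) > 0
    defensive = all(len(adj[x] & S) + 1 >= len(adj[x] - S) for x in S)
    return dominating and defensive

adj = zero_divisor_graph(9)  # Gamma(Z_9): vertices {3, 6}, single edge 3 - 6
print(is_global_defensive_alliance(adj, {3}),
      is_global_defensive_alliance(adj, {6}))  # True True, so psi_g(Gamma(Z_9)) = 2
```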
## 3 Partitioning into global defensive alliances
In this section we study when zero-divisor graphs of some kind of commutative rings can be partitioned into global defensive alliances and calculate their global defensive alliance partition numbers. We start by giving a lower bound on the cardinality of the set of zero-divisors \(Z(R)\) in terms of the global defensive alliance partition number of \(\Gamma(R)\).
**Proposition 3.1**: _Let \(R\) be a ring. Then,_
\[|Z(R)|\geq\psi_{g}(\Gamma(R))^{2}-\psi_{g}(\Gamma(R))+1.\]
**Proof.** If \(\Gamma(R)\) is not partitionable into global defensive alliances, then \(\psi_{g}(\Gamma(R))=1\) and the inequality reads \(|Z(R)|\geq 1^{2}-1+1=1\), which is true. So, we may assume that \(\Gamma(R)\) is partitionable into global defensive alliances. Let \(r=\psi_{g}(\Gamma(R))\) and \(\{S_{1},\ldots,S_{r}\}\) be a partition of \(\Gamma(R)\) into global defensive alliances. Let \(i\in\llbracket 1;r\rrbracket\) and \(x\in S_{i}\). Since every \(S_{j}\) with \(j\neq i\) is a dominating set, \(x\) has a neighbor in each of them, so \(deg_{\bar{S_{i}}}(x)\geq r-1\). Since \(S_{i}\) is a defensive alliance, \(|S_{i}|-1\geq deg_{S_{i}}(x)\geq deg_{\bar{S_{i}}}(x)-1\geq r-2\). Thus, \(|S_{i}|\geq r-1\) and hence \(|Z(R)|-1=\sum_{i=1}^{r}|S_{i}|\geq r^{2}-r\). Hence, \(|Z(R)|\geq\psi_{g}(\Gamma(R))^{2}-\psi_{g}(\Gamma(R))+1\).
The following example shows that this bound is sharp.
**Example 3.2**: _Let \(R=\mathbb{Z}_{9}\). Then, the zero-divisor graph is just an edge joining \(\bar{3}\) and \(\bar{6}\). So, \(\{S_{1},S_{2}\}\) where \(S_{1}=\{\bar{3}\}\) and \(S_{2}=\{\bar{6}\}\), is a partition of \(\Gamma(R)\) into two global defensive alliances. Then, \(\psi_{g}(\Gamma(R))=2\) and so \(|Z(R)|=\psi_{g}(\Gamma(R))^{2}-\psi_{g}(\Gamma(R))+1=2^{2}-2+1=3\)._
For a finite local ring \((R,M)\) which is not a field, \(M=Z(R)=\mathrm{Ann}(x)\) for some \(x\in Z(R)^{*}\) and \(|R|=p^{nr}\) and \(|M|=p^{(n-1)r}\) for some prime number \(p\) and positive integers \(n\) and \(r\). However, we know that for a finite local ring \((R,M)\), \(\Gamma(R)\) is complete if and only if \(Z(R)=M\) with \(M^{2}=0\), [4, Theorem 2.8]. So, we have the following result for this simple case.
**Proposition 3.3**: _Let \((R,M)\) be a finite local ring such that its maximal ideal \(M\) is nilpotent of index \(2\). Then,_
1. _if_ \(|M|\) _is odd,_ \(\Gamma(R)\) _is partitionable into global defensive alliances with_ \(\psi_{g}(\Gamma(R))=2\)_._
2. _if_ \(|M|\) _is even,_ \(\Gamma(R)\) _is not partitionable into global defensive alliances._
**Proof.** (1)- Let \(\{S_{1},S_{2}\}\) be a partition of \(M^{*}\) into two disjoint subsets with \(|S_{1}|=|S_{2}|=\frac{|M|-1}{2}\). Then, \(\{S_{1},S_{2}\}\) is a partition of \(\Gamma(R)\) into global defensive alliances and so \(\psi_{g}(\Gamma(R))\geq 2\). Since \(\gamma_{a}(\Gamma(R))=\frac{|M|-1}{2}\) and \(\gamma_{a}(\Gamma(R))\times\psi_{g}(\Gamma(R))\leq|M|-1\), \(\psi_{g}(\Gamma(R))\leq 2\). Hence, \(\psi_{g}(\Gamma(R))=2\).
(2)- Suppose that \(\Gamma(R)\) is partitionable into global defensive alliances, then \(\psi_{g}(\Gamma(R))\geq 2\) and so, by [12, Proposition 3.6], \(\left\lceil\frac{|M|-1}{2}\right\rceil\times 2\leq\gamma_{a}(\Gamma(R)) \times\psi_{g}(\Gamma(R))\leq|M|-1\). Thus, \(|M|\leq|M|-1\), a contradiction.
**Corollary 3.4**: _Let \(p\) be a prime number. Then, we have two cases:_
1. _if_ \(p=2\)_, then_ \(\Gamma(\mathbb{Z}_{p^{2}})\) _has only one vertex and so it is not partitionable into global defensive alliances._
2. _if_ \(p\neq 2\)_, then_ \(\Gamma(\mathbb{Z}_{p^{2}})\) _is partitionable into global defensive alliances and_ \(\psi_{g}(\Gamma(\mathbb{Z}_{p^{2}}))=2\)_._
In the following theorem we study when \(\Gamma(\mathbb{Z}_{p^{n}})\) is partitionable into global defensive alliances for a prime number \(p\) and a positive integer \(n\geq 3\).
**Theorem 3.5**: _Let \(p\) be a prime number and \(n\geq 3\) be a positive integer. Then,_
1. _If_ \(p=2\)_, then_ \(\Gamma(\mathbb{Z}_{p^{n}})\) _is not partitionable into global defensive alliances._
2. _If_ \(p\geq 3\)_, then_ \(\Gamma(\mathbb{Z}_{p^{n}})\) _is partitionable into global defensive alliances and_ \(\psi_{g}(\Gamma(\mathbb{Z}_{p^{n}}))=2\)_._
**Proof.** We have \(Z:=Z(\mathbb{Z}_{p^{n}})=\{\overline{mp}|\ 0\leq m<p^{n-1}\}\) and \(|Z(\mathbb{Z}_{p^{n}})|=p^{n-1}\). Then,
(1)- Suppose that \(\Gamma(\mathbb{Z}_{p^{n}})\) is partitionable into global defensive alliances. Then, \(\psi_{g}(\Gamma(\mathbb{Z}_{p^{n}}))\geq 2\). Since \(\gamma_{a}(\Gamma(\mathbb{Z}_{p^{n}}))=2^{n-2}\), by [20, Theorem 2.9 ], then \(2^{n-2}\times 2\leq\gamma_{a}(\Gamma(\mathbb{Z}_{p^{n}}))\psi_{g}(\Gamma(\mathbb{ Z}_{p^{n}}))\leq 2^{n-1}-1\), a contradiction.
(2)- For each \(1\leq k\leq n-1\), set \(A_{k}=\{\overline{ap^{k}}\in Z|\ p\ \mbox{does not divide}\ a\}\). The sets \(A_{k}\) are disjoint, \(|A_{k}|=p^{n-k}-p^{n-k-1}\), which is an even number, and \(Z^{*}=\cup_{k=1}^{n-1}A_{k}\). Let \(S_{1}=\cup_{k=1}^{n-1}A_{k}^{\prime}\) and \(S_{2}=\cup_{k=1}^{n-1}A_{k}^{\prime\prime}\) such that \(A_{k}^{\prime}\) consists of half of the elements of \(A_{k}\) and \(A_{k}^{\prime\prime}\) of the other half. By the proof of [20, Theorem 2.9], \(S_{1}\) and \(S_{2}\) are two global defensive alliances and so \(\{S_{1},S_{2}\}\) is a partition of \(\Gamma(\mathbb{Z}_{p^{n}})\) into global defensive alliances. Then, \(\psi_{g}(\Gamma(\mathbb{Z}_{p^{n}}))\geq 2\). On the other hand, \(\gamma_{a}(\Gamma(\mathbb{Z}_{p^{n}}))=\left\lceil\frac{p^{n-1}-1}{2}\right\rceil\), by [20, Theorem 2.9], and since \(\gamma_{a}(\Gamma(\mathbb{Z}_{p^{n}}))\psi_{g}(\Gamma(\mathbb{Z}_{p^{n}}))\leq p^{n-1}-1\), \(\psi_{g}(\Gamma(\mathbb{Z}_{p^{n}}))\leq 2\). Hence, \(\psi_{g}(\Gamma(\mathbb{Z}_{p^{n}}))=2\).
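As an illustration (again ours, added for the reader), the following sketch builds the sets \(A_{k}\) and the half-split partition from the proof above for \(p=3\), \(n=3\) and verifies both halves with the same helpers as in the Section 2 sketch:

```python
# Illustration (ours): the partition from the proof of Theorem 3.5(2)
# for p = 3, n = 3, checked directly on Gamma(Z_27).

def zero_divisor_graph(n):
    verts = [x for x in range(1, n) if any(x * y % n == 0 for y in range(1, n))]
    return {x: {y for y in verts if y != x and x * y % n == 0} for x in verts}

def is_gda(adj, S):
    S = set(S)
    return (all(adj[v] & S for v in adj if v not in S)                     # dominating
            and all(len(adj[x] & S) + 1 >= len(adj[x] - S) for x in S))   # defensive

p, n = 3, 3
adj = zero_divisor_graph(p ** n)
S1, S2 = set(), set()
for k in range(1, n):
    # A_k: elements of p-adic valuation exactly k; |A_k| = p^(n-k) - p^(n-k-1) is even
    A_k = sorted(x for x in adj if x % p ** k == 0 and x % p ** (k + 1) != 0)
    S1 |= set(A_k[:len(A_k) // 2])   # one half of A_k
    S2 |= set(A_k[len(A_k) // 2:])   # the other half
print(is_gda(adj, S1), is_gda(adj, S2))  # True True, matching psi_g = 2
```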
The following result characterizes when a zero-divisor graph of \(\mathbb{Z}_{2}\times F\), for a finite field \(F\), is partitionable into global defensive alliances.
**Theorem 3.6**: _Let \(F\) be a finite field. Then, \(\Gamma(\mathbb{Z}_{2}\times F)\) is partitionable into global defensive alliances if and only if \(F\cong\mathbb{Z}_{2}\)._
**Proof.**\(\Leftarrow\)) Let \(S_{1}=\{(1,0)\}\) and \(S_{2}=\{(0,1)\}\), then \(S_{1}\) and \(S_{2}\) are both global defensive alliances and so \(\{S_{1},S_{2}\}\) is the only partition into global defensive alliances of \(\Gamma(\mathbb{Z}_{2}\times\mathbb{Z}_{2})\). Then, \(\psi_{g}(\Gamma(\mathbb{Z}_{2}\times\mathbb{Z}_{2}))=2\).
\(\Rightarrow\)) Assume that \(|F|\geq 3\) and suppose that \(\Gamma(\mathbb{Z}_{2}\times F)\) is partitionable into global defensive alliances. Let \(S_{1}\) be a global defensive alliance in a partition of \(\Gamma(\mathbb{Z}_{2}\times F)\) such that \((1,0)\notin S_{1}\); then \(\{0\}\times F^{*}\subset S_{1}\) (since \(S_{1}\) is a dominating set) and so \(S_{2}=\{(1,0)\}\) is the other global defensive alliance such that \(\{S_{1},S_{2}\}\) is a partition of \(\Gamma(\mathbb{Z}_{2}\times F)\) into global defensive alliances. Then, \(deg_{S_{2}}((1,0))+1\geq deg_{\bar{S_{2}}}((1,0))\) and so \(1\geq|F|-1\), a contradiction.
**Corollary 3.7**: _Let \(p\) be a prime number. Then,_
1. _If_ \(p=2\)_, then_ \(\Gamma(\mathbb{Z}_{2}\times\mathbb{Z}_{p})\) _is partitionable into global defensive alliances and_ \(\psi_{g}(\Gamma(\mathbb{Z}_{2}\times\mathbb{Z}_{p}))=2\)_._
2. _If_ \(p\neq 2\)_, then_ \(\Gamma(\mathbb{Z}_{2}\times\mathbb{Z}_{p})\) _is not partitionable into global defensive alliances._
The following theorem characterizes when \(\Gamma(\mathbb{Z}_{2}\times R)\), for a finite local ring \(R\) which is not a field, is partitionable into global defensive alliances.
**Theorem 3.8**: _Let \(R\) be a finite local ring which is not a field. Then, \(\Gamma(\mathbb{Z}_{2}\times R)\) is partitionable into global defensive alliances if and only if \(R\cong\mathbb{Z}_{4}\) or \(R\cong\mathbb{Z}_{2}[X]/(X^{2})\)._
_Moreover, if \(\Gamma(\mathbb{Z}_{2}\times R)\) is partitionable into global defensive alliances, then \(\psi_{g}(\Gamma(\mathbb{Z}_{2}\times R))=2\)._
**Proof.**\(\Leftarrow\)) Assume that \(R\cong\mathbb{Z}_{4}\). The zero-divisor graph of \(\mathbb{Z}_{2}\times\mathbb{Z}_{4}\) is illustrated in Figure 1.
Set \(S_{1}=\{(1,0),(0,2)\}\) and \(S_{2}=\{(0,1),(0,3),(1,2)\}\). So, \(S_{1}\) and \(S_{2}\) are two global defensive alliances since \(S_{1}\) and \(S_{2}\) are dominating sets and
\[\left\{\begin{array}{rl}°_{S_{1}}((\bar{1},\bar{0}))+1=2\geq deg_{\bar{S_{1}}}((\bar{1},\bar{0}))=2,\\ °_{S_{1}}((\bar{0},\bar{2}))+1=2\geq deg_{\bar{S_{1}}}((\bar{0},\bar{2}))=1,\\ °_{S_{2}}((\bar{0},\bar{1}))+1=1\geq deg_{\bar{S_{2}}}((\bar{0},\bar{1}))=1,\\ °_{S_{2}}((\bar{0},\bar{3}))+1=1\geq deg_{\bar{S_{2}}}((\bar{0},\bar{3}))=1,\\ °_{S_{2}}((\bar{1},\bar{2}))+1=1\geq deg_{\bar{S_{2}}}((\bar{1},\bar{2}))=1.\end{array}\right.\]
On the other hand, \(Z(\mathbb{Z}_{2}\times\mathbb{Z}_{4})^{*}=S_{1}\cup S_{2}\) and \(S_{1}\cap S_{2}=\emptyset\), and so \(\{S_{1},S_{2}\}\) forms a partition of \(\Gamma(\mathbb{Z}_{2}\times R)\) into global defensive alliances. Then, \(\psi_{g}(\Gamma(\mathbb{Z}_{2}\times R))\geq 2\). Moreover, by [26, Theorem 2.1], \(\psi_{g}(\Gamma(\mathbb{Z}_{2}\times R))\leq\left\lfloor\frac{\delta+3}{2}\right\rfloor=2\) and so \(\psi_{g}(\Gamma(\mathbb{Z}_{2}\times R))=2\). Similarly, when \(R\cong\mathbb{Z}_{2}[X]/(X^{2})\), we take the partition \(\{S_{1},S_{2}\}\) with \(S_{1}=\{(\bar{1},\bar{0}),(\bar{0},\bar{X})\}\) and \(S_{2}=\{(\bar{0},\bar{1}),(\bar{0},\bar{1}+\bar{X}),(\bar{1},\bar{X})\}\).
\(\Rightarrow\)) Assume that \(R\ncong\mathbb{Z}_{4}\) and \(R\ncong\mathbb{Z}_{2}[X]/(X^{2})\). Suppose that \(\Gamma(\mathbb{Z}_{2}\times R)\) is partitionable into two global defensive alliances \(S_{1}\) and \(S_{2}\). So, assume that \((1,0)\in S_{1}\). Then, we have two cases:
Case 1: \(\{0\}\times U(R)\subset S_{2}\), and so \(deg_{S_{1}}((\bar{1},0))+1\geq deg_{\bar{S_{1}}}((\bar{1},0))\). Then, \(|Z(R)|-1+1\geq deg_{S_{1}}((\bar{1},0))+1\geq deg_{\bar{S_{1}}}((\bar{1},0))\geq|U(R)|\), a contradiction.
Case 2: there exists \(u\in U(R)\) such that \((0,u)\in S_{1}\). Then, \(S_{2}\) is not a dominating set since there is no vertex adjacent to \((0,u)\) other than \((1,0)\), a contradiction.
Hence, \(\Gamma(\mathbb{Z}_{2}\times R)\) is not partitionable into global defensive alliances.
**Corollary 3.9**: _Let \(p\) be a prime number and \(n\geq 2\) be a positive integer. Then, \(\Gamma(\mathbb{Z}_{2}\times\mathbb{Z}_{p^{n}})\) is partitionable into global defensive alliances if and only if \(p=n=2\)._
In the following result, we study when the zero-divisor graph of a direct product of two finite fields is partitionable into global defensive alliances.
**Theorem 3.10**: _Let \(F\) and \(K\) be two finite fields such that \(|F|,|K|\geq 3\). Then, \(\Gamma(F\times K)\) is partitionable into global defensive alliances and_
\[\psi_{g}(\Gamma(F\times K))=\left\{\begin{array}{cl}&3&\mbox{if }|F|=|K|=4,\\ &2&\mbox{otherwise.}\end{array}\right.\]
**Proof.** We have the following cases:
**Case \(|F|=|K|=4\)**: Write \(F^{*}=\{1,a^{\prime},b^{\prime}\}\) and \(K^{*}=\{1,a,b\}\). The zero-divisor graph of \(F\times K\) is illustrated in Figure 2. So, \(\{\{(0,a),(a^{\prime},0)\},\{(0,b),(b^{\prime},0)\},\{(0,1),(1,0)\}\}\) is a partition of \(\Gamma(F\times K)\) into global defensive alliances, then \(\psi_{g}(\Gamma(F\times K))\geq 3\). On the other hand, \(\gamma_{a}(\Gamma(F\times K))=2\), by [20, Proposition 2.3], and \(\gamma_{a}(\Gamma(F\times K))\psi_{g}(\Gamma(F\times K))\leq|Z(F\times K)|-1=6\). Thus, \(\psi_{g}(\Gamma(F\times K))=3\).
**Case \(|F|\neq 4\) or \(|K|\neq 4\)**: Let \(S_{1}=F_{1}\times\{0\}\cup\{0\}\times K_{1}\) and \(S_{2}=F_{2}\times\{0\}\cup\{0\}\times K_{2}\) such that \(F_{1},F_{2}\subset F^{*}\) and \(K_{1},K_{2}\subset K^{*}\) with
\[\left\{\begin{array}{ll}&F_{1}\cap F_{2}=K_{1}\cap K_{2}=\emptyset,\\ &|F_{1}|=\left\lfloor\frac{|F|-1}{2}\right\rfloor,\\ &|F_{2}|=|F|-1-\left\lfloor\frac{|F|-1}{2}\right\rfloor,\\ &|K_{1}|=\left\lfloor\frac{|K|-1}{2}\right\rfloor,\\ &|K_{2}|=|K|-1-\left\lfloor\frac{|K|-1}{2}\right\rfloor.\end{array}\right.\]
So, \(S_{1}\) and \(S_{2}\) are two global defensive alliances with \(Z(F\times K)^{*}=S_{1}\cup S_{2}\) and \(S_{1}\cap S_{2}=\emptyset\). Thus, \(\Gamma(F\times K)\) is partitionable into global defensive alliances and so \(\psi_{g}(\Gamma(F\times K))\geq 2\). Now, suppose that \(\psi_{g}(\Gamma(F\times K))\geq 3\), then \(3\times\gamma_{a}(\Gamma(F\times K))\leq\gamma_{a}(\Gamma(F\times K))\psi_{g}(\Gamma(F\times K))\leq|F|+|K|-2\) and since \(\gamma_{a}(\Gamma(F\times K))=\left\lfloor\frac{|F|-1}{2}\right\rfloor+\left\lfloor\frac{|K|-1}{2}\right\rfloor\), by [20, Proposition 2.3], then \(3\times(\left\lfloor\frac{|F|-1}{2}\right\rfloor+\left\lfloor\frac{|K|-1}{2}\right\rfloor)\leq|F|+|K|-2\). So, we have four sub-cases to discuss:
**sub-case 1**; \(2\mid|F|\) and \(2\mid|K|\): Then, \(|F|+|K|\leq 8\), a contradiction since one of \(|F|\) and \(|K|\) is different from \(4\).

**sub-case 2**; \(2\mid|F|\) and \(2\nmid|K|\): Then, \(|F|+|K|\leq 5\), a contradiction.

**sub-case 3**; \(2\nmid|F|\) and \(2\mid|K|\): Similar to sub-case 2.

**sub-case 4**; \(2\nmid|F|\) and \(2\nmid|K|\): Then, \(|F|+|K|\leq 2\), a contradiction.
Hence, \(\psi_{g}(\Gamma(F\times K))=2\).
**Corollary 3.11**: _Let \(p,q\geq 3\) be two prime numbers. Then, \(\Gamma(\mathbb{Z}_{p}\times\mathbb{Z}_{q})\) is partitionable into global defensive alliances and \(\psi_{g}(\Gamma(\mathbb{Z}_{p}\times\mathbb{Z}_{q}))=2\)._
**Theorem 3.12**: _Let \(R\) be a finite local ring such that its maximal ideal is nilpotent of index \(2\) and \(F\) be a finite field with \(|F|\geq 3\). Then,_
1. _If_ \(|Z(R)|\) _is odd, then_ \(\Gamma(F\times R)\) _is partitionable into global defensive alliances and_ \(\psi_{g}(\Gamma(F\times R))=2\)_._
2. _If_ \(|Z(R)|\) _is even, then_ \(\Gamma(F\times R)\) _is not partitionable into global defensive alliances._
**Proof.** (1)- Assume that \(|Z(R)|\) is odd and set \(S_{1}=A_{1}\times\{0\}\cup\{0\}\times A_{2}\cup\{0\}\times A_{3}\cup A_{1} \times A_{3}\) and \(S_{2}=B_{1}\times\{0\}\cup\{0\}\times B_{2}\cup\{0\}\times B_{3}\cup B_{1} \times B_{3}\) such that
\[\left\{\begin{array}{l}A_{1},B_{1}\subset F^{*}\mbox{ with }|A_{1}|=\left\lceil\frac{|F|-1}{2}\right\rceil,\ |B_{1}|=|F|-1-|A_{1}|\mbox{ and }A_{1}\cap B_{1}=\emptyset,\\ A_{2},B_{2}\subset U(R)\mbox{ with }|A_{2}|=\left\lceil\frac{|U(R)|}{2}\right\rceil,\ |B_{2}|=|U(R)|-|A_{2}|\mbox{ and }A_{2}\cap B_{2}= \emptyset,\\ A_{3},B_{3}\subset Z(R)^{*}\mbox{ with }|A_{3}|=\left\lceil\frac{|Z(R)|-1}{2} \right\rceil,\ |B_{3}|=|Z(R)|-1-|A_{3}|\mbox{ and }A_{3}\cap B_{3}= \emptyset.\end{array}\right.\]
It is clear that \(S_{1}\) is a dominating set. So, let us prove that \(S_{1}\) is a defensive alliance. Let \((x,0)\in A_{1}\times\{0\}\); then \(deg_{S_{1}}((x,0))=|A_{2}|+|A_{3}|\geq|B_{2}|+|B_{3}|=deg_{\bar{S_{1}}}((x,0))\) and so \(deg_{S_{1}}((x,0))+1\geq deg_{\bar{S_{1}}}((x,0))\). Let \((0,y)\in\{0\}\times A_{2}\), then \(deg_{S_{1}}((0,y))+1=|A_{1}|+1=\left\lceil\frac{|F|-1}{2}\right\rceil+1\) and \(deg_{\bar{S_{1}}}((0,y))=|B_{1}|\leq|A_{1}|\), so \(deg_{S_{1}}((0,y))+1\geq deg_{\bar{S_{1}}}((0,y))\). Let \((0,y)\in\{0\}\times A_{3}\), then \(deg_{S_{1}}((0,y))+1=|A_{1}|+|A_{3}|-1+|A_{1}||A_{3}|+1=|A_{1}|+|A_{3}|+|A_{1}||A_{3}|\) and \(deg_{\bar{S_{1}}}((0,y))=|B_{1}|+|B_{3}|+|B_{1}||B_{3}|=|F||Z(R)|-|F||A_{3}|-|A_{1}||Z(R)|+|A_{1}||A_{3}|-1\). Then, if \(2\) divides \(|F|\), \(deg_{S_{1}}((0,y))+1=\frac{|F|}{2}+\frac{|Z(R)|-1}{2}+|A_{1}||A_{3}|\) and \(deg_{\bar{S_{1}}}((0,y))=\frac{|F|}{2}+|A_{1}||A_{3}|-1\) (since \(|Z(R)|\) is odd), and so \(deg_{S_{1}}((0,y))+1\geq deg_{\bar{S_{1}}}((0,y))\); otherwise \(deg_{S_{1}}((0,y))+1=\frac{|F|-1}{2}+\frac{|Z(R)|-1}{2}+|A_{1}||A_{3}|\) and \(deg_{\bar{S_{1}}}((0,y))=\frac{|F|}{2}+\frac{|Z(R)|}{2}+|A_{1}||A_{3}|-1\) (since \(|Z(R)|\) is odd), and so \(deg_{S_{1}}((0,y))+1\geq deg_{\bar{S_{1}}}((0,y))\). Finally, let \((x,y)\in A_{1}\times A_{3}\), then \(deg_{S_{1}}((x,y))+1=|A_{3}|+1=\left\lceil\frac{|Z(R)|-1}{2}\right\rceil+1=\frac{|Z(R)|+1}{2}\) (since \(|Z(R)|\) is odd) and \(deg_{\bar{S_{1}}}((x,y))=|B_{3}|=|Z(R)|-1-|A_{3}|=\frac{|Z(R)|-1}{2}\) (since \(|Z(R)|\) is odd), then \(deg_{S_{1}}((x,y))+1\geq deg_{\bar{S_{1}}}((x,y))\). Then, \(S_{1}\) is a global defensive alliance. Similarly, we prove that \(S_{2}\) is a global defensive alliance. Since \(S_{1}\cap S_{2}=\emptyset\) and \(S_{1}\cup S_{2}=Z(F\times R)^{*}\), \(\{S_{1},S_{2}\}\) is a partition of \(\Gamma(F\times R)\) into global defensive alliances. Thus, \(\psi_{g}(\Gamma(F\times R))\geq 2\). Now, suppose that \(\psi_{g}(\Gamma(F\times R))\geq 3\). Then, \(3\times\gamma_{a}(\Gamma(F\times R))\leq|Z(F\times R)|-1\) and so, by [20, Theorem 2.5] and since \(|Z(R)|\) is odd, \(1\leq\frac{|R|}{2}+\frac{(|F|-2)(|Z(R)|-1)}{2}+|Z(R)|\leq 0\), a contradiction. Hence, \(\psi_{g}(\Gamma(F\times R))=2\).
(2)- Suppose that \(\Gamma(F\times R)\) is partitionable into two global defensive alliances, \(S_{1}\) and \(S_{2}\).
There are two cases to discuss:
**Case \(|Z(R)|=2\)** (assume that \(R\cong\mathbb{Z}_{4}\)): Then, let us assume that \((0,\bar{2})\in S_{1}\); then \(F^{*}\times\{\bar{2}\}\subset S_{2}\) (since \(S_{2}\) is a dominating set). Since \(deg_{S_{1}}((0,\bar{2}))+1\geq deg_{\bar{S_{1}}}((0,\bar{2}))\), \(S_{1}\) contains at least \(|F^{*}|-1\) elements from \(F^{*}\times\{\bar{0}\}\). If \(F^{*}\times\{\bar{0}\}\subset S_{1}\), then either \((0,\bar{1})\) and \((0,\bar{3})\) are both in \(S_{1}\) or one of them is in \(S_{2}\); if \(\{(0,\bar{1}),(0,\bar{3})\}\subset S_{1}\), then \(S_{2}\) is not a dominating set, a contradiction; if one of them (i.e., \((0,\bar{1})\) or \((0,\bar{3})\)), say \((0,\bar{1})\), is in \(S_{2}\), then \(deg_{S_{2}}((0,\bar{1}))+1\geq deg_{\bar{S_{2}}}((0,\bar{1}))\) and so \(|F|\leq 2\), a contradiction. Hence, there exists \((x,\bar{0})\in S_{2}\) for some \(x\in F^{*}\) and so \(deg_{S_{1}}((0,\bar{2}))+1\geq deg_{\bar{S_{1}}}((0,\bar{2}))\), which implies \(|F^{*}|-1+1\geq|F^{*}|+1\), a contradiction. Thus, \(\Gamma(F\times R)\) is not partitionable into global defensive alliances.
**Case \(|Z(R)|>2\)**: Suppose that \(\{0\}\times Z(R)^{*}\subset S_{1}\); then \(F^{*}\times Z(R)^{*}\subset S_{2}\) (since \(S_{2}\) is a dominating set). Thus, for every \((x,r)\in F^{*}\times Z(R)^{*}\cap S_{2}\), \(deg_{S_{2}}((x,r))+1\geq deg_{\bar{S_{2}}}((x,r))\) and so \(|Z(R)|\leq 2\), a contradiction. Then, \(\{0\}\times Z(R)^{*}\cap S_{2}\neq\emptyset\) (similarly, \(\{0\}\times Z(R)^{*}\cap S_{1}\neq\emptyset\)). Analogously, if \(F^{*}\times Z(R)^{*}\subset S_{2}\), then for every \((0,r)\in S_{1}\cap\{0\}\times Z(R)^{*}\), \(|F|-1+|Z(R)|-3+1\geq deg_{S_{1}}((0,r))+1\geq deg_{\bar{S_{1}}}((0,r))\geq|F^{*}||Z(R)^{*}|+1\) and so \(|F|+|Z(R)|\geq\frac{|F||Z(R)|+1}{2}+2\), a contradiction. Thus, \(F^{*}\times Z(R)^{*}\cap S_{2}\neq\emptyset\) and \(F^{*}\times Z(R)^{*}\cap S_{1}\neq\emptyset\). Since, for every \((x,r)\in F^{*}\times Z(R)^{*}\cap S_{2}\) and \((x^{\prime},r^{\prime})\in F^{*}\times Z(R)^{*}\cap S_{1}\), \(deg_{S_{2}}((x,r))+1\geq deg_{\bar{S_{2}}}((x,r))\) and \(deg_{S_{1}}((x^{\prime},r^{\prime}))+1\geq deg_{\bar{S_{1}}}((x^{\prime},r^{\prime}))\), then we can assume that \(|S_{1}\cap\{0\}\times Z(R)^{*}|=\frac{|Z(R)|}{2}\) and \(|S_{2}\cap\{0\}\times Z(R)^{*}|=\frac{|Z(R)|}{2}-1\). Now, suppose that \(\{0\}\times U(R)\subset S_{1}\). Then, there exists \((x,0)\in S_{2}\) and so \(deg_{S_{2}}((x,0))+1\geq deg_{\bar{S_{2}}}((x,0))\), which implies that \(\frac{|Z(R)|}{2}-1+1\geq\frac{|Z(R)|}{2}+|U(R)|\), a contradiction. So, \(\{0\}\times U(R)\cap S_{1}\neq\emptyset\) and \(\{0\}\times U(R)\cap S_{2}\neq\emptyset\). Then, there exist \((0,u)\in\{0\}\times U(R)\cap S_{1}\) and \((0,u^{\prime})\in\{0\}\times U(R)\cap S_{2}\) such that \(deg_{S_{1}}((0,u))+1\geq deg_{\bar{S_{1}}}((0,u))\) and \(deg_{S_{2}}((0,u^{\prime}))+1\geq deg_{\bar{S_{2}}}((0,u^{\prime}))\). Then, \(|F^{*}\times\{0\}\cap S_{1}|=\left\lceil\frac{|F|-1}{2}\right\rceil\) and \(|F^{*}\times\{0\}\cap S_{2}|=|F|-1-\left\lceil\frac{|F|-1}{2}\right\rceil\) (or \(|F^{*}\times\{0\}\cap S_{2}|=\left\lceil\frac{|F|-1}{2}\right\rceil\) and \(|F^{*}\times\{0\}\cap S_{1}|=|F|-1-\left\lceil\frac{|F|-1}{2}\right\rceil\)), and so for every \((0,r)\in S_{1}\cap\{0\}\times Z(R)^{*}\) and \((0,r^{\prime})\in S_{2}\cap\{0\}\times Z(R)^{*}\) one of the following inequalities does not hold: \(deg_{S_{1}}((0,r))+1\geq deg_{\bar{S_{1}}}((0,r))\) and \(deg_{S_{2}}((0,r^{\prime}))+1\geq deg_{\bar{S_{2}}}((0,r^{\prime}))\), a contradiction. Hence, \(\Gamma(F\times R)\) is not partitionable into global defensive alliances.
**Corollary 3.13**: _Let \(p\) and \(q\) be two prime numbers such that \(p\neq 2\). Then, \(\Gamma(\mathbb{Z}_{p}\times\mathbb{Z}_{q^{2}})\) is partitionable into global defensive alliances if and only if \(q\neq 2\). Namely,_
\[\psi_{g}(\Gamma(\mathbb{Z}_{p}\times\mathbb{Z}_{q^{2}}))=\left\{\begin{array}{ ccl}&2&&\mbox{if $q\neq 2$},\\ &1&&\mbox{otherwise.}\end{array}\right.\]
To give an example for the second assertion in Theorem 3.12, in the case \(|Z(R)|>2\), we will use the idealization. Recall that the idealization of an \(R\)-module \(M\), also called the trivial extension of \(R\) by \(M\) and denoted by \(R(+)M\), is the commutative ring \(R\times M\) with the following addition and multiplication: \((a,n)+(b,m)=(a+b,n+m)\) and \((a,n)(b,m)=(ab,am+bn)\) for every \((a,n),(b,m)\in R(+)M\).
**Example 3.14**: _Let \(n\geq 2\) be a positive integer, \(p\) an odd prime number and \(q\) a prime number. Then, \(\mathbb{Z}_{q}(+)(\mathbb{Z}_{q})^{n}\) is a finite local ring with maximal ideal \(0(+)(\mathbb{Z}_{q})^{n}\). We have \((0(+)(\mathbb{Z}_{q})^{n})^{2}=0\) and \(|Z(\mathbb{Z}_{q}(+)(\mathbb{Z}_{q})^{n})|=q^{n}\), so by Theorem 3.12, \(\Gamma(\mathbb{Z}_{p}\times(\mathbb{Z}_{q}(+)(\mathbb{Z}_{q})^{n}))\) is not partitionable into global defensive alliances if and only if \(q=2\). Namely,_

\[\psi_{g}(\Gamma(\mathbb{Z}_{p}\times(\mathbb{Z}_{q}(+)(\mathbb{Z}_{q})^{n})))=\left\{\begin{array}{cl}&2&\mbox{if \(q\neq 2\)},\\ &1&\mbox{otherwise.}\end{array}\right.\]
**Theorem 3.15**: _Let \(F\) be a finite field. Then,_
1. _If_ \(|F|\leq 4\)_, Then,_ \(\Gamma(\mathbb{Z}_{2}\times\mathbb{Z}_{2}\times F)\) _is partitionable into global defensive alliances with_ \(\psi_{g}(\Gamma(\mathbb{Z}_{2}\times\mathbb{Z}_{2}\times F))=2\)_._
2. _If_ \(|F|>4\)_, then_ \(\Gamma(\mathbb{Z}_{2}\times\mathbb{Z}_{2}\times F)\) _is not partitionable into global defensive alliances._
**Proof.** (1)- If \(|F|=2\), that is \(F\cong\mathbb{Z}_{2}\), then we take the partition \(\{S_{1},S_{2}\}\) such that \(S_{1}=\{(1,0,0),(0,0,1),(0,1,0)\}\) and \(S_{2}=\{(1,1,0),(0,1,1),(1,0,1)\}\). It is easy to see that \(S_{1}\) and \(S_{2}\) are both global defensive alliances. Since \(Z(\mathbb{Z}_{2}\times\mathbb{Z}_{2}\times F)=\{(0,0,0)\}\cup S_{1}\cup S_{2}\), \(\Gamma(\mathbb{Z}_{2}\times\mathbb{Z}_{2}\times F)\) is partitionable into global defensive alliances and \(\psi_{g}(\Gamma(\mathbb{Z}_{2}\times\mathbb{Z}_{2}\times\mathbb{Z}_{2}))=2\). Otherwise, we take the partition \(\{S_{1},S_{2}\}\) such that \(S_{1}=\{(1,0,0),(0,1,0)\}\cup\{(0,0,x)|\;x\in F-\{0,1\}\}\) and \(S_{2}=\{(0,0,1),(1,1,0)\}\cup\{(0,1,x)|\;x\in F^{*}\}\cup\{(1,0,x)|\;x\in F^{*}\}\). Hence, \(\Gamma(\mathbb{Z}_{2}\times\mathbb{Z}_{2}\times F)\) is partitionable into global defensive alliances and \(\psi_{g}(\Gamma(\mathbb{Z}_{2}\times\mathbb{Z}_{2}\times F))=2\).

(2)- \(\Gamma(\mathbb{Z}_{2}\times\mathbb{Z}_{2}\times F)\) is a simple graph of minimal degree \(\delta=1\), then by [26, Theorem 2.1], \(\psi_{g}(\Gamma(\mathbb{Z}_{2}\times\mathbb{Z}_{2}\times F))\leq 2\). Suppose that \(\{S_{1},S_{2}\}\) is a partition of \(\Gamma(\mathbb{Z}_{2}\times\mathbb{Z}_{2}\times F)\) into global defensive alliances. Assume that \((0,1,0)\in S_{1}\). Then, we have the following two cases:
**Case 1**\((1,0,0)\in S_{1}\): If \(\{0\}\times\{0\}\times F^{*}\subset S_{1}\), then \((1,1,0)\in S_{2}\) (since \(S_{2}\) is a dominating set) and so \(deg_{S_{2}}((1,1,0))+1\geq deg_{\bar{S}_{2}}((1,1,0))\), then \(2\geq|F|\), a contradiction. Otherwise, there exists a vertex \((0,0,u)\in S_{2}\), then \((1,1,0)\in S_{2}\), otherwise \(deg_{S_{2}}((0,0,u))+1\geq deg_{\bar{S}_{2}}((0,0,u))\) implies \(1\geq 3\), a contradiction. On the other hand \(\{0\}\times\{1\}\times F^{*}\subset S_{2}\) (since \(S_{2}\) is a dominating set). Thus, if there exists \(u\neq v\in F^{*}\) such that \((0,0,v)\in S_{2}\), then \(deg_{S_{1}}((1,0,0))+1\geq deg_{\bar{S}_{1}}((1,0,0))\) and so \(-1\geq 0\), a contradiction, otherwise \(deg_{S_{2}}((1,1,0))+1\geq deg_{\bar{S}_{2}}((1,1,0))\) implies that \(4\geq|F|\), a contradiction.
**Case 2**\((1,0,0)\in S_{2}\): If \(\{0\}\times\{0\}\times F^{*}\subset S_{2}\), then \((1,1,0)\in S_{1}\) (since \(S_{1}\) is a dominating set) and so \(deg_{S_{1}}((1,1,0))+1\geq deg_{\bar{S_{1}}}((1,1,0))\), thus \(2\geq|F|\), a contradiction. Then, there exists \(u\in F^{*}\) such that \((0,0,u)\in S_{1}\) and so \(deg_{S_{2}}((0,1,0))+1\geq deg_{\bar{S}_{2}}((0,1,0))\) which implies that \(0\geq 2\), a contradiction.
Hence, \(\Gamma(\mathbb{Z}_{2}\times\mathbb{Z}_{2}\times F)\) is not partitionable into global defensive alliances.
**Corollary 3.16**: _Let \(p\) be a prime number. Then, \(\Gamma(\mathbb{Z}_{2}\times\mathbb{Z}_{2}\times\mathbb{Z}_{p})\) is partitionable into global defensive alliances if and only if \(p\in\{2,3\}\). Namely,_
\[\psi_{g}(\Gamma(\mathbb{Z}_{2}\times\mathbb{Z}_{2}\times\mathbb{Z}_{p}))=\left\{ \begin{array}{cl}&2&\mbox{if \ $p\in\{2,3\}$},\\ &1&\mbox{otherwise.}\end{array}\right.\]
**Theorem 3.17**: _Let \(F\) and \(K\) be two finite fields such that \(|F|,|K|\geq 3\). Then, \(\Gamma(\mathbb{Z}_{2}\times F\times K)\) is not partitionable into global defensive alliances._
**Proof.**\(\Gamma(\mathbb{Z}_{2}\times F\times K)\) is a simple graph of minimal degree \(\delta=1\). Then, by [26, Theorem 2.1], \(\psi_{g}(\Gamma(\mathbb{Z}_{2}\times F\times K))\leq 2\). Suppose that \(\psi_{g}(\Gamma(\mathbb{Z}_{2}\times F\times K))=2\), that is, \(\Gamma(\mathbb{Z}_{2}\times F\times K)\) is partitionable into two global defensive alliances, say \(S_{1}\) and \(S_{2}\). Assume that \((1,0,0)\in S_{1}\), then \(\{0\}\times F^{*}\times K^{*}\subset S_{2}\). Thus, \(deg_{S_{1}}((1,0,0))+1\geq deg_{\bar{S_{1}}}((1,0,0))\) implies that \(|F^{*}|+|K^{*}|+1\geq|F^{*}||K^{*}|\) (since \(deg_{S_{1}}((1,0,0))\leq|F^{*}|+|K^{*}|\) and \(deg_{\bar{S_{1}}}((1,0,0))\geq|F^{*}||K^{*}|\)). So we have the following cases:
**Case 1**\(|F|\geq 4\) and \(|K|\geq 4\): In this case, the inequality \(|F^{*}|+|K^{*}|+1\geq|F^{*}||K^{*}|\) fails, since it is equivalent to \((|F^{*}|-1)(|K^{*}|-1)\leq 2\) while here \((|F^{*}|-1)(|K^{*}|-1)\geq 4\); a contradiction.
**Case 2**\(|F|=|K|=3\): The zero-divisor graph of \(\mathbb{Z}_{2}\times F\times K\) is illustrated in Figure 3.
Then, \(S_{1}\) contains at least three vertices from the set \(\{(0,0,b),(0,0,b^{\prime}),(0,a,0),(0,a^{\prime},0)\}\), so we can assume that \(\{(0,0,b),(0,0,b^{\prime}),(0,a,0)\}\subset S_{1}\). Then, \(\{(1,a,0),(1,a^{\prime},0)\}\subset S_{2}\) (since \(S_{2}\) is a dominating set). Thus, \(deg_{S_{2}}((1,a,0))+1\geq deg_{\bar{S_{2}}}((1,a,0))\) and so \(1\geq 2\), a contradiction.
**Case 3**\(|F|=4\) and \(|K|=3\) (the case \(|F|=3\) and \(|K|=4\) is symmetric): The zero-divisor graph of \(\mathbb{Z}_{2}\times F\times K\) is illustrated in Figure 4.
Then, \(\{(0,0,b),(0,0,b^{\prime}),(0,0,b^{\prime\prime}),(0,a,0),(0,a^{\prime},0)\}\subset S_{1}\) (since \(deg_{S_{1}}((1,0,0))+1\geq deg_{\bar{S_{1}}}((1,0,0))\)). Thus, \(deg_{S_{2}}((1,a,0))+1\geq deg_{\bar{S_{2}}}((1,a,0))\) implies that \(1\geq 3\), a contradiction.
Hence, \(\psi_{g}(\Gamma(\mathbb{Z}_{2}\times F\times K))<2\) and so \(\Gamma(\mathbb{Z}_{2}\times F\times K)\) is not partitionable into global defensive alliances.
**Corollary 3.18**: _Let \(p,q\geq 3\) be two prime numbers. Then, \(\psi_{g}(\Gamma(\mathbb{Z}_{2}\times\mathbb{Z}_{p}\times\mathbb{Z}_{q}))=1\)._
We end this paper by investigating rings with small global defensive alliance number and small global defensive alliance partition number. Namely, \(\gamma_{a}(\Gamma(R))=1,2\) and \(\psi_{g}(\Gamma(R))=2,3\).
**Theorem 3.19**: _Let \(R\) be a finite ring such that \(\gamma_{a}(\Gamma(R))=1\). Then, the following statements are equivalent:_
1. \(\Gamma(R)\) _is partitionable into global defensive alliances._
2. \(|Z(R)|=3\).
3. \(R\) _is isomorphic to one of the rings_ \(\mathbb{Z}_{2}\times\mathbb{Z}_{2}\)_,_ \(\mathbb{Z}_{9}\)_,_ \(\mathbb{Z}_{3}[X]/(X^{2})\)_._
**Proof.** (1) \(\Rightarrow\) (2) Assume that \(\Gamma(R)\) is partitionable into global defensive alliances, then \(\psi_{g}(\Gamma(R))\geq 2\) and so \(|Z(R)|=3\), by Proposition 3.1 and [20, Proposition 2.2].
(2) \(\Rightarrow\) (3) Follows from [11, Corollary 1].
(3) \(\Rightarrow\) (1) The zero-divisor graphs of these rings are isomorphic to a simple graph with two vertices and one edge. Hence, we get the result.
**Corollary 3.20**: _Let \(R\) be a finite ring such that \(\Gamma(R)\) is partitionable into global defensive alliances. If \(\gamma_{a}(\Gamma(R))=1\), then \(\psi_{g}(\Gamma(R))=2\)._
**Theorem 3.21**: _Let \(R\) be a finite ring such that \(\gamma_{a}(\Gamma(R))=2\). Then, \(\Gamma(R)\) is partitionable into global defensive alliances if and only if either \(\psi_{g}(\Gamma(R))=2\) or \(R\cong\mathbb{F}_{4}\times\mathbb{F}_{4}\)._
**Proof.**\(\Rightarrow\)) Assume that \(\Gamma(R)\) is partitionable into global defensive alliances and suppose that \(R\not\cong\mathbb{F}_{4}\times\mathbb{F}_{4}\). Then \(\psi_{g}(\Gamma(R))\geq 2\). So, we just need to prove the reverse inequality (i.e., \(\psi_{g}(\Gamma(R))\leq 2\)). We have \(\gamma_{a}(\Gamma(R))=2\), then by Proposition 3.1 and [20, Proposition 2.2], \(\psi_{g}(\Gamma(R))^{2}-\psi_{g}(\Gamma(R))-6\leq 0\) and so \(\psi_{g}(\Gamma(R))\) is either 2 or 3. Suppose that \(\psi_{g}(\Gamma(R))=3\), then by Proposition 3.1, \(|Z(R)|\geq 7\). Thus, using [20, Proposition 3.3], \(R\cong\mathbb{F}_{4}\times\mathbb{F}_{4}\), contradicting the hypothesis. Thus, \(\psi_{g}(\Gamma(R))=2\).

\(\Leftarrow\)) It is obvious.
**Corollary 3.22**: _Let \(R\) be a finite ring such that \(\gamma_{a}(\Gamma(R))=2\). Then, we have the following equivalences:_
1. \(\psi_{g}(\Gamma(R))=2\) _if and only if_ \(R\) _is isomorphic to one of the rings_ \(\mathbb{Z}_{2}\times\mathbb{Z}_{4}\)_,_ \(\mathbb{Z}_{2}\times\mathbb{Z}_{2}[X]/(X^{2})\)_,_ \(\mathbb{Z}_{3}\times\mathbb{Z}_{3}\)_,_ \(\mathbb{Z}_{3}\times\mathbb{F}_{4}\)_,_ \(\mathbb{Z}_{25}\) _and_ \(\mathbb{Z}_{5}[X]/(X^{2})\)_._
2. \(\psi_{g}(\Gamma(R))=3\) _if and only if_ \(R\cong\mathbb{F}_{4}\times\mathbb{F}_{4}\)_._
**Proof.** It follows from Theorem 3.21 and [20, Proposition 3.3].
|
2310.16992 | How well can machine-generated texts be identified and can language models be trained to avoid identification? | With the rise of generative pre-trained transformer models such as GPT-3, GPT-NeoX, or OPT, distinguishing human-generated texts from machine-generated ones has become important. We refined five separate language models to generate synthetic tweets, uncovering that shallow learning classification algorithms, like Naive Bayes, achieve detection accuracy between 0.6 and 0.8. Shallow learning classifiers differ from human-based detection, especially when using higher temperature values during text generation, resulting in a lower detection rate. Humans prioritize linguistic acceptability, which tends to be higher at lower temperature values. In contrast, transformer-based classifiers have an accuracy of 0.9 and above. We found that using a reinforcement learning approach to refine our generative models can successfully evade BERT-based classifiers with a detection accuracy of 0.15 or less. | Sinclair Schneider, Florian Steuber, Joao A. G. Schneider, Gabi Dreo Rodosek | 2023-10-25T20:43:07Z | http://arxiv.org/abs/2310.16992v1 | # How well can machine-generated texts be identified and can language models be trained to avoid identification?
###### Abstract
With the rise of generative pre-trained transformer models such as GPT-3, GPT-NeoX, or OPT, distinguishing human-generated texts from machine-generated ones has become important. We refined five separate language models to generate synthetic tweets, uncovering that shallow learning classification algorithms, like Naive Bayes, achieve detection accuracy between 0.6 and 0.8.
Shallow learning classifiers differ from human-based detection, especially when using higher temperature values during text generation, resulting in a lower detection rate. Humans prioritize linguistic acceptability, which tends to be higher at lower temperature values. In contrast, transformer-based classifiers have an accuracy of 0.9 and above. We found that using a reinforcement learning approach to refine our generative models can successfully evade BERT-based classifiers with a detection accuracy of 0.15 or less.
Keywords: Language Models, Language Model Detection, Transformer Reinforcement Learning
## 1 Introduction
Improvements in transformer models have led to higher-quality machine-generated text, raising the question of whether human-written and machine-generated text can still be reliably distinguished. The main concern is that humans find this distinction difficult because they focus on linguistic properties of the text rather than on statistical features, such as word probabilities, which classification models rely on.
According to Ippolito et al. (2020), humans are better in their judgment if the number of unlikely words increases, whereas classification models exhibit the opposite behavior, prioritizing statistical evidence. Similar studies, including Gehrmann et al. (2019), concluded that humans can detect fake texts with an accuracy of 54%. Without computer-aided tools, humans achieve only a 50% accuracy rate in identifying GPT-3 generated texts (Clark et al., 2021).
These findings highlight the need for a machine-guided decision process in reliably identifying artificially generated texts. This work, therefore, aims to investigate the accuracy and reliability of detection mechanisms when applied to GPT (Generative Pre-trained Transformer)-generated texts.
We conducted a study that involved comparing different temperatures, sampling methods, and sample sizes using basic classification algorithms. The outcome revealed that these methods were insufficient in distinguishing between human-written and machine-generated texts. However, when more advanced classifier models like BERT were utilized, the results were more consistent and reliable.
Various approaches exist to trick or bypass these detection mechanisms, including paraphrasing the generated texts (Krishna et al., 2023). Introducing an alternative approach, we focus on transformer-based reinforcement learning to bypass the detection classifier. This technique was initially proposed by Ziegler et al. (2020) to fine-tune language models using human feedback. In contrast to the conventional training approach, we treat the feedback of the detector model as a reward during the reinforcement learning processing. Consequently, the generator model is rewarded most when the classifier incorrectly identifies the output as human-generated. We furthermore add additional linguistic constraints to refine the generated texts,
avoiding exploitation of the classification model. This highlights the limitations of detector models when facing texts from generative models that malicious actors have deliberately modified to evade the classifier.
## 2 Related Work
### Automatic Generation of Texts
Different approaches have been used to create language models capable of generating texts. The most common architecture revolves around transformer models, including GPT and its variants, such as GPT-Neo-125M, GPT-Neo-1.3B, GPT-Neo-2.7B (Black et al., 2021), GPT-J-6B (Wang and Komatsuzaki, 2021), OPT-125M, OPT-350M, OPT-1.3B, OPT-2.7B (Zhang et al., 2022) and GPT-2 (Radford et al., 2019). Further variants include Instruct-GPT (Ouyang et al., 2022) and Google's T5 model (Raffel et al., 2020).
### Detection Techniques
Orabi et al. (2020) provide a further taxonomy of detection techniques, based on _graphs_ (e.g. Daya et al., 2020), _crowdsourcing_ (i.e., manual bot identification, e.g. "Social Turing Tests: Crowdsourcing Sybil Detection", 2013), _anomalies_ (e.g. Nomm and Bahsi, 2018) and _machine learning_ (Alothali et al., 2018).
However, since most of the approaches are either carried out by humans or make use of automatic tools, one can simplify them into two categories, i.e., _human-_ vs. _machine-based_ techniques1. Both can rely on either the _behavior_ or the _content_ of the user. We focus purely on content, which is, strictly speaking, _text_ (in the case of Twitter posts, so-called _tweets_) (Alothali et al., 2018).
Footnote 1: this is true at least for all the mentioned exemplary studies, even though Orabi et al. (2020) use different terminology, e.g. graph-based, anomaly-based, etc.
Human-based Detection.When tasked to identify computer-generated documents, human individuals mainly focus on the semantic coherence of the presented texts. This contrasts with machine-based detection approaches, which instead focus on statistical properties, such as word probabilities and sampling schemes (Ippolito et al., 2020). Dugan et al. (2020) present a tool to evaluate human detection and conclude that it is relatively easy to fool humans for the above reasons.
Machine-based Detection.Statistical methods for machine-generated text detection are based on the fact that different modeling techniques leave detectable artifacts in the generated texts (Tay et al., 2020).
Furthermore, there exist various rule-based models to detect automatically generated texts, which are based on improbable word sequences and grammar (Cabanac and Labbe, 2021) or on similarity measures including word overlap (Harada et al., 2021).
RoBERTa or BERT-based classifiers have downsides, such as their tendency to over-fit and the need to train a classification model every time a new generator is released. To solve this issue, zero-shot classifiers like DetectGPT (Mitchell et al., 2023) have been developed, requiring only a duplicate of the model that should be tested and a second language model to introduce random perturbations to the test string. Because they rely on randomly perturbing the test string, they cannot operate on a short input string.
To conclude, due to the novelty of most classification approaches, there are few scientific publications on bypassing them, like the paraphrasing-based one from Krishna et al. (2023). Since we use reinforcement learning to adjust our generative model to bypass the classifier, our approach is based on the paper "Fine-Tuning Language Models from Human Preferences" by Ziegler et al. (2020).
## 3 Methodology
Our methodology is structured as follows. We describe the data set used and the training of the generator models. Then, we investigate different approaches for machine-generated text detection, starting with shallow learning methods and ranging to transformer-based detectors. Finally, we present our reinforcement learning model aimed at generating tweets that evade the previous detection approaches.
### Data set
Our data set consists of tweets recorded between January and February 2020. The data is saturated with spam and advertising content, making it poorly suitable as direct training input. We, therefore, applied a set of filter policies.
First, we keep only tweets composed in English to achieve comparable results when evaluating against other state-of-the-art literature. Next, we extract texts of verified users with fewer than 100,000 followers to avoid accounts that promote advertisements. Subsequently, we choose non-truncated tweets, as longer texts might get truncated if not fully requested, which is undesirable for training. Furthermore, we restrict ourselves to original tweets and omit quotes, replies, and retweets to obtain a data set of more diverse texts. Finally, we check that the author of the tweets exhibits a moderate average tweet volume (at most 20 tweets per day) to avoid spam content.
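A minimal sketch of this filter chain, assuming Twitter-API-style field names such as `lang`, `user.verified`, `user.followers_count`, and `truncated` (the field names and the `daily_volume` lookup are our assumptions, not taken from the paper):

```python
# Sketch of the filter policies of Section 3.1; `daily_volume` maps a
# user id to the account's average tweets per day (assumed helper).
MAX_FOLLOWERS = 100_000
MAX_TWEETS_PER_DAY = 20

def keep(tweet, daily_volume):
    """Return True if a tweet passes all filter policies."""
    user = tweet["user"]
    return (
        tweet["lang"] == "en"                           # English only
        and user["verified"]                            # verified author
        and user["followers_count"] < MAX_FOLLOWERS     # avoid ad-heavy accounts
        and not tweet["truncated"]                      # full text available
        and "retweeted_status" not in tweet             # original content only:
        and tweet["in_reply_to_status_id"] is None      #   no retweets, replies,
        and not tweet["is_quote_status"]                #   or quotes
        and daily_volume[user["id"]] <= MAX_TWEETS_PER_DAY  # no spam-level volume
    )
```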
Overall, the data set consists of \(k=2,000,000\) tweets from \(n=136,450\) real accounts and is split into two equal parts, one for the transfer learning of the language models and one for testing the classifiers. We complement both training and evaluation data sets by adding real-world tweets. The generated tweets then form the two data sets "Fake-Tweets Evaluation" and "Fake-Tweets Training". The counterparts ("Real Tweets Evaluation" and "Real Tweets Training") are then built out of the original filtered tweets as illustrated in Figure 1.
### Training of Generative Models
Our initial focus is to examine the efficacy of a fine-tuned LLM when generating tweets, as we believe they are more challenging to create than texts containing longer passages. Additionally, we aim to assess the capability of various existing detectors in identifying machine-generated short texts. Our experiments range from simple methods such as Naive Bayes to more advanced models such as BERT.
As a basis for our research, we make use of various GPT variants, including GPT-2 (Radford et al., 2019), GPT-J-6B (Wang and Komatsuzaki, 2021), GPT-Neo-125M, GPT-Neo-1.3B, GPT-Neo-2.7B (Black et al., 2021), OPT-125M, OPT-350M, OPT-1.3B and OPT-2.7B (Zhang et al., 2022). Despite the availability of larger free models, we opt to utilize transfer learning-compatible models on a machine equipped with an A6000 GPU and 128GB of RAM.
Each fine-tuned model generates two sets of fake tweets with a given sampling strategy and temperature. These are used to train and evaluate the detector model. The size of the generated sets varies, ranging from 10,000 for comparisons of simple Bag-of-Words classifiers to 100,000 for training BERT classifiers. Examples of generated tweets can be taken from Table 1.
### Shallow Detectors for machine-generated Texts
We begin by exploring the effectiveness of a Naive Bayes classifier using Bag-of-Words (BoW) feature inputs to identify synthetic tweets. This classifier is chosen due to its simplicity and short training time. Due to the restricted number of features that BoW extracts from the texts, modifications in the generation parameters of language models, such as sampling methods, _k_-values, _p_-values, etc., exert a significantly greater influence on the classifier's output.
We aim to examine how the parameters of the generator, namely temperature, sampling strategy, sampling size, and model size, influence the accuracy of the classifier. For this, we performed a grid-based approach in training the classifier with different parameterizations, each time conducted using 10,000 real and generated tweets.
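A minimal sketch of such a detector, assuming `real_tweets` and `fake_tweets` hold the training splits as lists of strings (scikit-learn is our choice of library here; the paper does not name one):

```python
# Sketch: Bag-of-Words Naive Bayes detector for synthetic tweets.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

texts = real_tweets + fake_tweets            # 10,000 tweets of each class
labels = [0] * len(real_tweets) + [1] * len(fake_tweets)
X_tr, X_te, y_tr, y_te = train_test_split(texts, labels,
                                          test_size=0.2, stratify=labels)

detector = make_pipeline(CountVectorizer(), MultinomialNB())
detector.fit(X_tr, y_tr)
print("accuracy:", detector.score(X_te, y_te))
```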
Figure 1: Data pipeline used for modeling

Temperature.A generator's temperature \(\tau\) alters the probability distribution of the next word. A low value sharpens the distribution curve, resulting in more accurate sentences. In contrast, a higher value flattens the curve and thus allows a higher variety of words to be chosen, increasing the outcome's variance at the cost of linguistic coherence. In our experiments, we vary temperature values ranging from 0.8 to 1.4.
Sampling strategy.LLMs produce sentences by generating one token at a time. Token selection depends on the selected sampling method. For example, greedy search always selects the most probable next token, whereas random sampling strategies pick the next token based on its probability. This process can be limited to a certain number (k-sampling) or a combined probability (p-sampling) of the most probable next tokens. Furthermore, to enhance the unpredictability of word generation, one can employ the typical sampling technique using conditional entropy (Meister et al., 2023).
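For illustration, a hedged sketch of how these sampling parameters map onto the Hugging Face `generate` API (the checkpoint name and parameter values are placeholders; typical sampling is exposed as `typical_p` in recent library versions):

```python
# Sketch: drawing samples from a fine-tuned causal LM with the
# generation parameters varied in our grid.
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")      # stands in for any fine-tuned model
model = AutoModelForCausalLM.from_pretrained("gpt2")

ids = tok("Breaking:", return_tensors="pt").input_ids
out = model.generate(
    ids,
    do_sample=True,    # random sampling instead of greedy search
    temperature=1.2,   # tau > 1 flattens, tau < 1 sharpens the distribution
    top_k=50,          # k-sampling: keep only the 50 most probable tokens
    top_p=0.95,        # p-sampling (nucleus): smallest set with mass >= 0.95
    max_new_tokens=40,
    pad_token_id=tok.eos_token_id,
)
print(tok.decode(out[0], skip_special_tokens=True))
```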
Sampling size.It is well known that both shallow and deep machine learning models yield better results when trained on larger data sets. This improvement diminishes beyond a certain amount of data, depending on the complexity of the model. We test each classification algorithm with a training set of varying sizes (1k, 10k, 50k, and 100k tokens) to investigate the correlation and saturation.
Model size.OpenAI's GPT family has achieved notable language modeling advancements through substantial parameter size increases. While larger models are expected to generate better text, they are more effective in creating longer texts. This is due to the larger context window associated with more complex models. Since our experiment involves generating short tweets, we aim to investigate if the model size affects the detectability of the language model. Our study compares models ranging from 125 million to 6 billion parameters.
### Transformer-based Detectors for machine-generated Texts
Given their state-of-the-art capabilities in text classification, transformer-based detectors are more likely to be used in production. We opted to use BERT as the primary reference model in the transformer family. Its performance is nearly on par with more advanced versions such as RoBERTa, as assessed by the MultiNLI benchmark (Williams et al., 2018). In their evaluation, BERT-Large records a score of 88% (Lee-Thorp et al., 2022), RoBERTa at 90.8% (Liu et al., 2019), and DeBERTa at 91.1% (He et al., 2021).
The evaluation process is similar to that of the shallow detectors, except that BERT's embeddings replace the BoW representation. Because of the training complexity, we refrain from fitting multiple BERT models with varying parameters.
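A sketch of such a BERT detector, assuming `train_ds` and `eval_ds` are `datasets.Dataset` objects with `text` and `label` columns (hyperparameters are illustrative, not the paper's):

```python
# Sketch: fine-tuning BERT as a machine-generated-text detector.
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)   # 0 = human, 1 = machine

def encode(batch):
    # tweets are short, so a small max_length suffices
    return tok(batch["text"], truncation=True, padding="max_length", max_length=64)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="detector", num_train_epochs=2,
                           per_device_train_batch_size=32),
    train_dataset=train_ds.map(encode, batched=True),
    eval_dataset=eval_ds.map(encode, batched=True),
)
trainer.train()
print(trainer.evaluate())
```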
### Reinforcement Learning to bypass the Detector
Our final contribution shows that even an advanced detection model such as BERT can be bypassed. We utilize a reinforcement learning approach for text generation that progressively learns from the detector output of previously generated texts. This approach roughly consists of three steps: rollout, evaluation, and optimization (von Werra et al., 2020), as discussed below.
The first step includes the model's rollout. Here, the language models from Section 3.2 generate an artificial tweet. For this, a model is provided with the initial part of an original tweet and is required to finish the sentence. In addition to completing tweets, it sometimes has to generate full texts independently to prevent overfitting on short texts.
Secondly, the evaluation step takes place. Texts and responses are supplied to the corresponding BERT classifier, as described in Section 3.4. Should a classifier identify the text as composed by humans, a positive reward is assigned to the reinforcement learning algorithm, otherwise a negative one. Raw logits tend to perform best in this case.

| **Model** | **Tweet** |
| --- | --- |
| GPT-Neo-125M | Kobe says new coronavirus warning on plane is too difficult to understand. |
| OPT-125M | I’m sure a few will be added in a future update as part of the “Duke” legacy. |
| OPT-350M | Good luck on the final stage of your tour! |
| GPT-Neo-1.3B | The new album is out now; make sure you have the album download code for free. |
| OPT-1.3B | Rangers’ Henrik Lundqvist: “I’m not even thinking about” the trade rumors, says the goalie |
| GPT-Neo-2.7B | #ValentinesDay: Today is the day to celebrate the greatness of yourself. And to appreciate... |
| OPT-2.7B | A very cold, chilly #day for #Lincoln and #Omaha #MorningWeather |
| GPT-J-6B | “This is how we play games!” Let’s hear “The Box” tonight with @OzzyOsbourne... |

Table 1: Example tweets generated with different models with temperature = 1.0 and top-50 sampling
Finally, the optimization step takes place. The log probabilities of the tokens are calculated to compare the active language model to the reference model. This is part of the reinforcement learning described by Ziegler et al. (2020) and prevents the adjusted model from overoptimizing.
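A compressed sketch of this three-step loop, assuming the `trl` library's `PPOTrainer` API (von Werra et al., 2020); exact signatures vary between `trl` versions, and `query_batches` and `human_logit_of` are placeholder helpers, not part of the paper:

```python
# Sketch of the rollout / evaluation / optimization loop with trl.
import torch
from transformers import AutoTokenizer
from trl import AutoModelForCausalLMWithValueHead, PPOConfig, PPOTrainer

tok = AutoTokenizer.from_pretrained("gpt2")  # stands in for a fine-tuned generator
policy = AutoModelForCausalLMWithValueHead.from_pretrained("gpt2")
ref = AutoModelForCausalLMWithValueHead.from_pretrained("gpt2")  # frozen reference
ppo = PPOTrainer(PPOConfig(batch_size=16, mini_batch_size=16), policy, ref, tok)

for queries in query_batches:  # each batch holds `batch_size` tweet prefixes
    q = [tok(t, return_tensors="pt").input_ids[0] for t in queries]
    # rollout: the policy completes each prefix
    full = [ppo.generate(qi, max_new_tokens=40, do_sample=True)[0] for qi in q]
    r = [f[len(qi):] for f, qi in zip(full, q)]
    texts = [tok.decode(torch.cat([qi, ri]), skip_special_tokens=True)
             for qi, ri in zip(q, r)]
    # evaluation: reward is the detector's raw logit for the "human" class
    rewards = [torch.tensor(human_logit_of(t)) for t in texts]
    # optimization: PPO step with a KL penalty against the reference model
    ppo.step(q, r, rewards)
```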
We now delve into more detail about our proposed handcrafted reward function. This reward supplements the detector's response and imposes a separate penalty on the generated text, even if it is classified as human-written, in cases where specific linguistic constraints are not satisfied. The calculation of rewards is depicted in Figure 2. If one or more of the supplementary rules listed below produce unfavorable results, the smallest (i.e., most negative) value is chosen. Conversely, if all the additional rules yield favorable results, the value indicating that humans created the text is selected; a minimal sketch of this combination is given after the list of rules below.
Special characters.A text should not include more than 25% of special characters. If it does, the model assigns a linearly decreasing negative reward of up to -1 if the text consists exclusively of special characters.
Repetitions.If a text contains more than two repetitions of the same word, it is assigned a negative reward of up to -1 when repeated eight times.
Linguistic acceptability.The linguistic acceptability is checked by the usage of a DeBERTa-v3-large classifier (He et al., 2023) trained on the Corpus of Linguistic Acceptability (CoLA; Warstadt et al., 2019). Since this CoLA data set contains sentences marked by humans as grammatically acceptable or unacceptable, the trained DeBERTa model is able to judge how acceptable a given sentence is. If the value is below a threshold of 40%, a negative reward is returned, which decreases to -1 at 0% acceptability. Note that we relax the threshold value to 30% when applied to models with over 2 billion parameters, as they are harder to train.
The thresholds mentioned were not taken from a reference paper but determined empirically. Generally, it is advisable to aim for a higher threshold, as it directly affects the text's linguistic acceptability. Nevertheless, we discovered that setting the threshold too restrictively can cause the reinforcement learning process to fail. In such cases, the generated text might never pass the rules, rendering it impossible for the model to obtain rewards and impeding its learning progress.
Dictionary. To obtain natural texts, at least 25% of the tokens have to appear in a dictionary. Otherwise, a linearly increasing negative reward is assigned, which reaches -1 if none of the words appear in a dictionary. We remark that this value should be chosen relatively low in the context of tweets, as short texts typically contain more abbreviations and slang.
Word-emoji relationship. A tweet must not contain more emojis than words. If it does, the reward is progressively penalized, reaching the maximum penalty when only 25% of the tweet consists of words. Emojis are not counted as special characters.
Number of Emojis. There should not be more than three emojis in one tweet. Every additional emoji incurs a penalty of -0.4, down to a minimum of -1.
Repetition of the Query. To generate unique texts different from the input query, we assign a negative reward if more than half of the query is included in the output. This reward decreases to -1 if the entire query is repeated.
Special Token. There should not be more than two special tokens in one generated tweet, including BOS (beginning of sequence) or EOS (end of sequence) delimiters. Every additional token incurs a negative reward of -0.4, down to -1.
Same start. Sometimes, we require the model to generate tweets without an input query. In such cases, it is essential that the model still generates diverse outputs. Consequently, a negative reward is returned if more than 10% of the tweets within a single training batch start with the same word. This value decreases linearly to -1 if 20% of all sentences start identically.
Figure 2: RL reward calculation
Numbers at the start. Like the previous policy, we do not want tweets to start with numbers frequently. If the model once learns that a sentence beginning with a number can circumvent the classifier, it may reuse this pattern on other occasions. Hence, only 10% of the sentences within a training batch are allowed to start with a number, resulting in a negative reward of up to -1 if there are more.
Unknown characters. Occasionally, language models insert filler characters, represented as unknown characters. This typically occurs when unknown characters are included in the data set used for fine-tuning. To prevent the model from using these characters to bypass the classifier, generating them results in a negative reward starting at -0.5 and decreasing further if the replacement character appears more than once.
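As an illustration, the sketch below (our own simplification, not the exact implementation) shows the combination logic of Figure 2 together with one of the rules; the remaining rules follow the same pattern of a linearly decreasing penalty.

```python
def special_char_penalty(text: str) -> float:
    """'Special characters' rule: no penalty up to 25% special characters,
    then a linear penalty reaching -1 for a text of only special characters."""
    if not text:
        return -1.0
    frac = sum(not (c.isalnum() or c.isspace()) for c in text) / len(text)
    return 0.0 if frac <= 0.25 else -(frac - 0.25) / 0.75

def combined_reward(detector_human_score: float, rule_values: list[float]) -> float:
    """If any rule is violated, the most negative rule value wins;
    otherwise the detector's 'human' score is used as the reward."""
    violations = [v for v in rule_values if v < 0.0]
    return min(violations) if violations else detector_human_score

# Example: a clean text passes all rules and inherits the detector score.
text = "Good luck on the final stage of your tour!"
print(combined_reward(0.97, [special_char_penalty(text)]))  # -> 0.97
```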
These additional optimization rules were found by analyzing various preliminary runs and observing the logs of queries and responses during RL training. It is worth noting that RL can function without these rules, but the results are considerably inferior, as demonstrated by outputs such as: "Something for Administrator930 Macy's Displays! RIP Family Members". In this particular case, the rule for linguistic acceptability would have prevented a positive reward.
For bigger models, the reward of the bot classifier can be multiplied by a factor, e.g., 10, to give the task of bypassing the classifier a higher priority than producing excellent sentences.
### Applicability to other text domains
We conducted a second iteration using the CNN/Daily Mail data set (Hermann et al., 2015) to demonstrate the practical applicability of the reinforcement learning approach. This also shows that our approach can be adjusted to generate fake news. The training and testing procedure for this iteration was similar to that for the fake tweets, except that the linguistic refinement filters were reduced to a single one, which ensured that not all generated texts start the same.
## 4 Results
### Evaluation of Shallow Detectors
Word Distributions. By altering the probability distribution of the generating language model, sampling leaves traces that statistical classifiers like Naive Bayes can detect easily. In Figure 3, a quantile-quantile plot visualizes the statistical differences that these classifiers rely on: the probability distributions of words present in a generated corpus, shown for different temperature values, differ significantly from the word distributions in human-written texts.
This shows how machine-based detection differs significantly from human-based detection. A classifier can detect changes in the probability distribution in both directions, while humans rely on language comprehension. The level of linguistic acceptability increases when the distribution curve is sharpened and the probability of the most likely next words increases. However, reducing the temperature to achieve this comes at the expense of reducing the information content.
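The effect of the temperature parameter can be illustrated in a few lines of code; this toy example (our own, not part of the evaluation pipeline) applies temperature-scaled softmax to a fixed set of logits.

```python
import numpy as np

def temperature_softmax(logits, tau):
    """Temperature-scaled softmax over next-token logits:
    tau < 1 sharpens the distribution, tau > 1 flattens it."""
    z = np.asarray(logits, dtype=float) / tau
    z -= z.max()  # subtract the maximum for numerical stability
    p = np.exp(z)
    return p / p.sum()

logits = [3.0, 2.0, 1.0, 0.5]
# Sharper: higher linguistic acceptability, but less information content.
print(temperature_softmax(logits, 0.5))
# Flatter: more diverse texts, at the cost of readability.
print(temperature_softmax(logits, 1.5))
```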
Figure 4: Sampling methods’ detection rates by temperature (top k = 100, nucleus p = 0.95)
Figure 3: Humans’ against machines’ word probability distributions
Sampling Schemes. Next, we compare how different sampling methods used for generating texts affect the detection rates of shallow models. The results can be seen in Figure 4. The easiest method to detect is greedy search, which selects the most likely next word based on maximum likelihood. Typical-p sampling ranks second; it relies on conditional entropy and thus deviates from the original distribution. Top-k sampling also differs significantly from the initial distribution, as it does not consider the curvature of the distribution. Finally, nucleus sampling, which adapts to the various shapes of the probability curves for the next token, is the most evasive method apart from sampling with pure randomness.
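To make the differences tangible, here is a generic sketch of top-k and nucleus filtering over a toy next-token distribution (our own illustration, not the experimental code):

```python
import numpy as np

def top_k_filter(probs: np.ndarray, k: int) -> np.ndarray:
    """Keep the k most likely tokens and renormalize
    (ignores the curvature of the distribution)."""
    keep = np.argsort(probs)[::-1][:k]
    out = np.zeros_like(probs)
    out[keep] = probs[keep]
    return out / out.sum()

def nucleus_filter(probs: np.ndarray, p: float) -> np.ndarray:
    """Keep the smallest prefix of tokens whose cumulative mass
    reaches p (adapts to the shape of the probability curve)."""
    order = np.argsort(probs)[::-1]
    cutoff = int(np.searchsorted(np.cumsum(probs[order]), p)) + 1
    out = np.zeros_like(probs)
    out[order[:cutoff]] = probs[order[:cutoff]]
    return out / out.sum()

probs = np.array([0.5, 0.2, 0.15, 0.1, 0.05])
print(top_k_filter(probs, k=2))      # only the two most likely tokens survive
print(nucleus_filter(probs, p=0.8))  # here three tokens are needed to reach 80% mass
```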
Sampling Size. Shallow learning models such as Naive Bayes require small amounts of training data compared to deep learning models such as BERT. In Figure 5, we list how different training data sizes affect a shallow detector's accuracy in identifying machine-generated texts. As can be seen, even with a training set of as few as 1000 data points, the detector can still capture some dependencies. Increasing the size of the training data improves the detector's quality. However, the improvements successively diminish above a certain size, e.g., from 50k to 100k samples.
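Such a shallow detector can be set up in a few lines; the sketch below is our own minimal illustration with placeholder data (the example tweets of Table 1 stand in for the machine-generated class), not the exact experimental pipeline.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Placeholder corpora; the real experiments use large sets of tweets.
human = ["see you this weekend! we'll be here again",
         "flush those lemons. This food ain't good"]
machine = ["Kobe says new coronavirus warning on plane is too difficult to understand.",
           "Good luck on the final stage of your tour!"]
texts, labels = human + machine, [0, 0, 1, 1]

# Word/bigram counts expose the shifted word-probability distribution of generated text.
detector = make_pipeline(CountVectorizer(ngram_range=(1, 2)), MultinomialNB())
detector.fit(texts, labels)
print(detector.predict(["The new album is out now"]))
```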
In contrast to the training sample size, the parameter count of the evaluated generator models has only a minor impact on the classification accuracy, as seen in Figure 6. In this figure, we group the GPT and OPT model families in similar colors, where darker lines represent models with more parameters. One possible explanation for this behavior is that even though bigger models typically perform better on longer text passages due to larger context windows, the generation of short texts hardly benefits from this property. In addition, given that the generator family (OPT vs. GPT) leads to larger differences in classification accuracy, we hypothesize that model design and training corpus play a more crucial role in the generation of short texts than sheer model size.
### Evaluation of Transformer-based Detectors
The introduction of self-attention layers in transformer-based models has led to significant improvement in their performance. These layers allow the classifier to translate a token into vector space based on the surrounding tokens. As a result, all evaluated BERT models have achieved an F1-Score of over 0.9 (cf. Tab. 2, column _pre_). The parameters for tweet generation included top 50 sampling with a temperature of \(\tau=1.0\). While these results are impressive, there is a possibility that the BERT classifier might overfit one specific method and perform poorly on other generative models.
\begin{table}
\begin{tabular}{l l l l}
\multirow{2}{*}{**Model**} & \multirow{2}{*}{**Parameters**} & \multicolumn{2}{c}{**BERT F1-Scores**} \\ \cline{3-4}
& & _pre_ & _post_ \\ \hline
GPT-Neo & 125M & \(0.952\) & \(0.031\) \\
GPT-Neo & 1.3B & \(0.933\) & \(0.146\) \\
GPT2 & 1.5B & \(0.929\) & \(0.090\) \\
GPT-Neo & 2.7B & \(0.947\) & \(0.004\) \\
GPT-Neo* & 2.7B & \(0.959\) & \(0.173\) \\
\end{tabular}
*Fake news based on the CNN/Daily Mail data set and model
\end{table}
Table 2: F1-Scores before and after RL
Figure 5: Comparison of different sampling sizes
Figure 6: Generator models’ ACC by temperatures
### Evaluation of Reinforcement Learning
Transformer-based classifiers are generally very reliable in distinguishing real and generated content, as investigated in the previous sections. However, we show that a carefully fine-tuned reinforcement learning approach can bypass these well-performing models. To obtain reliable results, much emphasis has to be placed on optimizing the model's hyperparameters, including the learning rate, mini-batch size, choice of optimizer, and threshold for detecting linguistic acceptability. Empirically, a learning rate of \(5\cdot 10^{-5}\) and a mini-batch size of 4 tend to yield the best results.
For the GPT-Neo-2.7B model, we reduced the linguistic acceptability threshold from 0.4 to 0.3, since bigger models are more susceptible to manual interventions. Similarly, for the 2.7 billion parameter model we replaced the Adam optimizer with the novel Lion optimizer (Chen et al., 2023), which reportedly outperforms the former.
We used the same sampling technique during the reinforcement learning and evaluation phases to ensure reliable results. Table 3 presents a snippet of the log that documents queries, responses, and rewards during the reinforcement learning process. As stated earlier, a positive reward is only possible if the combination of query and response has been classified as human and none of the supporting rules outlined in Section 3.5 have yielded negative results. In this case, we rely solely on the BERT classifier's reward to detect generated text.
We list the classifier's F1-scores before and after the reinforcement learning process in Table 2. The 2.7 billion parameter model had a lowered threshold for linguistic acceptability, resulting in significantly better bypass results, but at the expense of lower linguistic quality. The experiment that adapts the generator to the CNN/Daily Mail data set demonstrates the applicability of our approach to other text domains. Overall, our experiments with four open-source models showed that a BERT classifier can be bypassed using a reinforcement learning training method.
## 5 Conclusions and Outlook
In this paper, we highlighted the significance of sampling techniques and the essential role of employing advanced models like transformers to detect generated texts accurately. We also demonstrated how a malicious actor could adapt generative models to evade a detector if accessible. However, this process is not simple and requires substantial computational power and hyperparameter tuning. As the parameter count of the generative model increases, the reinforcement learning process becomes more complex, and the output quality may decrease. While the risk of malicious actors modifying language models remains more theoretical than practical, it retains a degree of plausibility.
In various scenarios, such as completing homework in schools, submitting assignments at universities, or identifying suspicious campaigns on social media platforms like Twitter, it is essential to differentiate between human-written and machine-generated texts. OpenAI previously provided a detection model for this purpose, but it was discontinued in July 2023 due to its inconsistency in distinguishing between the two (Kirchner et al., 2023). Our experiments have revealed a trade-off between creating easily identifiable texts that lack diversity and generating more diverse texts that are harder to differentiate but might not be as readable to humans. Texts from larger generative models may therefore be more challenging to spot, as such models can combine linguistic acceptability and diversity: their size lets them generate a wider range of plausible next tokens while still producing high-quality texts. Consequently, the trade-off between high-quality texts and low detectability is expected to diminish.
Therefore, it is essential to consider that language models may become undetectable. In such a scenario, students using ChatGPT to complete their homework could be viewed as a lesser evil. This could lead to the creation of customized social engineering bots that steal personal data without human input from the attacker, the development of malicious code by those without coding knowledge, and customized spear phishing emails.

\begin{table}
\begin{tabular}{l|l|r}
**query** & **response** & **reward** \\ \hline
FINAL & UPDATE: Chancellor says he has not shown anxiety & \(0.8019\) \\
\(<\)\(|\)startoftext\(|\)\(>\) & see you this weekend! we’ll be here again cider is back in \(\nu\) & \(0.9674\) \\
\(<\)\(|\)startoftext\(|\)\(>\) & flush those lemons. This food ain’t good & \(0.9628\) \\
Ozzy Osbourne cancel & s gig at studio theatre tonight over coronavirus 2020 forced & \(-0.0089\) \\
Today we’ll know & soon morning if there’s blood somewhere in & \(-0.1756\) \\
The best \#adventure this & year on 2020, dudes. Can we really & \(-0.2899\) \\
\(<\)\(|\)startoftext\(|\)\(>\) & Yeah it’s weird how local commentators & \(0.9696\) \\
\end{tabular}
\end{table}
Table 3: **Example of the reinforcement learning process**
Based on promising approaches such as DetectGPT [14], the withdrawal of the OpenAI classification model, and our own findings, we would like to emphasize that detecting and preventing these types of incidents is an emerging research field with ample opportunities for further publications.
## Acknowledgements
The authors would like to thank the Chair for Communication Systems and Network Security for their valuable discussions and feedback as well as the Research Institute CODE for providing hardware.
|
2305.03697 | Fault-Tolerant ST-Diameter Oracles | We study the problem of estimating the $ST$-diameter of a graph that is
subject to a bounded number of edge failures. An $f$-edge fault-tolerant
$ST$-diameter oracle ($f$-FDO-$ST$) is a data structure that preprocesses a
given graph $G$, two sets of vertices $S,T$, and positive integer $f$. When
queried with a set $F$ of at most $f$ edges, the oracle returns an estimate
$\widehat{D}$ of the $ST$-diameter $\operatorname{diam}(G-F,S,T)$, the maximum
distance between vertices in $S$ and $T$ in $G-F$. The oracle has stretch
$\sigma \geq 1$ if $\operatorname{diam}(G-F,S,T) \leq \widehat{D} \leq \sigma
\operatorname{diam}(G-F,S,T)$. If $S$ and $T$ both contain all vertices, the
data structure is called an $f$-edge fault-tolerant diameter oracle ($f$-FDO).
An $f$-edge fault-tolerant distance sensitivity oracles ($f$-DSO) estimates the
pairwise graph distances under up to $f$ failures.
We design new $f$-FDOs and $f$-FDO-$ST$s by reducing their construction to
that of all-pairs and single-source $f$-DSOs. We obtain several new tradeoffs
between the size of the data structure, stretch guarantee, query and
preprocessing times for diameter oracles by combining our black-box reductions
with known results from the literature.
We also provide an information-theoretic lower bound on the space requirement
of approximate $f$-FDOs. We show that there exists a family of graphs for which
any $f$-FDO with sensitivity $f \ge 2$ and stretch less than $5/3$ requires
$\Omega(n^{3/2})$ bits of space, regardless of the query time. | Davide Bilò, Keerti Choudhary, Sarel Cohen, Tobias Friedrich, Simon Krogmann, Martin Schirneck | 2023-05-05T17:20:00Z | http://arxiv.org/abs/2305.03697v1 | # Fault-Tolerant ST-Diameter Oracles
###### Abstract
We study the problem of estimating the \(ST\)-diameter of a graph that is subject to a bounded number of edge failures. An \(f\)_-edge fault-tolerant \(ST\)-diameter oracle_ (\(f\)-FDO-\(ST\)) is a data structure that preprocesses a given graph \(G\), two sets of vertices \(S,T\), and positive integer \(f\). When queried with a set \(F\) of at most \(f\) edges, the oracle returns an estimate \(\widehat{D}\) of the \(ST\)-diameter \(\operatorname{diam}(G{-}F,S,T)\), the maximum distance between vertices in \(S\) and \(T\) in \(G{-}F\). The oracle has stretch \(\sigma\geqslant 1\) if \(\operatorname{diam}(G{-}F,S,T)\leqslant\widehat{D}\leqslant\sigma\operatorname {diam}(G{-}F,S,T)\). If \(S\) and \(T\) both contain all vertices, the data structure is called an \(f\)_-edge fault-tolerant diameter oracle_ (\(f\)-FDO). An \(f\)_-edge fault-tolerant distance sensitivity oracles_ (\(f\)-DSO) estimates the pairwise graph distances under up to \(f\) failures.
We design new \(f\)-FDOs and \(f\)-FDO-\(ST\)s by reducing their construction to that of _all-pairs_ and _single-source_\(f\)-DSOs. We obtain several new tradeoffs between the size of the data structure, stretch guarantee, query and preprocessing times for diameter oracles by combining our black-box reductions with known results from the literature.
We also provide an information-theoretic lower bound on the space requirement of approximate \(f\)-FDOs. We show that there exists a family of graphs for which any \(f\)-FDO with sensitivity \(f\geqslant 2\) and stretch less than \(5/3\) requires \(\Omega(n^{3/2})\) bits of space, regardless of the query time.
diameter oracles, distance sensitivity oracles, space lower bounds, fault-tolerant data structures

## 1 Introduction
can quickly report the desired solution or graph property of the network on occurrence of up to \(f\) edge/vertex failures. The parameter \(f\) that describes the degree of robustness against errors is known as the _sensitivity_ of the oracle. A lot of work has been done in designing fault-tolerant structures for various problems like connectivity [20, 32, 33], finding shortest paths [2, 12, 36], and distance sensitivity oracles [5, 7, 14, 24, 30, 31, 34, 47].
While the fault-tolerant model has been studied a lot for distances, the landscape of fault-tolerant diameter oracles is far less explored. For a given graph \(G=(V,E)\) and two sets \(S,T\subseteq V\) of vertices, an _\(f\)-edge fault-tolerant \(ST\)-diameter oracle_ (\(f\)-FDO-\(ST\)) is a data structure that stores information about \(G\) after a preprocessing step. When queried with a set \(F\) of at most \(f\) edges, the oracle returns an upper bound of the \(ST\)-diameter \(\operatorname{diam}(G-F,S,T)=\max_{s\in S,t\in T}d_{G-F}(s,t)\) of \(G-F\). This is the maximum among all \(s\)-\(t\)-distances for \(s\in S\) and \(t\in T\) under the condition that none of the shortest paths can use an edge in the query set \(F\). We say that the oracle has a stretch of \(\sigma\geqslant 1\) if the value \(\widehat{D}\) returned upon query \(F\) satisfies \(\operatorname{diam}(G-F,S,T)\leqslant\widehat{D}\leqslant\sigma\operatorname{ diam}(G-F,S,T)\). When \(S=T=V\), the data structure is called an _\(f\)-edge fault-tolerant diameter oracle_ (\(f\)-FDO).
The problem of designing \(f\)-FDOs was originally raised by Henzinger, Lincoln, Neumann, and Vassilevska Williams [40] and has recently been studied by Bilo, Cohen, Friedrich, and Schirneck [17] and the same authors together with Choudhary [15], see also Section 1.1.
The problem of designing \(f\)-FDO-\(ST\)s can be seen as a generalisation of the Bichromatic Diameter, a problem in which the two sets \(S\) and \(T\) form a partition of \(V\). The latter problem is motivated by several related, well-studied problems in computational geometry, e.g., Bichromatic Diameter on point sets (commonly known as Bichromatic Farthest Pair), where one seeks the farthest pair of points between two given point sets in space. The problem of Bichromatic Diameter was studied by Dalirrooyfard, Vassilevska Williams, Vyas, and Wein [28].
Given the plethora of work on distance oracles and the close connection between the distance and the diameter problem, a natural question is whether we can convert the results on distance computation under failures into analogous oracles for the diameter without sacrificing much in the performance parameters.
Are there black-box reductions from fault-tolerant diameter oracles to fault-tolerant distance oracles without considerable overhead in stretch, query time, and space?
In this work, we present several such reductions and, from them, conclude trade-offs between the space, stretch, preprocessing, and query time for diameter oracles. In more detail, our techniques for obtaining upper bounds is by presenting reductions to the problem of constructing _\(f\)-edge fault-tolerant distance sensitivity oracles_ (\(f\)-DSOs) in two widely studied categories. The _all-pairs_ variant can be queried with any pair of vertices \(s,t\in V\) and set \(F\subseteq E\) of \(f\) failures and reports (an estimate) of the distance \(d_{G-F}(s,t)\) between \(s\) and \(t\) in \(G-F\). In the _single-source_ variant, the source \(s\) is fixed and the set of allowed queries consists of the target vertices \(t\) together with a set \(F\) of failures.
For the regular diameter (\(S=T=V\)), we provide two theorems showing that both all-pairs and single-source \(f\)-DSOs can be used to construct \(f\)-FDOs.
**Theorem 1.** Let \(G\) be an (undirected or directed) graph with \(n\) vertices, \(m\) edges, and possibly positive edge weights. Given access to an \(f\)-DSO \(\mathcal{D}\) for \(G\) with stretch \(\sigma\geqslant 1\), preprocessing time \(\mathtt{P}\), space \(\mathtt{S}\), and query time \(\mathtt{Q}\), one can construct an \(f\)-FDO for \(G\) with stretch \(1+\sigma\), preprocessing time \(O(mn\log n+\mathtt{P})\), space \(O(\mathtt{S})\), and query time \(O(f^{2}\mathtt{Q})\).
In Section 1.2, we review existing all-pairs \(f\)-DSOs. By applying the reduction stated in Theorem 1 we obtain new \(f\)-FDOs as listed in Table 1.
The following theorem shows how we can use the single-source variant of distance sensitivity oracles to construct \(f\)-FDOs.
**Theorem 2.** Let \(G\) be an (undirected or directed) graph with \(n\) vertices, \(m\) edges, and possibly positive edge weights. Given access to a single-source \(f\)-DSO \(\mathcal{D}\) for \(G\) with stretch \(\sigma\geqslant 1\), preprocessing time \(\mathtt{P}\), space \(\mathtt{S}\), and query time \(\mathtt{q}\), one can construct an \(f\)-FDO for \(G\) with stretch \(2(1+\sigma)\), preprocessing time \(O(\mathtt{P})\), space \(O(\mathtt{S})\), and query time \(O(f\mathtt{q})\).
Section 1.3 discusses single-source \(f\)-DSOs from the literature. Together with Theorem 2 they give new \(f\)-FDOs, summarized in Table 2.
The main technical contribution of this work, however, is a novel fault-tolerant data structure for the more general \(ST\)-diameter problem that was introduced and studied in recent years. For example, Backurs, Roditty, Segal, Vassilevska Williams, and Wein [4] proved that for any undirected graph one can compute a 3-approximation of the \(ST\)-diameter in \(O(mn)\) time. They also provided a randomized algorithm that computes a 2-approximation of the \(ST\)-diameter in \(\widetilde{O}(m\sqrt{n}\,)\) time.1 Dalirrooyfard, Vassilevska Williams, Vyas, and Wein [28] studied the problem of computing the bichromatic \(ST\)-diameter, the special case of the \(ST\)-diameter problem where the sets \(S\) and \(T\) form a partition of \(V\). Similar to \(f\)-FDOs, we explore the problem of designing compact oracles that report the \(ST\)-diameter of a graph after the occurrence of up to \(f\) failures. We present reductions between \(f\)-DSOs and \(f\)-FDO-\(ST\)s, as stated in the following theorem. To the best of our knowledge, our paper is the first work that provides results on \(f\)-FDO-\(ST\)s for general values of \(f\).
Footnote 1: For a non-negative function \(g(n,m,f)\), we write \(\widetilde{O}(g)\) for \(O(g\cdot\mathsf{polylog}(n))\).
**Theorem 3.** Let \(G=(V,E)\) be an undirected graph with \(n\) vertices, \(m\) edges, and possibly positive edge weights. Let \(S,T\subseteq V\) be two non-empty sets. Given access to an \(f\)-DSO for \(G\) with stretch \(\sigma\geqslant 1\), preprocessing time \(\mathtt{P}\), space \(\mathtt{S}\), and query time \(\mathtt{Q}\), one can compute an \(f\)-FDO-\(ST\) for \(G\) with preprocessing time \(\mathtt{P}+\widetilde{O}(mn+n|S||T|)\) and stretch \(1+3\sigma\). Additionally, the \(f\)-FDO-\(ST\) has the following properties.

* _If the sensitivity is_ \(f=o(\log n)\)_, the oracle requires_ \(\mathtt{S}+O(n^{3/2}\,(2^{f}+\log n))\) _space and has a query time of_ \(O(f^{2}\,(2^{f}+\mathtt{Q}))\)_._
* _If_ \(f=\Omega(\log n)\)_, the oracle requires_ \(\mathtt{S}+O(n^{2})\) _space and has a query time of_ \(O(f^{2}(f+\mathtt{Q}))\)_._

\begin{table}
\begin{tabular}{c c c c c c} \hline \hline
**Sensitivity** & **Stretch** & **Space** & **Query time** & **Preprocessing Time** & **Ref.** \\ \hline
1 & 2 & \(\widetilde{O}(n^{2})\) & \(O(1)\) & \(\widetilde{O}(mn)\) & [10, 11] \\
1 & 2 & \(\widetilde{O}(n^{2})\) & \(O(1)\) & \(\widetilde{O}(n^{2.5794}M+mn)\) & [37] \\
1 & \(1+(2k-1)(1+\varepsilon)\) & \(\widetilde{O}(k^{5}n^{1+1/k}\varepsilon^{-4})\) & \(O(k)\) & \(O(kmn^{1+1/k})\) & [8] \\ \hline
2 & 2 & \(\widetilde{O}(n^{2})\) & \(\widetilde{O}(1)\) & \(\mathsf{poly}(n)\) & [32] \\ \hline
\(f=o(\frac{\log n}{\log\log n})\) & 2 & \(\widetilde{O}(n^{3-\alpha})\) & \(\widetilde{O}(f^{2}n^{2-(1-\alpha)/f})\) & \(O(n^{\omega+1-\alpha}M)\) & [47] \\
\(f=o(\frac{\log n}{\log\log n})\) & \(2+\varepsilon\) & \(O(fn^{2+o(1)}(\log W)\,\varepsilon^{-f})\) & \(\widetilde{O}(f^{7}\log\log W)\) & \(O(fn^{\omega+o(1)}(\log W)\,\varepsilon^{-f})\) & [23] \\ \hline
\(f\geqslant 1\) & 2 & \(O(fn^{4})\) & \(f^{O(f)}\) & \(n^{O(f)}\) & [34] \\
\(f\geqslant 1\) & 2 & \(O(n^{2+\alpha}M)\) & \(\widetilde{O}(f^{4}n^{2-\alpha}M+f^{2+\omega}nM)\) & \(\widetilde{O}(n^{\omega+(3-\omega)\alpha}M+mn)\) & [18] \\
\(f\geqslant 1\) & \(1+(8k-2)(f+1)\) & \(O\left(fkn^{1+1/k}\log(nW)\right)\) & \(\widetilde{O}(f^{5})\) & \(\mathsf{poly}(n)\) & [24] \\ \hline \hline
\end{tabular}
\end{table}
Table 1: Properties of the \(f\)-FDOs obtained via Theorem 1 using all-pairs \(f\)-DSOs from the literature. The applicable graph class (un-/directed, un-/weighted) is determined by the \(f\)-DSO. \(W\) denotes the maximum edge weight for graphs with arbitrary positive weights, \(M\) is the maximum edge weight for integer-weighted graphs. The parameter \(k\geqslant 1\) is a positive integer, \(\varepsilon>0\) a positive real, \(\alpha\in[0,1]\) is a real number in the unit interval, and \(\omega<2.37286\) denotes the matrix multiplication exponent.
Some more remarks on the preprocessing time stated in Theorem 3 may be in order. The reduction itself takes time \(\mathsf{P}+O(mn+n^{2}\log n+n|S||T|)\) to compute but requires that the shortest paths in \(G\) are unique. The total preprocessing time depends on how this condition is achieved. It is always possible to guarantee unique shortest paths either by randomly perturbing the edge weights with sufficiently small values, see [41], or by using a more complex deterministic method, also known as _lexicographic perturbation_[19, 21, 39]. While the first method increases the preprocessing only by a constant factor, it makes the preprocessing procedure randomized. Lexicographic perturbation, in turn, increases the time by an additive \(O(mn+n^{2}\log^{2}n)\) term [19]. By applying the reduction stated in Theorem 3 to existing all-pairs \(f\)-DSOs we obtain the \(f\)-FDOs listed in Table 3.
In addition, we present improved constructions of \(f\)-FDO-\(ST\)s for the important case of a single source or target, i.e., when \(|S|=1\) or \(|T|=1\), or when one is only given access to single-source \(f\)-DSOs. In the following, for the sake of readability, when \(S=\{s\}\), we will use "\(sT\)-diameter" instead of "\(ST\)-diameter" or "\(\{s\}T\)-diameter", same for the oracles.
**Theorem 4.** Let \(G=(V,E)\) be an undirected graph with \(n\) vertices, \(m\) edges, and possibly positive edge weights. Let \(s\in V\) be a vertex and \(T\subseteq V\) a non-empty set. Given a single-source \(f\)-DSO for \(G\) with preprocessing time \(\mathsf{P}\), space \(\mathsf{S}\), query time \(\mathsf{Q}\), and stretch \(\sigma\), one can compute an \(f\)-FDO-\(sT\) for \(G\) with preprocessing time \(\mathsf{P}+O(m+n\log n)\), space \(\mathsf{S}+O(n)\), query time \(O(f^{2}+f\mathsf{Q})\), and stretch \(1+2\sigma\). For unweighted graphs, the preprocessing time can be improved to \(\mathsf{P}+O(m)\).
Table 4 shows the \(f\)-fault-tolerant \(sT\)-diameter oracles obtained from Theorem 4.
**Theorem 5.** Let \(G=(V,E)\) be an undirected graph with \(n\) vertices, \(m\) edges, and possibly positive edge weights. Let \(S,T\) be two non-empty subsets of \(V\). Given a single-source \(f\)-DSO for \(G\) with preprocessing time \(\mathsf{P}\), space \(\mathsf{S}\), query time \(\mathsf{Q}\), and stretch \(\sigma\), one can compute an \(f\)-FDO-\(ST\) for \(G\) with preprocessing time \(O(\mathsf{P}+m\!+\!n\log n)\), space \(O(\mathsf{S}+n)\), query time \(O(f^{2}+f\mathsf{Q})\), and stretch \(2+5\sigma\). For unweighted graphs, the preprocessing time can be improved to \(O(\mathsf{P}+m)\).
Table 5 corresponds to the oracles obtained via Theorem 5.
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline
**Sensitivity** & **Stretch** & **Space** & **Query time** & **Preprocessing Time** & **Ref.** \\ \hline
1 & 4 & \(\widetilde{O}(n^{3/2})\) & \(\widetilde{O}(1)\) & \(\widetilde{O}(mn^{1/2}+n^{2})\) & [16, 38] \\
1 & 4 & \(\widetilde{O}(n^{3/2}M^{1/2})\) & \(\widetilde{O}(1)\) & \(\widetilde{O}(n^{\omega}M)\) & [16] \\
1 & \(4+\varepsilon\) & \(\widetilde{O}(n(\log W)\,\varepsilon^{-1})\) & \(O(\log\log_{1+\varepsilon}(nW))\) & \(\mathsf{poly}(n)\) & [5, 8, 13] \\
1 & 6 & \(O(n)\) & \(O(1)\) & \(\widetilde{O}(mn)\) & [13] \\ \hline
\(f\geqslant 1\) & \(4f+4\) & \(\widetilde{O}(fn)\) & \(\widetilde{O}(f^{3})\) & \(\widetilde{O}(fm)\) & [14] \\ \hline \hline
\end{tabular}
\end{table}
Table 2: Properties of the \(f\)-FDOs obtained via Theorem 2 using single-source \(f\)-DSOs from the literature. The applicable graph class (un-/directed, un-/weighted) is determined by the single-source \(f\)-DSO. \(W\) denotes the maximum edge weight for graphs with arbitrary positive weights, \(M\) is the maximum edge weight for integer-weighted graphs. The parameter \(\varepsilon>0\) is a positive real and \(\omega<2.37286\) denotes the matrix multiplication exponent.
We also prove an information-theoretic lower bound on the space requirement of approximate \(f\)-FDOs that support \(f\geqslant 2\) edge failures. Note that the lower bound in the following theorem holds independently of the query time. It is known from the work of Bilo, Cohen, Friedrich, and Schirneck [17] that \(f\)-FDOs with stretch \(\sigma<1.5\) require \(\Omega(n^{2})\) bits of space, and in our work we complement this result by proving that \(f\)-FDOs with stretch \(\sigma<5/3\) require \(\Omega(n^{1.5})\) bits of space. Obtaining an \(\Omega(n^{2})\) lower bound for \(f\)-FDOs with stretch \(\sigma<2\) for undirected unweighted graphs is an interesting open problem.
**Theorem 6.** Let \(n\) be a positive integer. Any \(f\)-FDO or \(f\)-FDO-\(ST\) for \(n\)-vertex graphs with sensitivity \(f\geqslant 2\) and stretch \(\frac{5}{3}-\varepsilon\) for any \(\varepsilon>0\) requires \(\Omega(n^{3/2})\) bits of space.
Outline. This work is structured as follows. In the remainder of this section, we review the literature focusing on diameter oracles and distance sensitivity oracles. We then fix our notation and some preliminaries in Section 2. Section 3 presents our constructions of \(f\)-FDO-\(ST\)s for the general case of \(S,T\subseteq V\). In Section 4 we consider the special case of a single source, that is, \(f\)-FDO-\(sT\)s. In Section 5 we prove the space lower bound. The proofs of the remaining theorems follow from similar ideas as discussed in Section 3 and are deferred to the full version of the paper.
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline
**Sensitivity** & **Stretch** & **Space** & **Query time** & **Preprocessing Time** & **Ref.** \\ \hline
1 & 3 & \(\widetilde{O}(n^{3/2})\) & \(\widetilde{O}(1)\) & \(\widetilde{O}(mn^{1/2}+n^{2})\) & [16, 38] \\
1 & 3 & \(\widetilde{O}(n^{3/2}M^{1/2})\) & \(\widetilde{O}(1)\) & \(\widetilde{O}(n^{\omega}M)\) & [16] \\
1 & \(3+\varepsilon\) & \(\widetilde{O}(n(\log W)\,\varepsilon^{-1})\) & \(O(\log\log_{1+\varepsilon}(nW))\) & \(\mathsf{poly}(n)\) & [5, 8, 13] \\
1 & 5 & \(O(n)\) & \(O(1)\) & \(\widetilde{O}(mn)\) & [13] \\ \hline
\(f\geqslant 1\) & \(4f+3\) & \(\widetilde{O}(fn)\) & \(\widetilde{O}(f^{3})\) & \(\widetilde{O}(fm)\) & [14] \\ \hline \hline
\end{tabular}
\end{table}
Table 4: Properties of the \(f\)-FDO-\(sT\)s for undirected graphs obtained via Theorem 4 using single-source \(f\)-DSOs from the literature.
\begin{table}
\begin{tabular}{c c c c c} \hline \hline
**Sensitivity** & **Stretch** & **Space** & **Query time** & **Ref.** \\ \hline
1 & 4 & \(\widetilde{O}(n^{2})\) & \(O(1)\) & [10, 11, 37] \\
1 & \(1+(6k-3)(1+\varepsilon)\) & \(\widetilde{O}(n^{3/2}+k^{3}n^{1+1/k}\varepsilon^{-4})\) & \(O(1)\) & [8] \\ \hline
2 & 4 & \(\widetilde{O}(n^{2})\) & \(\widetilde{O}(1)\) & [32] \\ \hline
\(f=o(\frac{\log n}{\log\log n})\) & 4 & \(\widetilde{O}(n^{3-\alpha})\) & \(\widetilde{O}(f^{2}(2^{f}+n^{2-(1-\alpha)/f}))\) & [47] \\
\(f=o(\frac{\log n}{\log\log n})\) & \(4+\varepsilon\) & \(O(fn^{2+o(1)}(\log W)\,\varepsilon^{-f})\) & \(\widetilde{O}(f^{2}2^{f}+f^{7}\log\log W)\) & [23] \\ \hline
\(f=o(\log n)\) & 4 & \(O(n^{2+\alpha}M)\) & \(\widetilde{O}(f^{4}n^{2-\alpha}M+f^{2+\omega}nM)\) & [18] \\
\(f=o(\log n)\) & 4 & \(O(fn^{4})\) & \(f^{O(f)}\) & [34] \\
\(f=o(\log n)\) & \(1+(24k-6)(f+1)\) & \(O(n^{3/2+o(1)}+fkn^{1+1/k}\log(nW))\) & \(\widetilde{O}(f^{2}2^{f}+f^{5})\) & [24] \\ \hline \hline
\end{tabular}
\end{table}
Table 3: Properties of the \(f\)-FDO-\(ST\)s for undirected graphs obtained via Theorem 3 using all-pairs \(f\)-DSOs from the literature. The preprocessing time is omitted due to space reasons. \(W\) denotes the maximum edge weight for graphs with arbitrary positive weights, \(M\) is the maximum edge weight for integer-weighted graphs. The parameter \(k\geqslant 1\) is a positive integer, \(\varepsilon>0\) a positive real, \(\alpha\in[0,1]\) is a real number in the unit interval, and \(\omega<2.37286\) denotes the matrix multiplication exponent.
### Related Work on Fault-Tolerant Diameter Oracles
Fault-tolerant diameter oracles were introduced by Henzinger, Lincoln, Neumann, and Vassilevska Williams [40]. They showed that for a single failure in unweighted directed graphs, one can compute in time \(\widetilde{O}(mn+n^{1.5}\sqrt{Dm/\varepsilon}\,)\), where \(\varepsilon\in(0,1]\) and \(D\) is the diameter of the graph, a 1-FDO with \(1+\varepsilon\) stretch, \(O(m)\) space, and constant query time. Bilo, Cohen, Friedrich, and Schirneck [17] showed that one can improve the preprocessing time to \(\widetilde{O}(mn+n^{2}/\varepsilon)\), which is nearly optimal under certain conditional hardness assumptions for combinatorial algorithms (see [40]). They also showed that fast matrix multiplication reduces the preprocessing time for dense graphs to \(\widetilde{O}(n^{2.5794}+n^{2}/\varepsilon)\).
Bilo, Choudhary, Cohen, Friedrich, and Schirneck [15] addressed the problem of computing 1-FDOs with \(o(m)\) space. They showed that for unweighted directed graphs with diameter \(D=\omega(n^{5/6})\), there is a 1-FDO with \(\widetilde{O}(n)\) space, \(1+\frac{n^{5/6}}{D}=1+o(1)\) stretch, and \(O(1)\) query time. It has a preprocessing time of \(O(mn)\). In the same work it was also shown that for graphs with diameter \(D=\omega((n^{4/3}\log n)/(\varepsilon\sqrt{m}\,))\) and any \(\varepsilon>0\), there is a \((1+\varepsilon)\)-stretch 1-FDO, with preprocessing time \(O(mn)\), space \(o(m)\), and constant query time.
For _undirected_ graphs the space requirement can be reduced. There is a folklore construction that combines the DSO by Bernstein and Karger [11] with the observation that in undirected graphs the eccentricity of an arbitrary vertex is a 2-approximation of the diameter. This results in a 1-FDO with stretch 2 and constant query time that takes only \(O(n)\) space; details can be found in [17, 40].
For \(f>1\) edge failures in undirected graphs with non-negative edge weights, Bilo et al. [17] presented an \(f\)-FDO with \((f+2)\) stretch, \(O(f^{2}\log^{2}n)\) query time, \(\widetilde{O}(fn)\) space, and \(\widetilde{O}(fm)\) preprocessing time. A lower bound in that work showed that any \(f\)-FDO with finite stretch must have \(\Omega(fn)\) space, nearly matching their construction.
We are not aware of any \(O(n)\)-sized, constant-stretch FDOs for _directed_ graphs with arbitrary diameter in the literature prior to this work, not even for sensitivity \(f=1\). Also, no non-trivial \(f\)-FDOs with \(o(f)\) stretch were known. To the best of our knowledge, we are the first to study the problem of general \(f\)-FDO-\(ST\)s with \(S,T\neq V\).
We now discuss the known information-theoretic lower bounds for FDOs. Bilo, Cohen, Friedrich, and Schirneck [17] showed that any FDO with stretch \(\sigma<3/2\) for undirected unweighted graphs requires \(\Omega(m)\) bits of space, even for \(f=1\). They also extended the same lower bound of \(\Omega(m)\) bits to edge-weighted graphs and \(\sigma<2\). Bilo, Choudhary, Cohen, Friedrich, and Schirneck [15] extended this result to directed graphs. In particular, they showed that for directed unweighted graphs with diameter \(D=O(\sqrt{n}/m)\), any FDO with stretch better than \(\left(\frac{3}{2}-\frac{1}{D}\right)\) requires \(\Omega(m)\) bits of space. They further proved that for directed graphs any \(f\)-FDO requires \(\Omega(2^{f/2}n)\) bits of space, as long as \(2^{f/2}=O(n)\).

\begin{table}
\begin{tabular}{c c c c c c} \hline \hline
**Sensitivity** & **Stretch** & **Space** & **Query time** & **Preprocessing Time** & **Ref.** \\ \hline
1 & 7 & \(\widetilde{O}(n^{3/2})\) & \(\widetilde{O}(1)\) & \(\widetilde{O}(mn^{1/2}+n^{2})\) & [16, 38] \\
1 & 7 & \(\widetilde{O}(n^{3/2}M^{1/2})\) & \(\widetilde{O}(1)\) & \(\widetilde{O}(n^{\omega}M)\) & [16] \\
1 & \(7+\varepsilon\) & \(\widetilde{O}(n(\log W)\,\varepsilon^{-1})\) & \(O(\log\log_{1+\varepsilon}(nW))\) & \(\mathsf{poly}(n)\) & [5, 8, 13] \\
1 & 12 & \(O(n)\) & \(O(1)\) & \(\widetilde{O}(mn)\) & [13] \\ \hline
\(f\geqslant 1\) & \(10f+7\) & \(\widetilde{O}(fn)\) & \(\widetilde{O}(f^{3})\) & \(\widetilde{O}(fm)\) & [14] \\ \hline \hline
\end{tabular}
\end{table}
Table 5: Properties of the fault-tolerant \(ST\)-diameter oracles (\(f\)-FDO-\(ST\)) obtained via the reduction in Theorem 5 using single-source distance sensitivity oracles (\(f\)-DSOs) from the literature. \(W\) denotes the maximum edge weight for graphs with arbitrary positive weights, \(M\) is the maximum edge weight for integer-weighted graphs. The parameter \(\varepsilon>0\) is a positive real and \(\omega<2.37286\) denotes the matrix multiplication exponent.
### All-Pairs Distance Sensitivity Oracles
The first distance sensitivity oracle was designed in the context of directed graphs [29]. It maintained exact distances and was capable of handling a single edge failure. The space requirement of this oracle is \(O(n^{2}\log n)\) and its query time is \(O(\log n)\). This was later generalized to handle a single vertex or edge failure in [30]. Demetrescu, Thorup, Chowdhury, and Ramachandran [30] presented an exact 1-sensitive distance oracle of size \(O(n^{2}\log n)\), \(O(1)\) query time, and \(\widetilde{O}(mn^{2})\) preprocessing time. Later, in two consecutive papers, Bernstein and Karger improved the preprocessing time (while keeping the space and query time unchanged), first to \(O(n^{2}\sqrt{m})\) in [10] and then to \(\widetilde{O}(mn)\) in [11]. Baswana and Khanna [8] considered approximate 1-DSOs for unweighted graphs. More precisely, they presented a data structure of size \(O(k^{5}n^{1+1/k}\frac{\log^{3}n}{\varepsilon^{4}})\), \((2k-1)(1+\varepsilon)\) stretch, and \(O(k)\) query time. Duan and Pettie [32] considered the case of two failures (vertices or edges) with exact distances. The size of their oracle is \(O(n^{2}\log^{3}n)\), the query time is \(O(\log n)\), and the construction time is polynomial.
Using fast matrix multiplication, Weimann and Yuster [47] presented, for any parameter \(\alpha\in[0,1]\), a DSO that can handle up to \(O(\log n/\log\log n)\) edge or vertex failures with \(\widetilde{O}(n^{2-(1-\alpha)/f})\) query time and \(O(Mn^{\omega+1-\alpha})\) preprocessing time for directed graphs with integer weights in the range \([-M,M]\), where \(\omega<2.373\) is the matrix multiplication exponent. In [35], Grandoni and Vassilevska Williams presented a distance sensitivity oracle with subcubic \(\widetilde{O}(Mn^{\omega+1/2}+Mn^{\omega+\alpha(4-\omega)})\) preprocessing time and sublinear \(\widetilde{O}(n^{1-\alpha})\) query time. Van den Brand and Saranurak [18] presented a distance sensitivity oracle that can handle \(f\geqslant\log n\) updates (where an update is an edge insertion or deletion), with \(\widetilde{O}(Mn^{\omega+(3-\omega)\mu})\) preprocessing time, \(\widetilde{O}(Mn^{2-\mu}f^{2}+Mnf^{\omega})\) update time, and \(\widetilde{O}(Mn^{2-\mu}f+Mnf^{2})\) query time, where the parameter \(\mu\in[0,1]\) can be chosen. Chechik and Cohen [22] presented a 1-DSO with subcubic \(\widetilde{O}(Mn^{2.873})\) preprocessing time and \(\widetilde{O}(1)\) query time. This was improved by Ren [42] and later by Gu and Ren [37], who obtained a 1-DSO with \(\widetilde{O}(Mn^{2.5794})\) preprocessing time and constant query time. Recently, Duan and Ren [34] presented an exact \(f\)-DSO with \(O(fn^{4})\) space, \(f^{O(f)}\) query time, and \(n^{O(f)}\) preprocessing time.

In Table 6 we summarize several of the above \(f\)-DSOs for undirected graphs.

\begin{table}
\begin{tabular}{c c c c c c} \hline \hline
**Sensitivity** & **Stretch** & **Space** & **Query time** & **Preprocessing Time** & **Ref.** \\ \hline
1 & 1 & \(\widetilde{O}(n^{2})\) & \(O(1)\) & \(\widetilde{O}(mn)\) & [10, 11] \\
1 & 1 & \(\widetilde{O}(n^{2})\) & \(O(1)\) & \(\widetilde{O}(n^{2.5794}M)\) & [37] \\
1 & \((2k-1)(1+\varepsilon)\) & \(\widetilde{O}(k^{5}n^{1+1/k}\varepsilon^{-4})\) & \(O(k)\) & \(O(kmn^{1+1/k})\) & [8] \\ \hline
2 & 1 & \(\widetilde{O}(n^{2})\) & \(\widetilde{O}(1)\) & \(\mathsf{poly}(n)\) & [32] \\ \hline
\(f=o(\frac{\log n}{\log\log n})\) & 1 & \(\widetilde{O}(n^{2-\alpha})\) & \(\widetilde{O}(n^{2-(1-\alpha)/f})\) & \(O(n^{\omega+1-\alpha}M)\) & [47] \\
\(f=o(\frac{\log n}{\log\log n})\) & \(1+\varepsilon\) & \(O(fn^{2+o(1)}(\log W)\,\varepsilon^{-f})\) & \(\widetilde{O}(f^{5}\log\log W)\) & \(O(fn^{\omega+o(1)}(\log W)\,\varepsilon^{-f})\) & [23] \\ \hline
\(f>1\) & 1 & \(O(fn^{4})\) & \(f^{O(f)}\) & \(n^{O(f)}\) & [34] \\
\(f>1\) & 1 & \(O(n^{2+\alpha}M)\) & \(\widetilde{O}(f^{2}n^{2-\alpha}M+f^{\omega}nM)\) & \(\widetilde{O}(n^{\omega+(3-\omega)\alpha}M)\) & [18] \\
\(f>1\) & \((8k-2)(f+1)\) & \(O\left(fkn^{1+1/k}\log(nW)\right)\) & \(\widetilde{O}(f^{3})\) & \(\mathsf{poly}(n)\) & [24] \\ \hline \hline
\end{tabular}
\end{table}
Table 6: Existing \(f\)-sensitive all-pairs distance oracles for undirected graphs. The parameter \(k\geqslant 1\) is a positive integer, \(\varepsilon>0\) a positive real, \(\alpha\in[0,1]\) is a real number in the unit interval, and \(\omega<2.37286\) denotes the matrix multiplication exponent.
### Related Work on Single-Source Distance Sensitivity Oracles
First, we discuss undirected graphs. Baswana and Khanna [8] showed that unweighted undirected graphs can be preprocessed in \(O(m\sqrt{n/\varepsilon}\,)\) time to compute a \((1+\varepsilon)\)-stretch single-source edge/vertex fault-tolerant distance oracle of size \(O(n\log n+n/\varepsilon^{3})\) and constant query time. For weighted graphs, they showed the construction of an \(O(n\log n)\)-size oracle which can report 3-approximate distances under a single failure in \(O(1)\) time. Bilo, Guala, Leucci, and Proietti [13] showed that for a single edge failure in weighted graphs we can compute an \(O(n)\)-size oracle with stretch 2 and constant query time. Also, a construction is provided that has \(1+\varepsilon\) stretch, \(O\big{(}\varepsilon^{-1}n\log(1/\varepsilon)\big{)}\) size, and \(O\big{(}\varepsilon^{-1}\log n\log(1/\varepsilon)\big{)}\) query time. All the results stated so far are for a single edge or vertex failure only. For multiple failures, Bilo, Guala, Leucci, and Proietti [14] gave a construction of size \(O\big{(}fn\log^{2}n\big{)}\), computable in \(\widetilde{O}(mf)\) time, that reports \((2f+1)\)-stretched distances in \(O(f^{2}\log^{2}n)\) time.
Bilo, Cohen, Friedrich, and Schirneck [16] presented several additional single-source DSOs. For undirected unweighted graphs, they presented a single-source DSO that has size \(O(n^{3/2})\), query time \(\widetilde{O}(1)\) and \(\widetilde{O}(m\sqrt{n}+n^{2})\) preprocessing time. For graphs with integer edge weights in the range \([1,M]\) and using fast matrix multiplication, they presented a single-source DSO with \(O(M^{1/2}n^{3/2})\) space, \(\widetilde{O}(1)\) query time and \(\widetilde{O}(Mn^{\omega})\) preprocessing time. For sparse graphs with \(m=O(M^{3/7}n^{7/4})\) they presented a single-source DSO with the same \(O(M^{1/2}n^{3/2})\) size, \(\widetilde{O}(1)\) query time, and subquadratic \(\widetilde{O}(M^{7/8}m^{1/2}n^{11/8})\) preprocessing time.
For directed graphs, Baswana, Choudhary, Hussain, and Roditty [5] showed that we can preprocess directed weighted graphs with edge weights in the range \([1,W]\) to compute an oracle of \(\widetilde{O}(\varepsilon^{-1}n\log W)\) size that reports \((1+\varepsilon)\)-approximate distances under a single edge/vertex failure in \(\widetilde{O}(\log\log_{1+\varepsilon}(nW))\) time. Gupta and Singh [38] designed exact distance oracles of \(\widetilde{O}(n^{3/2})\) size that, for a single edge/vertex failure in directed/undirected unweighted graphs, report distances in \(\widetilde{O}(1)\) time.
In Table 7 we summarize several of the above \(f\)-DSOs for undirected graphs.
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline
**Sensitivity** & **Stretch** & **Space** & **Query time** & **Preprocessing Time** & **Ref.** \\ \hline
1 & 1 & \(\widetilde{O}(n^{3/2})\) & \(\widetilde{O}(1)\) & \(\widetilde{O}(mn^{1/2}+n^{2})\) & [16, 38] \\
1 & 1 & \(\widetilde{O}(n^{3/2}M^{1/2})\) & \(\widetilde{O}(1)\) & \(\widetilde{O}(n^{\omega}M)\) & [16] \\
1 & \(1+\varepsilon\) & \(\widetilde{O}(n(\log W)\,\varepsilon^{-1})\) & \(O(\log\log_{1+\varepsilon}(nW))\) & \(\mathsf{poly}(n)\) & [5, 8, 13] \\
1 & 2 & \(O(n)\) & \(O(1)\) & \(\widetilde{O}(mn)\) & [13] \\ \hline
\(f\geqslant 1\) & \(2f+1\) & \(\widetilde{O}(fn)\) & \(\widetilde{O}(f^{2})\) & \(\widetilde{O}(fm)\) & [14] \\ \hline \hline
\end{tabular}
\end{table}
Table 7: Existing \(f\)-sensitive single-source distance oracles for undirected graphs. \(W\) denotes the maximum edge weight for graphs with arbitrary positive weights, \(M\) is the maximum edge weight for integer-weighted graphs. The parameter \(\varepsilon>0\) is a positive real and \(\omega<2.37286\) denotes the matrix multiplication exponent.
## 2 Preliminaries
For a given graph \(G=(V,E)\), possibly with positive edge weights, we denote by \(d_{G}(u,v)\) the distance in \(G\) from vertex \(u\in V\) to vertex \(v\in V\). Given two non-empty subsets \(S,T\subseteq V\), the _\(ST\)-diameter_ of \(G\) is defined as \(\operatorname{diam}(G,S,T)=\max_{s\in S,t\in T}d_{G}(s,t)\). With a little abuse of notation, when \(S=\{s\}\) (resp., \(T=\{t\}\)), we also use \(\operatorname{diam}(G,s,T)\) (resp., \(\operatorname{diam}(G,S,t)\)) as a shorthand of \(\operatorname{diam}(G,\{s\},T)\) (resp., \(\operatorname{diam}(G,S,\{t\})\)) for the \(sT\)-diameter (resp., \(St\)-diameter). Moreover, if \(S=T=V\), we use \(\operatorname{diam}(G)\) instead of \(\operatorname{diam}(G,V,V)\).
For a given set \(F\subseteq E\), we denote by \(G-F\) the graph obtained from \(G\) by removing all the edges of \(F\). If \(H\) is a subgraph of \(G\), we use \(V(H)\) and \(E(H)\) for the vertices and edges of \(H\), respectively. An _\(f\)-edge fault-tolerant distance sensitivity oracle_ (\(f\)-DSO) with _stretch_ \(\sigma\geqslant 1\) is a data structure that answers queries \((u,v,F)\) with \(u,v\in V\) and \(F\subseteq E\) with \(|F|\leqslant f\). It returns an estimate \(\widehat{d}_{G-F}(u,v)\) of the distance from \(u\) to \(v\) in \(G-F\) such that \(d_{G-F}(u,v)\leqslant\widehat{d}_{G-F}(u,v)\leqslant\sigma\cdot d_{G-F}(u,v)\). An _\(f\)-edge fault-tolerant \(ST\)-diameter oracle_ (\(f\)-FDO-\(ST\)) with stretch \(\sigma\) returns, upon query \(F\subseteq E\) with \(|F|\leqslant f\), an estimate \(\widehat{D}=\widehat{D}(F,S,T)\) of the \(ST\)-diameter of \(G-F\) such that \(\operatorname{diam}(G-F,S,T)\leqslant\widehat{D}\leqslant\sigma\cdot \operatorname{diam}(G-F,S,T)\). If \(S=\{s\}\) is a singleton or \(S=T=V\) are both the whole vertex set, we abbreviate such oracles as \(f\)-FDO-\(sT\) and \(f\)-FDO, respectively.
## 3 ST-Diameter Oracles
We start by showing how to use distance sensitivity oracles to design data structures for the fault-tolerant \(ST\)-diameter, i.e., the \(ST\)-diameter of \(G-F\) after a set of edges \(F\subseteq E\) failed. The maximum number \(f\) of supported failures is called the sensitivity of the data structure. The result is formally stated in Theorem 3.
In the following, we assume that the shortest paths in \(G\) are made unique. This way, we can identify a shortest path with its endpoints, which enables savings both in the time efficiency of the preprocessing and the space efficiency of the resulting data structure. In particular, it allows for a subquadratic (in \(n\)) space overhead over the underlying \(f\)-DSO. However, the precise way of making the paths unique influences the nature of the preprocessing. As discussed in Section 1, one can ensure unique shortest paths in a randomized fashion by slightly perturbing the edge weights, see [41], or by using a more complex deterministic method, also known as _lexicographic perturbation_ [19, 21, 39]. While the first method increases the preprocessing only by a constant factor, it makes the preprocessing procedure randomized. Lexicographic perturbation, in turn, increases the time by an additive \(O(mn+n^{2}\log^{2}n)\) term [19].
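For intuition, the randomized variant can be sketched in a few lines; this is our own illustration with hypothetical names, not the construction of [41]:

```python
import random

def perturb_weights(weights: dict, eps: float) -> dict:
    """Add an independent random value from (0, eps) to every edge weight.
    If n * eps is smaller than the minimum gap between distinct path lengths,
    all original shortest paths remain shortest and ties between equal-length
    paths are broken uniquely with high probability."""
    return {edge: w + random.uniform(0.0, eps) for edge, w in weights.items()}

# Example: two parallel s-t paths of equal length obtain distinct lengths.
weights = {("s", "a"): 1.0, ("a", "t"): 1.0, ("s", "b"): 1.0, ("b", "t"): 1.0}
print(perturb_weights(weights, eps=1e-9))
```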
Let \(\pi_{u,v}\) denote the (unique) shortest path in \(G\) from \(u\) to \(v\). Fix a set \(F\subseteq E\) of at most \(f\) edges and recall that we use \(V(F)\) to denote the set of endpoints of edges in \(F\). Our \(f\)-FDO-\(ST\) uses a data structure to map \(S\) and \(T\) into two suitable subsets \(S^{\prime}\) and \(T^{\prime}\) of \(V(F)\), respectively. A vertex \(v\in V(F)\) belongs to \(S^{\prime}\) (resp., \(T^{\prime}\)) if there exists a shortest path \(\pi_{s,t}\) from some \(s\in S\) to some \(t\in T\) such that \(v\) is a vertex on \(\pi_{s,t}\) and the subpath \(\pi_{s,v}\) (resp., \(\pi_{v,t}\)) of \(\pi_{s,t}\) from \(s\) to \(v\) (resp., from \(v\) to \(t\)) contains no vertex of \(V(F)\) other than \(v\). Note that \(\pi_{s,v}\) (resp., \(\pi_{v,t}\)) is completely contained in \(G-F\), whence \(d_{G-F}(s,v)=d_{G}(s,v)\) (analogously for \(d_{G-F}(v,t)\)). The sizes of \(S^{\prime},T^{\prime}\subseteq V(F)\) are in \(O(f)\).
### Query Algorithm
Before describing the data structure, we present the query algorithm. Let \(\mathcal{D}\) denote the \(f\)-DSO with stretch \(\sigma\geqslant 1\) that is assumed in Theorem 3. Given the query \(F\), our diameter oracle computes the two sets \(S^{\prime}\) and \(T^{\prime}\). Next, for every two vertices \(u\) and \(v\) such that \(u\in S^{\prime}\) and \(v\in T^{\prime}\), it queries \(\mathcal{D}\) with the triple \((u,v,F)\) to obtain a \(\sigma\)-approximation of \(d_{G-F}(u,v)\). The \(f\)-FDO-\(ST\) returns the value \(\widehat{D}=\operatorname{diam}(G,S,T)+\max_{(u,v)\in S^{\prime}\times T^{ \prime}}\mathcal{D}(u,v,F)\).
Given \(S^{\prime}\) and \(T^{\prime}\), the time needed to compute \(\widehat{D}\) is \(O(f^{2}\mathtt{Q})\), where \(\mathtt{Q}\) is the query time of the \(f\)-DSO \(\mathcal{D}\). The value \(\operatorname{diam}(G,S,T)\) can be precomputed.
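In pseudocode-style Python, the query can be sketched as follows; `compute_S_prime`, `compute_T_prime`, and `dso` are hypothetical stand-ins for the data structure of Section 3.2 and the assumed \(f\)-DSO \(\mathcal{D}\):

```python
def query(F, diam_G_ST, compute_S_prime, compute_T_prime, dso):
    """Return D_hat = diam(G,S,T) + max over S' x T' of the DSO estimates.

    diam_G_ST is the precomputed ST-diameter of the fault-free graph G;
    dso(u, v, F) returns a sigma-approximation of d_{G-F}(u, v)."""
    S_prime = compute_S_prime(F)   # O(f) vertices of V(F)
    T_prime = compute_T_prime(F)
    best = 0
    for u in S_prime:              # O(f^2) pairs, one DSO query each
        for v in T_prime:
            best = max(best, dso(u, v, F))
    return diam_G_ST + best
```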
**Lemma.** The \(f\)-FDO-\(ST\) has a stretch of \(1+3\sigma\).
Proof.: Let \(s\in S\) and \(t\in T\) be two arbitrary vertices. We first show that \(d_{G-F}(s,t)\leqslant\widehat{D}\), that is, the returned value never underestimates the \(ST\)-diameter of \(G-F\). We only need to prove the case in which some of the failing edges in \(F\) belong to \(\pi_{s,t}\) as otherwise \(d_{G-F}(s,t)=d_{G}(s,t)\leqslant\operatorname{diam}(G,S,T)\leqslant\widehat{D}\). Thus, let \(x_{s}\) (resp., \(x_{t}\)) be the vertex of \(V(F)\) that is closest to \(s\) (resp., \(t\)) in \(\pi_{s,t}\). By definition of \(S^{\prime},T^{\prime}\), we have \(x_{s}\in S^{\prime}\) and \(x_{t}\in T^{\prime}\) and thus \(d_{G-F}(s,x_{s})=d_{G}(s,x_{s})\) and \(d_{G-F}(x_{t},t)=d_{G}(x_{t},t)\). Moreover, it holds that \(d_{G-F}(s,x_{s})+d_{G-F}(x_{t},t)=d_{G}(s,x_{s})+d_{G}(x_{t},t)\leqslant \operatorname{diam}(G,S,T)\) as \(\pi_{s,x_{s}}\) and \(\pi_{x_{t},t}\) are vertex-disjoint. Using the triangle inequality twice and the fact that \(\max_{(u,v)\in S^{\prime}\times T^{\prime}}\mathcal{D}(u,v,F)\) never underestimates \(\operatorname{diam}(G{-}F,S^{\prime},T^{\prime})\), we get
\[d_{G-F}(s,t) \leqslant d_{G-F}(s,x_{s})+d_{G-F}(x_{s},x_{t})+d_{G-F}(x_{t},t)\] \[\leqslant\operatorname{diam}(G,S,T)+\operatorname{diam}(G{-}F,S ^{\prime},T^{\prime})\leqslant\widehat{D}.\]
We now prove that \(\widehat{D}\leqslant(1+3\sigma)\cdot\operatorname{diam}(G{-}F,S,T)\). Let \(u\in S^{\prime}\) and \(v\in T^{\prime}\) be arbitrary. There are \(s\in S\) and \(t\in T\) such that \(d_{G-F}(s,u),d_{G-F}(v,t)\leqslant\operatorname{diam}(G,S,T)\). We arrive at
\[\mathcal{D}(u,v,F) \leqslant\sigma\hskip 0.853583ptd_{G-F}(u,v)\leqslant\sigma \hskip 0.853583pt\big{(}d_{G-F}(u,s)+d_{G-F}(s,t)+d_{G-F}(t,v)\big{)}\] \[\leqslant\sigma\hskip 0.853583pt\big{(}\operatorname{diam}(G,S,T) +\operatorname{diam}(G{-}F,S,T)+\operatorname{diam}(G,S,T)\big{)}\] \[\leqslant 3\sigma\operatorname{diam}(G{-}F,S,T),\]
thus \(\widehat{D}=\operatorname{diam}(G,S,T)+\max_{u\in S^{\prime},v\in T^{\prime} }\mathcal{D}(u,v,F)\leqslant(1+3\sigma)\operatorname{diam}(G-F,S,T)\).
### Data Structure for the Sets \(S^{\prime}\) and \(T^{\prime}\) for Large Sensitivity
Recall that, given the failure set \(F\), the set \(S^{\prime}\) contains all \(v\in V(F)\) such that there are \(s\in S\) and \(t\in T\) for which \(v\) is the vertex of \(V(F)\cap V(\pi_{s,t})\) that is closest to \(s\); the definition of \(T^{\prime}\) is analogous. We now describe the data structure that computes the sets \(S^{\prime}\) and \(T^{\prime}\), focusing on \(S^{\prime}\) since the case of \(T^{\prime}\) follows in the same fashion.
The construction algorithm depends on the sensitivity \(f\). Suppose first that \(f=\Omega(\log n)\). For each vertex \(v\in V\), the data structure stores the shortest-path tree \(T_{v}\) of \(G\) rooted at \(v\) and marks some of its vertices. Namely, all \(s\in S\) are marked for which there is a \(t\in T\) such that \(v\) lies on the path \(\pi_{s,t}\). For every two vertices \(s\in S\) and \(t\in T\), \(\pi_{s,t}\) contains \(v\) if and only if \(d_{G}(s,t)=d_{G}(s,v)+d_{G}(v,t)\); we use here that the shortest paths are unique. It thus suffices to compute the all-pairs distances in \(G\) in \(O(mn+n^{2}\log n)\) time (see Footnote 2) and use them to mark the vertices of \(T_{v}\) for _all_ \(v\) with the obvious \(O(n|S||T|)\)-time algorithm.
Footnote 2: The time needed for this step reduces to \(O(mn)\) in case \(G\) is unweighted or has only small integer or even floating point weights (in exponent-mantissa representation) using Thorup’s algorithm [46].
Additionally, each vertex \(u\) of \(T_{v}\) is annotated with the value \(count_{v}(u)\), the number of marked vertices in the subtree \((T_{v})_{u}\) rooted at \(u\). For a fixed tree \(T_{v}\), all values \(count_{v}(u)\) are computable in \(O(n)\) time in a bottom-up fashion. Finally, we store, for each \(T_{v}\), a data structure that supports least common ancestor (LCA) queries in constant time. Such structures can be built in time and space linear in the size of the tree [9]. The time needed to construct the data structure is \(O(mn+n^{2}\log n+n|S||T|)\) and the space is \(O(n^{2})\).
To answer a query \(F\), the algorithm scans all the vertices \(v\in V(F)\) and decides which of them to include in \(S^{\prime}\). The graph \(T_{v}-F\) is a collection of rooted trees. (Possibly, some of the trees are degenerate and consist of a single vertex.) We observe that \(v\in S^{\prime}\) if and only if \(T_{v}-F\) contains a marked vertex that is still reachable from \(v\). To check this condition, the algorithm computes the set \(F_{0}\) of all the edges \(\{u,w\}\in F\) that are contained in \(T_{v}\). This is the case if and only if the LCA of \(u\) and \(w\) in \(T_{v}\) is either \(u\) or \(w\).
Next, we define a notion of domination for edges in \(F_{0}\). We say that an edge \(\{u,w\}\in F_{0}\), where \(u\) is the parent of \(w\) in \(T_{v}\), is _dominated_ by another edge \(\{a,b\}\in F_{0}\), where \(a\) is the parent of \(b\) in \(T_{v}\), if \(\{u,w\}\) is in the subtree of \(T_{v}\) rooted at \(b\). This is equivalent to \(b\) being the LCA of \(b\) and \(u\). The query algorithm removes all dominated edges from \(F_{0}\), which can be done in \(O(|F_{0}|^{2})=O(f^{2})\) time.
Recall that \(count_{v}(v)\) is the overall number of marked vertices in \(T_{v}\). A vertex of \(T_{v}-F\) is reachable from \(v\) if and only if it lies in the same connected component as \(v\). Thus, there is a marked vertex reachable from \(v\) if and only if \(count_{v}(v)\) is strictly larger than the number of marked vertices contained in those components of \(T_{v}-F\) that do _not_ contain \(v\). Indeed, the difference between those two values is exactly the _number_ of marked vertices reachable from \(v\). Each connected component of \(T_{v}-F\) that does not contain \(v\) is a tree \(T^{\prime}\) rooted at the lower endpoint \(w\) of some edge \(\{u,w\}\in F_{0}\), where \(u\) is the parent of \(w\) in \(T_{v}\). Compared to the full subtree \((T_{v})_{w}\) rooted at \(w\), \(T^{\prime}\) is missing those subtrees "further down" that are rooted at some vertex \(b\) whose parent \(a\) is a vertex of \(T^{\prime}\). Those are exactly the subtrees below the edges \(\{a,b\}\in F_{0}\) that are dominated by \(\{u,w\}\). Accordingly, the value \(count_{v}(w)\) counts the marked vertices in \(T^{\prime}\) and additionally those in the subtrees rooted at the vertices \(b\). By removing all dominated edges from \(F_{0}\), we avoid any double counting and ensure that \(count_{v}(v)-\sum_{\{u,w\}\in F_{0}}count_{v}(w)\) is indeed the quantity we are interested in. It can be computed in time \(O(f)\) for each \(v\).
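The per-vertex test is summarized in the following Python sketch. It assumes that `lca(a, b)` answers constant-time LCA queries in \(T_{v}\), that `parent` maps each vertex to its parent in \(T_{v}\), and that `count[u]` stores \(count_{v}(u)\); these names are ours and not part of the formal construction. The domination filter below is the \(O(|F_{0}|^{2})\)-time step described above.

```python
def contributes_to_S_prime(v, F, lca, parent, count):
    """Decide whether a marked vertex of T_v is reachable from v in T_v - F."""
    # Keep only failing edges that are tree edges of T_v, oriented (parent, child).
    F0 = []
    for (a, b) in F:
        if lca(a, b) == a:
            F0.append((a, b))
        elif lca(a, b) == b:
            F0.append((b, a))

    # (u, w) is dominated by (a, b) iff it lies in the subtree rooted at b,
    # i.e. b is (an ancestor of or equal to) the LCA of b and u.
    def dominated(e, e2):
        (u, w), (a, b) = e, e2
        return e != e2 and lca(b, u) == b

    F0 = [e for e in F0 if not any(dominated(e, e2) for e2 in F0)]

    # Marked vertices cut off from v are exactly those in the subtrees
    # rooted at the lower endpoints w of the non-dominated edges.
    unreachable = sum(count[w] for (_, w) in F0)
    return count[v] - unreachable > 0
```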
### Small Sensitivity
We now modify the data structure in the case where the sensitivity \(f=o(\log n)\) is sublogarithmic. If so, the information of all the trees \(T_{v}\) can be stored in a more compact way. For every vertex \(v\in V\), we define a new representation \(\mathcal{T}_{v}\) of the tree \(T_{v}\) by first removing unnecessary parts and then replacing long paths with single edges. This corresponds to the two steps of the compression described below. For the first one, we need the following definition. We say a subtree \(\mathcal{T}_{v}\) of \(T_{v}\) preserves the _source-to-leaf reachability_ if, for every set \(F\subseteq E\) of up to \(f\) failing edges, there is a marked vertex of \(T_{v}\) that is reachable from the source \(v\) in \(T_{v}-F\) if and only if there is a leaf of \(\mathcal{T}_{v}\) that is reachable from \(v\) in \(\mathcal{T}_{v}-F\).
_The first compression step._ We first describe how to preserve the source-to-leaf reachability. We select a set \(\mathcal{L}_{v}\subseteq S\) of at most \(2^{f}\) marked vertices and set \(\mathcal{T}_{v}\) as the smallest subtree of \(T_{v}\) that contains \(v\) and \(\mathcal{L}_{v}\). We say that a marked vertex \(s\) of \(T_{v}\) is _relevant_ if there is no marked vertex \(s^{\prime}\neq s\) that is contained in the path from \(v\) to \(s\) in \(T_{v}\).
We compute \(\mathcal{L}_{v}\) as follows. We construct a DAG \(G_{v}\) that is obtained from a copy of \(T_{v}\) in which each edge \((u,u^{\prime})\), with \(u\) being the parent of \(u^{\prime}\) in \(T_{v}\), is directed from \(u\) to \(u^{\prime}\). The DAG is augmented with a dummy sink vertex \(x\) that contains an incoming directed edge from each relevant vertex \(s\) of \(T_{v}\). We then run the algorithm of Baswana, Choudhary, and Roditty [6] to compute a subgraph \(H_{v}\) of \(G_{v}\) such that (i) the in-degree of each vertex of \(H_{v}\) is at most \(2^{f}\) and (ii) for every possible set \(F\) of at most \(f\) edge failures, each vertex \(u\) is reachable from \(v\) in the graph \(G_{v}-F\) iff \(u\) is reachable from \(v\) in \(H_{v}-F\).
The set \(\mathcal{L}_{v}\) of marked vertices corresponds to the tails of the edges in \(H_{v}\) that enter the sink \(x\). As \(x\) has in-degree of at most \(2^{f}\) in \(H_{v}\), the size of \(\mathcal{L}_{v}\) is \(O(2^{f})\). Moreover, \(\mathcal{L}_{v}\) is the
set of leaves of \(\mathcal{T}_{v}\). The following lemma proves the correctness of our selection algorithm.
For every \(F\subseteq E(G)\), with \(|F|\leqslant f\), there is a marked vertex of \(T_{v}\) that is reachable from \(v\) in \(T_{v}-F\) iff there is a vertex of \(\mathcal{L}_{v}\) that is reachable from \(v\) in \(\mathcal{T}_{v}-F\).
Proof.: Fix a set \(F\) of at most \(f\) failing edges of \(G\). As \(\mathcal{T}_{v}\) is a subtree of \(T_{v}\), if there is a vertex in \(\mathcal{L}_{v}\) that is reachable from \(v\) in \(\mathcal{T}_{v}-F\), then the same marked vertex is reachable from \(v\) in \(T_{v}-F\). To prove the other direction, let \(X\) be the set of all marked vertices that are reachable from \(v\) in \(T_{v}-F\). We prove that \(X\cap\mathcal{L}_{v}\neq\emptyset\). Let \(s\in X\) be a marked vertex that is reachable from \(v\) in \(T_{v}-F\). Let \(s^{*}\) be the marked vertex closest to \(v\) on the path from \(v\) to \(s\) in \(T_{v}\) (possibly, \(s^{*}=s\)). We have that \(s^{*}\) is relevant and is reachable from \(v\) in \(T_{v}-F\). This implies that the sink \(x\) is reachable from \(v\) in \(G_{v}-F\) via the path that goes through \(s^{*}\). As a consequence, \(x\) is also reachable in \(H_{v}-F\). Hence, there is a vertex in \(\mathcal{L}_{v}\) that is also reachable from \(v\) in \(\mathcal{T}_{v}-F\). Therefore, \(X\cap\mathcal{L}_{v}\neq\emptyset\).
_The second compression step._ After the first compression step, the tree \(\mathcal{T}_{v}\) contains at most \(2^{f}\) leaves. However, it might still be the case that the number of vertices of \(\mathcal{T}_{v}\) is large due to the presence of very long paths connecting two consecutive _branch vertices_, i.e., vertices of \(\mathcal{T}_{v}\) with two or more children. The second step of compressing \(\mathcal{T}_{v}\) allows us to represent long paths between consecutive branch vertices in a more compact way.
Let \(x\) and \(y\) be two consecutive branch vertices in \(\mathcal{T}_{v}\), i.e., \(x\) is an ancestor of \(y\) in \(\mathcal{T}_{v}\) and the internal vertices of the path \(P\) from \(x\) to \(y\) are not branch vertices. We say that \(P\) is _long_ if it contains at least \(\sqrt{n}\) edges. If the path \(P\) is long, we substitute the path \(P\) in \(\mathcal{T}_{v}\) with a _representative_ edge between \(x\) and \(y\) (so we also remove all the internal vertices of \(P\) from the tree) and we add the path \(P\) to the set \(\mathcal{P}\) of long paths. So, in every tree \(\mathcal{T}_{v}\), we replace every long path between two consecutive branch vertices with a representative edge. We observe that \(\mathcal{P}\) can be computed in \(O(n^{2})\) time. Moreover, we observe that \(\mathcal{P}\) contains \(O(n^{3/2})\) paths as each tree \(T_{v}\) contributes at most \(\sqrt{n}\) long paths.
Next, we use the algorithm given in [2] to hit all the long paths in \(\mathcal{P}\) with a set \(Z\) of \(O(\sqrt{n}\,\log n)\)_pivot_ vertices in \(O(|\mathcal{P}|\sqrt{n})=O(n^{2})\) time, where a path is _hit_ if we select a pivot vertex that belongs to the path. For each pivot \(z\in Z\), we store the shortest-path tree \(T_{z}\) of \(G\) rooted at \(z\). By construction, each long path \(P\in\mathcal{P}\) between two consecutive branch vertices \(x\) and \(y\) of a tree \(\mathcal{T}_{v}\) is contained in \(T_{z}\), for some \(z\in Z\) that hits \(P\); moreover, a vertex \(z\in Z\) that hits \(P\) is also the least-common-ancestor of \(x\) and \(y\) in \(T_{z}\).
The representative edge \((x,y)\) in \(\mathcal{T}_{v}\) stores a pointer to the tree \(T_{z}\) of any pivot \(z\) that hits \(P\) (ties can be arbitrarily broken). Clearly, after the second compression step, each tree \(\mathcal{T}_{v}\) contains \(O(2^{f}\sqrt{n}\,)\) vertices. Therefore, the overall size needed to store all the trees \(\mathcal{T}_{v}\) is \(O(2^{f}n^{3/2})\). Moreover, storing the trees \(T_{z}\) for all the pivots in \(Z\) requires \(O(n)\) space per tree, for a total of \(O(n^{3/2}\log n)\) space. Hence, the overall size of our data structure is \(O(n^{3/2}(2^{f}+\log n))\).
Now, given a set \(F\) of at most \(f\) failing edges, we describe how the query algorithm computes the set \(S^{\prime}\) in \(O(f^{2}2^{f})\) time. As before, for every \(v\in V(F)\), we need to understand whether \(v\) must be added to \(S^{\prime}\) or not. In the following, we fix \(v\in V(F)\) and explain how to check whether \(v\in S^{\prime}\) or not in \(O(f2^{f})\) time. We recall that \(v\) must be added to \(S^{\prime}\) iff there is a marked vertex in \(T_{v}-F\) that is still reachable from \(v\). By Lemma 3.2, this is equivalent to having a leaf of \(\mathcal{L}_{v}\) that is reachable from \(v\) in \(\mathcal{T}_{v}-F\).
We visit the tree \(\mathcal{T}_{v}\) and we remove from \(\mathcal{T}_{v}\) all edges that correspond to edges in \(F\). This can be easily done in \(O(f)\) time for each non-representative edge using least-common-ancestor queries. For the representative edges we proceed as follows. We consider all the representative
edges in \(\mathcal{T}_{v}\). Let \((x,y)\) be a representative edge of \(\mathcal{T}_{v}\) and let \(z\) be the pivot of the tree \(T_{z}\) that is associated with the edge \((x,y)\) in \(\mathcal{T}_{v}\). We remove \((x,y)\) from \(\mathcal{T}_{v}\) iff there is a failing edge in \(F\) that is contained in the path \(P\) in \(T_{z}\) from \(x\) to \(y\). We check whether \(P\) contains some edges of \(F\) in \(O(f)\) time as follows. We look at all the failing edges in \(F\) and, for each failing edge \((u,u^{\prime})\in F\), we check whether \((u,u^{\prime})\) is an edge of \(P\) using a constant number of _least-common-ancestor_ queries in the tree \(T_{z}\) (see Footnote 3). As each tree \(\mathcal{T}_{v}\) contains \(O(2^{f})\) representative edges and we need \(O(f)\) time to decide whether a representative edge can be removed from the tree, we need \(O(f2^{f})\) time to determine which representative edges need to be removed from \(\mathcal{T}_{v}\), for a fixed \(v\in V(F)\).
Footnote 3: We observe that \((u,u^{\prime})\) is on the path \(P\) iff one of the following two conditions hold: (i) the least-common-ancestor of \(u\) and \(x\) in \(T_{z}\) is \(u\) and the least-common-ancestor of \(u^{\prime}\) and \(x\) in \(T_{z}\) is \(u^{\prime}\); (ii) the least-common-ancestor of \(u\) and \(y\) in \(T_{z}\) is \(u\) and the least-common-ancestor of \(u^{\prime}\) and \(y\) in \(T_{z}\) is \(u^{\prime}\).
Once all edges of \(\mathcal{T}_{v}\) that are affected by \(F\) have been removed, it is enough to check whether there is a vertex of \(\mathcal{L}_{v}\) that is still reachable from \(v\). This can be done in \(O(f^{2})\) time per tree \(\mathcal{T}_{v}\) using the values \(count_{v}(u)\), as already discussed for the case in which \(f=\Omega(\log n)\). In particular, for every vertex \(u\) in \(\mathcal{T}_{v}\), the value \(count_{v}(u)\) is now equal to the number of vertices of \(\mathcal{L}_{v}\) that are contained in the subtree of \(\mathcal{T}_{v}\) rooted at \(u\).
## 4 Single-Source sT-Diameter Oracles
In the following theorem, we address the question of computing an \(sT\)-diameter oracle using a single-source DSO with source \(s\). We restate the relevant theorem below. Its proof uses similar ideas as those shown in Section 3, but the single-source setting allows for a better preprocessing time, space, and stretch.
Let \(G=(V,E)\) be an undirected graph with \(n\) vertices, \(m\) edges, and possibly positive edge weights. Let \(s\in V\) be a vertex and \(T\subseteq V\) a non-empty set. Given a single-source \(f\)-DSO for \(G\) with preprocessing time \(\mathtt{P}\), space \(\mathtt{S}\), query time \(\mathtt{Q}\), and stretch \(\sigma\), one can compute an \(f\)-FDO-\(sT\) for \(G\) with preprocessing time \(\mathtt{P}+O(m+n\log n)\), space \(\mathtt{S}+O(n)\), query time \(O(f^{2}+f\mathtt{Q})\), and stretch \(1+2\sigma\). For unweighted graphs, the preprocessing time can be improved to \(\mathtt{P}+O(m)\).
Proof.: Let \(\mathcal{D}\) denote the single-source \(f\)-DSO. The preprocessing algorithm for the \(f\)-FDO-\(sT\) first constructs \(\mathcal{D}\) with source \(s\). It also computes a shortest path tree \(T_{s}\) of \(G\) rooted at \(s\). Each node \(v\in V(T_{s})=V\) is annotated with a pointer to its parent node and its respective number in the pre-order and post-order traversal of \(T_{s}\). Similarly as above, the algorithm also computes the value \(count(v)\) for every \(v\), which is the number of descendants of \(v\) (including \(v\) itself) that are in \(T\). Finally, it stores the maximum distance \(C=\max_{t\in T}d_{G}(s,t)\) from the root among the vertices in the set \(T\). The preprocessing takes total time \(\mathtt{P}+O(m+n\log n)\) in general weighted graphs and, again, can be reduced to \(\mathtt{P}+O(m)\) for certain classes of weights [46]. Storing the oracle and the tree takes \(\mathtt{S}+O(n)\) space.
For the query, consider a set \(F\subseteq E\) of up to \(f\) failing edges and let \(F_{0}=F\cap E(T_{s})\) be those failures that are in the tree. Consider the collection of rooted (sub-)trees \(T_{s}-F_{0}\). Define \(X_{F}\) to be the set of roots of those trees that contain some vertex from \(T\). For some \(v\in V\), let \(\mathcal{D}(v,F)\) be the \(\sigma\)-approximation of the replacement distance \(d_{G-F}(s,v)\) computed by the DSO \(\mathcal{D}\). Our \(sT\)-diameter oracle answers the query \(F\) by reporting the value
\[\widehat{D}=C+\max_{x\in X_{F}}\mathcal{D}(x,F).\]
Regarding the correctness of that answer, consider a vertex \(t\in T\). Let \(x\in X_{F}\) be the root of the subtree of \(T_{s}\) that contains \(t\). There is a path from \(s\) to \(t\) in \(G-F\) of length at most \(d_{G-F}(s,x)+d_{G}(x,t)\leqslant d_{G-F}(s,x)+d_{G}(s,t)\leqslant\mathcal{D}(x,F)+C\). Hence, we have \(d_{G-F}(s,t)\leqslant C+\max_{x\in X_{F}}\mathcal{D}(x,F)\), that is, \(\operatorname{diam}(G{-}F,s,T)\leqslant\widehat{D}\). We next prove \(\widehat{D}\leqslant(1+2\sigma)\cdot\operatorname{diam}(G{-}F,s,T)\). Let \(x_{0}\in X_{F}\) be the maximizer of \(\mathcal{D}(x,F)\), and \(t\in T\) be in the tree in \(T_{s}-F_{0}\) that is rooted at \(x_{0}\). Then, we have \(d_{G-F}(s,x_{0})\leqslant d_{G-F}(s,t)+d_{G}(t,x_{0})\leqslant d_{G-F}(s,t)+d _{G}(t,s)\leqslant 2\cdot d_{G-F}(s,t)\). We used here that \(G\) is undirected so that we can go "up" the tree from \(t\) to \(x_{0}\). From this, we get
\[\widehat{D}=C+\mathcal{D}(x_{0},F)\leqslant C+\sigma\cdot d_{G-F}(s,x_{0}) \leqslant C+2\sigma\cdot d_{G-F}(s,t)\leqslant(1+2\sigma)\cdot\operatorname{ diam}(G-F,s,T).\]
Given \(X_{F}\), computing \(\widehat{D}\) takes time \(O(f\mathtt{Q})\). It remains to show how to compute \(X_{F}\) from \(F\) in \(O(f^{2})\) time. Recall that we know the parent of every non-root node in \(T_{s}\). We use it to first obtain \(F_{0}\) from \(F\) in time \(O(f)\), as an edge \(\{a,b\}\) is in \(T_{s}\) iff \(a\) is the parent of \(b\) or vice versa.
For each edge \(e\in F_{0}\), let \(b(e)\) be the endpoint of \(e\) that is farther from the source \(s\). Next, define \(B_{0}=\{b(e)\,|\,e\in F_{0}\}\cup\{s\}\). Every root in \(X_{F}\) is either the source \(s\) or the "lower" endpoint of a failing edge, i.e., \(X_{F}\subseteq B_{0}\). For each \(b\in B_{0}\), let \(B_{0}(b)\) be the closest proper descendants of \(b\) in \(B_{0}\), if any. That is, on the paths in \(T_{s}\) between \(b\) and any \(b^{\prime}\in B_{0}(b)\) there is no other vertex from \(B_{0}\). We can compute the sets \(B_{0}(b)\) for all \(b\in B_{0}\) simultaneously in total time \(O(|B_{0}|^{2})=O(f^{2})\) as follows. A vertex is a proper ancestor of \(b^{\prime}\) iff its pre-order number is strictly smaller than that of \(b^{\prime}\) and its post-order number is strictly larger. So finding those takes time \(O(|B_{0}|)\) for each \(b^{\prime}\in B_{0}\). Then, \(b^{\prime}\) is in the set \(B_{0}(b)\) for the proper ancestor \(b\) with the highest pre-order number.
Finally, observe that a vertex \(b\in B_{0}\) lies in \(X_{F}\) if and only if there is at least one vertex of \(T\) that falls into the subtree of \(T_{s}\) rooted at \(b\) but not in any of the subtrees rooted at proper descendants of \(b\) in \(B_{0}\). To check this condition via the counts, we only need to consider the immediate descendants in \(B_{0}(b)\). If the element of \(T\) is in some lower subtree, then it is also accounted for by an immediate descendant. In summary, some \(b\in B_{0}\) is in \(X_{F}\) iff \(count(b)-\sum_{b^{\prime}\in B_{0}(b)}count(b^{\prime})>0\). This proves that \(X_{F}\) is computable in time \(O(f^{2})\).
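A compact Python rendition of this query step is given below. The names `pre`, `post`, `parent`, and `count` denote the precomputed annotations of \(T_{s}\); the sketch favours readability over the stated \(O(f^{2})\) bound (the filtering below is cubic in \(|B_{0}|\), which is still polynomial in \(f\)).

```python
def compute_X_F(F, s, parent, pre, post, count):
    """Compute the roots X_F of the subtrees of T_s - F_0 containing T-vertices."""
    def is_proper_ancestor(a, b):  # ancestor test via pre-/post-order numbers
        return pre[a] < pre[b] and post[a] > post[b]

    # B_0: the source s plus the lower endpoint b(e) of every tree edge e in F.
    B0 = {s}
    for (a, b) in F:
        if parent.get(b) == a:
            B0.add(b)
        elif parent.get(a) == b:
            B0.add(a)

    X_F = set()
    for b in B0:
        desc = [b2 for b2 in B0 if is_proper_ancestor(b, b2)]
        # B_0(b): closest proper descendants of b within B_0.
        immediate = [b2 for b2 in desc
                     if not any(is_proper_ancestor(m, b2) for m in desc if m != b2)]
        if count[b] - sum(count[b2] for b2 in immediate) > 0:
            X_F.add(b)
    return X_F
```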
We now handle multiple sources, that is, we build an \(f\)-FDO-\(ST\) for a general set \(S\). The next result is a straightforward reduction to the \(sT\)-case. As it turns out, it is enough to construct the \(sT\)-diameter oracle for two arbitrary vertices \(s\in S\) and \(t\in T\). Due to lack of space, the proof of Lemma 9 is deferred to the full version of the paper.
Let \(G=(V,E)\) be an undirected graph with \(n\) vertices, \(m\) edges, and possibly positive edge weights. Let \(S,T\subseteq V\) be non-empty sets of vertices, and \(s\in S\) and \(t\in T\) be two vertices. Suppose one is given access to an \(f\)-FDO-\(sT\) and an \(f\)-FDO-\(tS\) for \(G\) with respective preprocessing times \(\mathsf{P}_{sT}\) and \(\mathsf{P}_{tS}\), space requirements \(\mathsf{S}_{sT}\) and \(\mathsf{S}_{tS}\), query times \(\mathsf{Q}_{sT}\) and \(\mathsf{Q}_{tS}\), and stretches \(\sigma_{sT}\) and \(\sigma_{tS}\). Then, one can compute an \(f\)-FDO-\(ST\) for \(G\) with preprocessing time \(\mathsf{P}_{sT}+\mathsf{P}_{tS}\), space \(\mathsf{S}_{sT}+\mathsf{S}_{tS}\), query time \(\mathsf{Q}_{sT}+\mathsf{Q}_{tS}\), and stretch \(\sigma_{sT}+\sigma_{tS}+\min(\sigma_{sT},\sigma_{tS})\).
Combining Theorem 4 and Lemma 9 gives a reduction from \(f\)-FDO-\(ST\) to single-source \(f\)-DSOs. However, it results in a data structure with a stretch of \(3+6\sigma\), where \(\sigma\) is the original stretch of the \(f\)-DSO. We can improve this by not treating Lemma 9 as a black box.
Let \(G=(V,E)\) be an undirected graph with \(n\) vertices, \(m\) edges, and possibly positive edge weights. Let \(S,T\) be two non-empty subsets of \(V\). Given a single-source \(f\)-DSO for \(G\) with preprocessing time \(\mathsf{P}\), space \(\mathsf{S}\), query time \(\mathsf{Q}\), and stretch \(\sigma\), one can compute an \(f\)-FDO-\(ST\) for \(G\) with preprocessing time \(O(\mathsf{P}+m{+}n\log n)\), space \(O(\mathsf{S}+n)\), query time \(O(f^{2}+f\mathsf{Q})\), and stretch \(2+5\sigma\). For unweighted graphs, the preprocessing time can be improved to \(O(\mathsf{P}+m)\).
Proof.: Let \(s\in S\) and \(t\in T\) be arbitrary. The preprocessing algorithm of the \(f\)-FDO-\(ST\) uses the single-source \(f\)-DSO twice, once for source \(s\) and once for \(t\), to construct an \(f\)-FDO-\(sT\)\(\mathcal{D}_{sT}\) and an \(f\)-FDO-\(tS\)\(\mathcal{D}_{tS}\) both with stretch \(1+2\sigma\), as described in Theorem 4.
For a set \(F\subseteq E\) of at most \(f\) edge failures, let \(\mathcal{D}_{sT}(F)\) and \(\mathcal{D}_{tS}(F)\) be the respective \((1+2\sigma)\)-approximations of \(\operatorname{diam}(G-F,s,T)\) and \(\operatorname{diam}(G-F,t,S)\). Further, let \(\mathcal{D}_{st}(F)\) be a \(\sigma\)-approximation of \(d_{G-F}(s,t)\), obtained from the DSO with source \(s\). The query algorithm outputs \(\widehat{D}=\mathcal{D}_{tS}(F)+\mathcal{D}_{st}(F)+\mathcal{D}_{sT}(F)\). Let \((s_{0},t_{0})\in S\times T\). We have
\[d_{G-F}(s_{0},t_{0}) \leqslant d_{G-F}(s_{0},t)+d_{G-F}(t,s)+d_{G-F}(s,t_{0})\] \[\leqslant\mathcal{D}_{tS}(F)+\mathcal{D}_{st}(F)+\mathcal{D}_{sT }(F)\leqslant(2{+}5\sigma)\cdot\operatorname{diam}(G{-}F,S,T).\qed\]
## 5 Space Lower Bound
Recall that Theorem 6 states a space lower bound for \(f\)-FDOs and \(f\)-FDO-\(ST\)s with sensitivity \(f\geqslant 2\): if they have stretch better than \(\sfrac{5}{3}\), they must take \(\Omega(n^{3/2})\) space. The theorem is implied by the following lemma, which we prove in this section.
For infinitely many \(n\), there is a graph \(G=(V,E)\) with \(n\) vertices (and two sets \(S,T\subseteq V\)) such that any data structure that decides for any pair of edges \(e,e^{\prime}\in E\), whether \(G-\{e,e^{\prime}\}\) has diameter (resp., \(ST\)-diameter) 3 or 5 requires \(\Omega(n^{3/2})\) bits of space.
We first construct an auxiliary graph \(H\). Let \(n=6N\) for some \(N\) which is a perfect square. In the following, indices \(i,j\) range over the set \([\sqrt{N}]\) and \(k\) ranges over \(\{0,1\}\). Define four pairwise disjoint sets of vertices \(A=\{a[i,j]\}_{i,j}\), \(B=\{b[i,j,k]\}_{i,j,k}\), \(C=\{c[i,j,k]\}_{i,j,k}\), \(D=\{d[i,j]\}_{i,j}\) with respective cardinalities \(N,2N,2N,\) and \(N\). The vertex set of \(H\) is \(V(H)=A\cup B\cup C\cup D\). The edges of \(H\) are shown in Table 8 and are defined depending on the relations among the indices of the participating vertices. For example, an edge \(\{b[i,j,k],\ b[x,y,z]\}\) between two elements of \(B\) exists if and only if _exactly one_ of the equalities \(i=x\) and \(j=y\) holds, while \(k,z\in\{0,1\}\) can be arbitrary. Note that the number of edges of \(H\) is \(\Theta(N^{3/2})=\Theta(n^{3/2})\).
The diameter of \(H\) is at most \(3\).
Proof.: To verify that the diameter of \(H\) is at most \(3\), we give explicit paths of length at most \(3\) between all possible vertex pairs from the sets \(A\), \(B\), \(C\), and \(D\). Note that all paths below are reversible as the edges are undirected. The symbol \(\overline{x}\) stands for any index from \([\sqrt{N}\,]\) except \(x\), analogously for \(\overline{y}\).
\begin{table}
\begin{tabular}{l l l} \hline Set Pair & Vertex Pair & Edge Condition \\ \hline \(A\times A\) & & independent set \\ \(B\times B\) & \(b[i,j,k],\ b[x,y,z]\) & \((i=x)\oplus(j=y)\) \\ \(C\times C\) & \(c[i,j,k],\ c[x,y,z]\) & \((i=x)\oplus(j=y)\) \\ \(D\times D\) & & independent set \\ \(A\times B\) & \(a[i,j],\ b[x,y,z]\) & \((i=x)\wedge(z=0)\) \\ \(B\times C\) & \(b[i,j,k],\ c[x,y,z]\) & \((i=x)\wedge(j=y)\) \\ \(C\times D\) & \(c[i,j,k],\ d[x,y]\) & \((j=y)\wedge(k=0)\) \\ \hline \end{tabular}
\end{table}
Table 8: Conditions for the presence of edges between the vertex sets of the graph \(H\) in Section 5. The symbol \(\oplus\) stands for the exclusive or. All conditions are symmetric with respect to the index pairs \((i,x)\), \((j,y)\), and \((k,z)\), whence \(H\) is undirected.
* For vertices \(a[i,j],a[x,y]\in A\), we distinguish two cases depending on whether the first indices \(i\neq x\) are different or not. In the first case, the vertices are joined by the path \((a[i,j],b[i,y,0],b[x,y,0],a[x,y])\). In the second case, the middle two vertices are the same, thus the path shortens to \((a[i,j],b[x,y,0],a[x,y])\).
* Symmetrically, for vertices \(d[i,j],d[x,y]\in D\), the cases are defined with respect to the second indices, i.e., whether \(j\neq y\). The paths are \((d[i,j],c[x,j,0],c[x,y,0],d[x,y])\) and \((d[i,j],c[x,y,0],d[x,y])\), respectively.
* For vertices \(b[i,j,k],b[x,y,z]\in B\), the generic path is \((b[i,j,k],b[x,j,k],b[x,y,z])\). If \(i=x\), then the first two vertices are the same; if \(j=y\), the last two are. The argument for vertices \(c[i,j,k],c[x,y,z]\in C\) is the same.
* For the vertex pair \((a[i,j],b[x,y,z])\in A\times B\), the key point is that any edge inside of \(B\) changes exactly one of the first two indices. If \(i\neq x\), the path is \((a[i,j],b[i,y,0],b[x,y,z])\), otherwise it is \((a[i,j],b[x,\overline{y},0],b[x,y,z])\).
* The pair \((d[i,j],c[x,y,z])\in D\times C\) is handled symmetrically. If \(j\neq y\), the path is \((d[i,j],c[x,j,0],c[x,y,z])\), otherwise it is \((d[i,j],c[\overline{x},y,0],c[x,y,z])\).
* Vertex pair \((a[i,j],c[x,y,z])\in A\times C\): path \((a[i,j],b[i,y,0],c[i,y,z],c[x,y,z])\). Note that if \(i=x\) the last two vertices are the same. Vertex pair \((d[i,j],b[x,y,z])\in D\times B\): path \((d[i,j],c[x,j,0],b[x,j,z],b[x,y,z])\).
* Vertex pair \((a[i,j],d[x,y])\in A\times D\): path \((a[i,j],b[i,y,0],c[i,y,0],d[x,y])\).
* Vertex pair \((b[i,j,k],c[x,y,z])\in B\times C\): the path \((b[i,j,k],b[x,j,k],c[x,j,k],c[x,y,z])\) possibly shortens if consecutive vertices are the same.
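For small instances, the construction of \(H\) and the bound \(\operatorname{diam}(H)\leqslant 3\) can be checked mechanically. The following Python sketch builds \(H\) directly from the conditions of Table 8, assuming the `networkx` library is available.

```python
import itertools
import networkx as nx

def build_H(sqrt_N):
    """Construct the graph H of this section for indices in [sqrt_N] (Table 8)."""
    R, K = range(sqrt_N), (0, 1)
    H = nx.Graph()
    # B-B and C-C edges: exactly one of the index equalities holds (XOR).
    for t in ('b', 'c'):
        nodes = [(t, i, j, k) for i in R for j in R for k in K]
        for (_, i, j, k), (_, x, y, z) in itertools.combinations(nodes, 2):
            if (i == x) != (j == y):
                H.add_edge((t, i, j, k), (t, x, y, z))
    for i, x, y in itertools.product(R, R, R):
        H.add_edge(('a', i, x), ('b', i, y, 0))     # A-B: first indices equal, z = 0
        H.add_edge(('c', i, y, 0), ('d', x, y))     # C-D: second indices equal, k = 0
    for i, j, k, z in itertools.product(R, R, K, K):
        H.add_edge(('b', i, j, k), ('c', i, j, z))  # B-C: same (i, j) pair
    return H

# Sanity check of the lemma on a small instance.
assert nx.diameter(build_H(3)) <= 3
```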
Consider an arbitrary binary \(\sqrt{N}\times\sqrt{N}\times\sqrt{N}\) matrix (tensor) \(M\). We build a supergraph \(G\supseteq H\) embedding the information about the entries of \(M\) in the fault-tolerant diameter of \(G\) under dual failures, i.e., \(\operatorname{diam}(G{-}F)\) with \(|F|=2\). The number of possible matrices \(M\) will then imply the space lower bounds for diameter oracles for \(G\).
The graph \(G\) contains all vertices and edges of \(H\) and the following additional edges.
* For all \(i,j,y\in[\sqrt{N}]\), if \(M[i,j,y]=1\), then add \(\{a[i,j],b[i,y,1]\}\) as an edge of \(G\).
* For all \(i,x,y\in[\sqrt{N}]\), if \(M[i,x,y]=1\), then add \(\{c[i,y,1],d[x,y]\}\).
Note that the diameter of \(G\) remains at most \(3\).
Consider any four indices \(i,j,x,y\in[\sqrt{N}]\) such that \(i\neq x\) and \(j\neq y\). We define two sets \(F,F^{\prime}\), both containing pairs of vertices in \(V=V(H)\). First, let \(F\subseteq E(H)\subseteq E\) contain \(e_{1}=\{a[i,j],b[i,y,0]\}\) and \(e_{2}=\{c[i,y,0],d[x,y]\}\). Secondly, let \(F^{\prime}\) be the set comprising the two pairs \(e^{\prime}_{1}=\{a[i,j],b[i,y,1]\}\) and \(e^{\prime}_{2}=\{c[i,y,1],d[x,y]\}\). Note that \(e^{\prime}_{1}\) is an edge of \(G\) if and only if \(M[i,j,y]=1\), and \(e^{\prime}_{2}\) is an edge of \(G\) if and only if \(M[i,x,y]=1\).

Figure 1: Visual representation of the graph \(H\). Each vertex corresponds to a tuple \([i,j]\) or \([i,j,k]\) and belongs to one of the sets \(A\), \(B\), \(C\), or \(D\). To move from vertex \(a[i,j]\) to \(b[i^{\prime},j^{\prime},k^{\prime}]\) it must be the case that \(i=i^{\prime}\) and \(k^{\prime}=0\), but one can jump from any \(j\) to any \(j^{\prime}\). This is marked by the blue edge labeled \(j\) between sets \(A\) and \(B\), analogously for the other pairs of sets. When moving inside the sets \(B\) or \(C\), _either_ the first index \(i\neq i^{\prime}\) _or_ the second one \(j\neq j^{\prime}\) changes, marked by the red labels. Sets \(A\) and \(D\) have no internal edges.
For any four indices \(i,j,x,y\in[\sqrt{N}]\) such that \(i\neq x\) and \(j\neq y\), the diameter of \(G-(F\cup F^{\prime})\) is at least \(5\).
Proof.: We show that the distance between \(a[i,j]\) and \(d[x,y]\) in \(G-(F\cup F^{\prime})\) is at least \(5\). Suppose, for the sake of contradiction, that \(P=(a[i,j],w_{1},w_{2},w_{3},d[x,y])\) is a path of length at most \(4\). Then \(P\) must pass across the sets \(A\to B\), \(B\to C\), and \(C\to D\) and change the indices from \((i,j)\) to \((x,y)\).
The neighborhood of \(a[i,j]\) in \(G-(F\cup F^{\prime})\) is the set
\[\Big{\{}b[i,\overline{y},0]\mid\overline{y}\in[\sqrt{N}]\setminus\{y\}\Big{\}} \cup\Big{\{}b[i,\overline{y},1]\mid\overline{y}\in[\sqrt{N}]\setminus\{y\} \wedge M[i,j,\overline{y}]=1\Big{\}}.\]
The index \(i\) cannot change on the first edge \(\{a[i,j],w_{1}\}\) of \(P\) and, since the edges \(e_{1}=\{a[i,j],b[i,y,0]\}\in F\) and \(e^{\prime}_{1}=\{a[i,j],b[i,y,1]\}\in F^{\prime}\) are missing, the second index of \(w_{1}\) must differ from \(y\). Symmetrically, the change of \(j\) cannot take place on the last edge \(\{w_{3},d[x,y]\}\) and the first index of \(w_{3}\) must differ from \(x\). At least one of the edges \(\{w_{1},w_{2}\}\) or \(\{w_{2},w_{3}\}\) passes from \(B\) to \(C\), w.l.o.g. let this be \(\{w_{1},w_{2}\}\). This edge (already present in \(H\)) cannot change any of the indices. We are left with \(\{w_{2},w_{3}\}\). If \(P\) has strictly less than \(4\) edges, then \(w_{2}=w_{3}\). Otherwise, either both endpoints \(w_{2}\) and \(w_{3}\) are in \(B\), both are in \(C\) or there is exactly one in either. None of those cases allows one to make the _two_ necessary changes to the indices simultaneously.
The diameter of \(G-F\) is at most \(3\) if \(M[i,j,y]=M[i,x,y]=1\), and at least \(5\) if \(M[i,j,y]=M[i,x,y]=0\).

Proof.: If \(M[i,j,y]=M[i,x,y]=0\), then neither vertex pair in \(F^{\prime}\) is an edge of \(G\), so \(G-F=G-(F\cup F^{\prime})\) and the diameter is at least \(5\) by Lemma 3. Conversely, if \(M[i,j,y]=M[i,x,y]=1\), then both pairs in \(F^{\prime}\) are edges of \(G\) and the diameter of \(G-F\) is at most \(3\). The argument is very similar to that of Lemma 3, only that every time the edge \(e_{1}=\{a[i,j],b[i,y,0]\}\in F\) (respectively, \(e_{2}=\{c[i,y,0],d[x,y]\}\)) has been used, it is replaced by \(e^{\prime}_{1}=\{a[i,j],b[i,y,1]\}\in F^{\prime}\) (respectively, by \(e^{\prime}_{2}=\{c[i,y,1],d[x,y]\}\)).
We now finish the proof of Lemma 3. Suppose there exists a data structure that distinguishes whether after any two edges fail the diameter of the resulting graph is bounded by \(3\) or at least \(5\). We can use it to infer the entry \(M[i,j,y]\) for any triple \((i,j,y)\in[\sqrt{N}]^{3}\) of indices such that \(i\) and \(j\) differ from each other, and \(j\) and \(y\) differ. We compute the edges in \(F\) with respect to the indices \(i\neq x=j\neq y\) and apply Lemma 3 to check whether \(M[i,j,y]=M[i,x,y]=1\) or \(M[i,j,y]=M[i,x,y]=0\). For the assertion in Lemma 3 about the \(ST\)-diameter, we choose \(S=A\) and \(T=D\). Since there are \(2^{\sqrt{N}(\sqrt{N}-1)^{2}}=2^{\Omega(n^{3/2})}\) collections of possible answers, the oracle must take \(\Omega(n^{3/2})\) bits of space.
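To make the encoding argument concrete, the following hypothetical Python snippet shows how a single tensor entry would be recovered from such a data structure; the function `oracle` and the vertex naming follow the sketch above and are our own illustration, not part of the formal proof.

```python
def read_entry(oracle, i, j, y):
    """Recover M[i, j, y] for indices with i != j and j != y.

    oracle(F) is assumed to report whether diam(G - F) is at most 3 or
    at least 5. We set x = j, so that M[i, j, y] = M[i, x, y].
    """
    x = j
    F = [(('a', i, j), ('b', i, y, 0)),   # the edge e_1
         (('c', i, y, 0), ('d', x, y))]   # the edge e_2
    return 1 if oracle(F) <= 3 else 0
```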
|
2302.08765 | On the Regularising Levenberg-Marquardt Method for Blinn-Phong
Photometric Stereo | Photometric stereo refers to the process to compute the 3D shape of an object
using information on illumination and reflectance from several input images
from the same point of view. The most often used reflectance model is the
Lambertian reflectance, however this does not include specular highlights in
input images. In this paper we consider the arising non-linear optimisation
problem when employing Blinn-Phong reflectance for modeling specular effects.
To this end we focus on the regularising Levenberg-Marquardt scheme. We show
how to derive an explicit bound that gives information on the convergence
reliability of the method depending on given data, and we show how to gain
experimental evidence of numerical correctness of the iteration by making use
of the Scherzer condition. The theoretical investigations that are at the heart
of this paper are supplemented by some tests with real-world imagery. | Georg Radow, Michael Breuß | 2023-02-17T09:01:24Z | http://arxiv.org/abs/2302.08765v1 | # On the Regularising Levenberg-Marquardt Method for Blinn-Phong Photometric Stereo
###### Abstract
Photometric stereo refers to the process to compute the 3D shape of an object using information on illumination and reflectance from several input images from the same point of view. The most often used reflectance model is the Lambertian reflectance, however this does not include specular highlights in input images. In this paper we consider the arising non-linear optimisation problem when employing Blinn-Phong reflectance for modeling specular effects. To this end we focus on the regularising Levenberg-Marquardt scheme. We show how to derive an explicit bound that gives information on the convergence reliability of the method depending on given data, and we show how to gain experimental evidence of numerical correctness of the iteration by making use of the Scherzer condition. The theoretical investigations that are at the heart of this paper are supplemented by some tests with real-world imagery.
## 1 Introduction
The photometric stereo (PS) problem is a fundamental task in computer vision [5]. The aim of PS is to infer the 3D shape of an object from a set of multiple images. Thereby the images depict an object from the same perspective, but the illumination direction changes throughout the images. An important piece of information besides the illumination is the light reflectance of the object. The classic PS model [14, 13] is formulated in terms of Lambertian light reflectance. A Lambertian surface is characterised by diffuse reflectance and the independence of perceived shading from the viewing angle. The Lambertian set-up is certainly convenient for modeling, as it represents the simplest mathematical model for reflectance, and thus the resulting formulas and inverse problems are relatively simple. However, it is quite well known that in PS specular highlights [6] as well as non-Lambertian diffuse effects [7] may have an important impact on 3D reconstruction.
Let us also comment on some other basic characteristics of PS. Depending on the knowledge on the lighting, one discerns between calibrated and uncalibrated PS. In this work we consider only the calibrated case, where lighting directions
and intensities are known. Furthermore, the final goal of PS is to obtain a depth map, such that for each relevant image pixel three-dimensional information of the depicted object is obtained. While some approaches tackle this problem directly in terms of depth values [8], the more common strategy is to divide the depth computation into two sub-problems. In doing so, at first a map of normal vectors is computed, from which the (relative) depth is obtained in a second step. See for instance [11] for a survey on surface normal integration. In this paper we only consider the first of the latter tasks, that is, to find the normal vectors. Another aspect is the projection performed by the camera during image acquisition, which is commonly modelled as either orthographic or perspective. In this work we effectively address both settings.
**Our contribution.** In this paper, we consider some theoretical aspects of practical value in the optimisation of PS when using Blinn-Phong reflectance. Here we extend in several ways upon previous work; let us especially refer to [6], where the Blinn-Phong model is employed in a similar way as here. Thereby, we include the potentially most important specularity parameter, the so-called shininess, as an unknown in the optimisation, which is in contrast to [6] and many other works in the field. The approximate solution of the non-linear optimisation problem arising pixel-wise is performed by the regularising Levenberg-Marquardt method, see especially [2]. As this is an iterative method, it is important to assess the influence of the initialisation on the convergence and to give a rigorous bound as a stopping criterion. Furthermore, as the problem is non-linear, one can observe in practical examples that it may be difficult to minimise the underlying residual. To address this issue we investigate the use of a coarse-to-fine (CTF) scheme as well as an initialisation obtained through classical PS. We show how to exploit Scherzer's condition [3], which first appeared in [4]. This condition is considered for theoretical purposes within the construction of the method; here we use it to assess the convergence properties in our PS problem experimentally.
## 2 Classical Photometric Stereo
Let us recall the classic PS approach of Woodham [14, 13]. Given are a set of \(m\geq 3\) images \(\left(\mathcal{I}_{1},\ldots,\mathcal{I}_{m}\right)^{\top}=:\mathcal{I}\), so that \(\mathcal{I}:\Omega\to\mathbb{R}^{m}\), along with the corresponding lighting directions \(L_{k}\in\mathbb{R}^{3}\) with \(\|L_{k}\|=1\) for \(k=1,\ldots,m\), and the associated intensities \(l_{k}\geq 0\). Throughout the paper \(\|\cdot\|\) denotes the Euclidean norm or the induced spectral norm. The object to be reconstructed is usually depicted on a non-rectangular domain \(\Omega\subset\mathbb{R}^{2}\), which is embedded in the image domain.
The surface normal vectors \(\mathcal{N}:\Omega\to\mathbb{R}^{3}\) with \(\|\mathcal{N}(x,y)\|=1\) for all \((x,y)^{\top}\in\Omega\) and the albedo \(\rho^{d}:\Omega\to\mathbb{R}\) are fitted through a least squares approach, by minimising
\[\iint_{\Omega}\bigl{\|}\mathcal{R}^{\mathrm{L}}(x,y)-\mathcal{I}(x,y)\bigr{\|} ^{2}\,\mathrm{d}x\,\mathrm{d}y, \tag{1}\]
with reflectance function \(\mathcal{R}^{\mathrm{L}}\coloneqq\left(\mathcal{R}^{\mathrm{L}}_{1},\ldots, \mathcal{R}^{\mathrm{L}}_{m}\right)^{\top}\), consisting of components
\[\mathcal{R}^{\mathrm{L}}_{k}\coloneqq\rho^{d}l_{k}L_{k}^{\top}\mathcal{N},\quad k =1,\ldots,m. \tag{2}\]
In practice this boils down to finding a local solution \(N\in\mathbb{R}^{3}\) at every sample location \((x,y)^{\top}\) for the problem
\[\min_{N}\lVert LN-I\rVert^{2},\quad L\coloneqq\begin{pmatrix}l_{1}L_{1}^{\top }\\ \vdots\\ l_{m}L_{m}^{\top}\end{pmatrix},\quad I\coloneqq\mathcal{I}(x,y). \tag{3}\]
This, in turn, leads to the computation of the normal vectors and, as a byproduct, the albedo according to
\[N=\left(L^{\top}L\right)^{-1}L^{\top}I,\qquad\rho^{d}(x,y)=\lVert N\rVert, \qquad\mathcal{N}(x,y)=N/\lVert N\rVert\,. \tag{4}\]
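As an illustration, classical PS at a single pixel can be written in a few lines. The NumPy sketch below uses our own variable names and assumes the lighting matrix \(L\) has full column rank.

```python
import numpy as np

def classical_ps(I, L_dirs, l_int):
    """Per-pixel classical PS following Eqs. (3)-(4).

    I: (m,) observed intensities; L_dirs: (m, 3) unit lighting directions
    L_k; l_int: (m,) light intensities l_k.
    """
    L = l_int[:, None] * L_dirs                # rows l_k L_k^T as in Eq. (3)
    N = np.linalg.lstsq(L, I, rcond=None)[0]   # N = (L^T L)^{-1} L^T I
    rho_d = np.linalg.norm(N)                  # albedo, Eq. (4)
    return N / rho_d, rho_d                    # unit normal and albedo
```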
## 3 Blinn-Phong Photometric Stereo
In the general least squares approach Eq. (1), we can modify the reflectance function to account for non-Lambertian effects. To this end we investigate the Blinn-Phong (BP) model [9, 1], which has the form \(\mathcal{R}^{\mathrm{BP}}\coloneqq\left(\mathcal{R}^{\mathrm{BP}}_{1},\ldots,\mathcal{R}^{\mathrm{BP}}_{m}\right)^{\top}\) with components
\[\mathcal{R}^{\mathrm{BP}}_{k}\coloneqq\rho^{d}l_{k}L_{k}^{\top}\mathcal{N}+ \rho^{s}h_{k}\max\left\{0,\mathcal{H}_{k}^{\top}\mathcal{N}\right\}^{\alpha}, \tag{5}\]
\(k=1,\ldots,m\). We observe by (5) that in the BP model, diffuse reflection as in (2) is supplemented by a specular reflection term. Here \(\rho^{s}:\Omega\to\mathbb{R}\) denotes the specular albedo. Another material parameter is the specular sharpness or shininess \(\alpha:\Omega\to\mathbb{R}\). The halfway vectors \(\mathcal{H}_{k}:\Omega\to\mathbb{R}^{3}\) depend on the viewing directions \(\mathcal{V}:\Omega\to\mathbb{R}^{3}\) and are computed for \(k=1,\ldots,m\) as
\[\mathcal{H}_{k}(x,y)\coloneqq H_{k}/\lVert H_{k}\rVert,\qquad H_{k}\coloneqq L _{k}+\mathcal{V}(x,y). \tag{6}\]
Making use of focal length \(f\), the viewing directions \(\mathcal{V}^{\perp}\) and \(\mathcal{V}^{\angle}\) in the orthographic and perspective setting respectively are
\[\mathcal{V}^{\perp}=(0,0,1)^{\top},\qquad\mathcal{V}^{\angle}(x,y)=(x,y,f)^{ \top}. \tag{7}\]
We reinterpret \(l_{k}\) as the diffuse intensity of the light source and denote by \(h_{k}\geq 0\) the specular intensity. To ensure that image intensities are only increased due to diffuse and specular terms, it is reasonable to enforce \(\rho^{d},\rho^{s}\geq 0\). Furthermore, \(\rho^{d},\rho^{s}\leq 1\) ensures that at most as much image intensity is added as light intensity is supplied by each light source. Finally, it is reasonable to enforce \(\alpha>1\) to actually produce specular highlights through the specular term.
The BP model was originally proposed for computer graphics. It is not based on physical laws, but it enables to create plausible images with a still simple model compared to other possible approaches. Despite its simplicity, for use
in inverse problems in computer vision, the non-linearities in Eq. (5) may pose considerable hurdles.
Let us now discuss the modeling of the components in Eq. (5) along with a few adaptations we employ. First we turn our attention to the normal vectors \(\mathcal{N}\). One may model them through derivatives of the depth or its logarithm. In this approach we may parametrise them at a specific location through depth derivatives \(p,q\) as
\[\mathcal{N}(x,y)=\frac{N(p,q)}{\|N(p,q)\|}. \tag{8}\]
However, the step of obtaining a normal vector of length \(1\) in Eq. (8) adds another layer of non-linearity to the model. In numerical experiments we found this approach to be not very reliable. Therefore we opt for an approach in analogy to classical PS. In Eq. (5) we replace \(\rho^{d}\mathcal{N}=N\), introducing the auxiliary variable \(r=\rho^{s}/(\rho^{d})^{\alpha}\). By furthermore replacing \(\alpha=1+\exp(a)\), we ensure that \(\mathcal{R}^{\mathrm{BP}}\) has continuous first derivatives. Eq. (5) then takes the form
\[\mathcal{R}^{\mathrm{BP}}_{k}(N,r,a)=l_{k}L_{k}^{\top}N+rh_{k}\max\left\{0, \mathcal{H}_{k}^{\top}N\right\}^{1+\exp(a)}, \tag{9}\]
with \(r,a\in\mathbb{R}\) and \(N\in\mathbb{R}^{3}\).
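For reference, evaluating Eq. (9) for all \(m\) light sources at one pixel is straightforward. The NumPy sketch below uses our own notation; it is the forward model whose residual is minimised in Section 4.

```python
import numpy as np

def bp_reflectance(N, r, a, L_dirs, l_int, H_dirs, h_int):
    """Evaluate Eq. (9): diffuse term plus specular Blinn-Phong term.

    N: (3,) scaled normal; r, a: auxiliary variables; L_dirs, H_dirs:
    (m, 3) unit light/halfway directions; l_int, h_int: (m,) intensities.
    """
    diffuse = l_int * (L_dirs @ N)
    specular = r * h_int * np.maximum(0.0, H_dirs @ N) ** (1.0 + np.exp(a))
    return diffuse + specular
```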
## 4 On the Optimisation Strategy
With BP reflectance, we have to solve a non-linear least squares problem, to which end we utilise the regularising Levenberg-Marquardt (RLM) scheme [2, 3]. Writing the underlying task in standard notation, with this algorithm one may aim to find a solution \(\vec{x}\) of the problem
\[F(\vec{x})=\vec{y},\qquad F:\mathbb{R}^{n}\to\mathbb{R}^{m}, \tag{10}\]
with a known differentiable function \(F\). Let us note that the description and discussion of the RLM algorithm in [3] is in a more general setting. For simplicity we only give an overview of the algorithm based on finite dimensional spaces, as is fitting for the problem at hand.
It is furthermore assumed that the original data \(\vec{y}\) is not known, but with some \(\delta>0\) an estimate is required on how good the given data \(\vec{y}^{\delta}\) approximates the original data, according to
\[\left\|\vec{y}^{\delta}-\vec{y}\right\|\leq\delta. \tag{11}\]
Then with some starting point \(\vec{x}_{0}\) the iterative rule takes the form
\[\vec{x}_{k+1}=\vec{x}_{k}+\left(F^{\prime}(\vec{x}_{k})^{\top}F^{\prime}( \vec{x}_{k})+\alpha_{k}I_{n}\right)^{-1}F^{\prime}(\vec{x}_{k})^{\top}\left( \vec{y}^{\delta}-F(\vec{x}_{k})\right) \tag{12}\]
with Jacobian matrix \(F^{\prime}\), the \(n\times n\)-dimensional identity matrix \(I_{n}\), and a regularisation weight \(\alpha_{k}>0\) (not to be confused with the shininess \(\alpha\)), such that with a preassigned \(\rho\in(0,1)\) the new iterate \(\vec{x}_{k+1}\) fulfils
\[\left\|\vec{y}^{\delta}-F(\vec{x}_{k})-F^{\prime}(\vec{x}_{k})\left(\vec{x}_{ k+1}-\vec{x}_{k}\right)\right\|=\rho\big{\|}\vec{y}^{\delta}-F(\vec{x}_{k}) \big{\|}. \tag{13}\]
The _stopping criterion_ of the RLM scheme depends explicitly on the noise level \(\delta\) in the given data. To stop at an iterate \(\vec{x}_{k}\), it has to fulfil
\[\left\|\vec{y}^{\delta}-F(\vec{x}_{k})\right\|\leq\tau\delta, \tag{14}\]
with a preassigned \(\tau>2\), fulfilling \(\rho\tau>1\). For numerical experiments we set \(\rho=0.5,\ \tau=2.5\), following [3].
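A minimal NumPy sketch of one RLM update is given below. It assumes that a weight \(\alpha_{k}>0\) satisfying Eq. (13) exists and determines it by bisection, exploiting that the linearised residual norm grows monotonically with the regularisation weight; all function and variable names are ours.

```python
import numpy as np

def rlm_step(F, Jac, x, y_delta, rho=0.5, bisect_iters=60):
    """One update of the regularising Levenberg-Marquardt scheme, Eqs. (12)-(13)."""
    res = y_delta - F(x)
    J = Jac(x)
    target = rho * np.linalg.norm(res)

    def h(alpha):  # the RLM step of Eq. (12) for a given weight alpha
        A = J.T @ J + alpha * np.eye(x.size)
        return np.linalg.solve(A, J.T @ res)

    lo, hi = 0.0, 1.0
    while np.linalg.norm(res - J @ h(hi)) < target:
        hi *= 2.0                    # enlarge alpha until Eq. (13) is bracketed
    for _ in range(bisect_iters):    # bisection on the monotone residual norm
        mid = 0.5 * (lo + hi)
        if np.linalg.norm(res - J @ h(mid)) < target:
            lo = mid
        else:
            hi = mid
    return x + h(hi)
```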
The discussion of the RLM scheme in [3] relies on the strong Scherzer condition [4]. For the Jacobian matrices at two points \(\vec{x}_{1},\vec{x}_{2}\in\mathbb{R}^{n}\) there exists a matrix \(R=R(\vec{x}_{1},\vec{x}_{2})\) such that \(F^{\prime}(\vec{x}_{1})=RF^{\prime}(\vec{x}_{2})\) and
\[\|R-I_{m}\|\leq C^{R}\|\vec{x}_{1}-\vec{x}_{2}\| \tag{15}\]
with some \(C^{R}>0\), which is constant for all \(\vec{x}_{1},\vec{x}_{2}\in\mathbb{R}^{n}\). This condition imposes a certain regularity of the Jacobian matrix \(F^{\prime}\). In this context we are interested in a local approximation of \(C^{R}\). For two consecutive iterations \(\vec{x}_{k},\vec{x}_{k+1}\) we estimate \(R\) as a solution of \(F^{\prime}(\vec{x}_{k})=R(\vec{x}_{k},\vec{x}_{k+1})F^{\prime}(\vec{x}_{k+1})\) with minimal norm. Then we can locally approximate the constant in Eq. (15) as
\[C^{R,\text{loc}}_{k}=\frac{\|R(\vec{x}_{k},\vec{x}_{k+1})-I_{m}\|}{\|\vec{x}_{ k}-\vec{x}_{k+1}\|}. \tag{16}\]
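A minimal-norm solution \(R\) of \(F^{\prime}(\vec{x}_{k})=R\,F^{\prime}(\vec{x}_{k+1})\) can be obtained via the Moore-Penrose pseudoinverse. The following short sketch (names ours) evaluates Eq. (16) in this way.

```python
import numpy as np

def local_scherzer_constant(J_k, J_k1, x_k, x_k1):
    """Local approximation of the Scherzer constant, Eq. (16).

    R = J_k @ pinv(J_k1) is a minimal-norm solution of J_k = R @ J_k1;
    the spectral norm of R - I_m is divided by the step length.
    """
    R = J_k @ np.linalg.pinv(J_k1)
    m = J_k.shape[0]
    return np.linalg.norm(R - np.eye(m), 2) / np.linalg.norm(x_k - x_k1)
```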
Since \(F\) in Eq. (10) is nonlinear, we employ a CTF framework. In doing so the data is scaled to a coarser scale, _i.e._ to a lower resolution. The obtained result is then used as initialisation on the next finer scale, until we arrive at the original resolution.
Let us focus on the assumption Eq. (11). The noise level \(\delta\) governs the stopping criterion of the RLM scheme. If Eq. (11) is not fulfilled then the iterates may actually diverge.
At this point we make the assumption that our data \(\mathcal{I}(x,y)\) is a realisation of the BP model corrupted by additive white Gaussian noise, _i.e._ it can be modelled as
\[\mathcal{I}(x,y)=\mathcal{R}(x,y)+\varepsilon(x,y),\quad\text{for }(x,y)^{\top}\in\Omega. \tag{17}\]
Here \(\varepsilon(x,y)\) is a realisation of a multivariate normal distribution, such that the \(m\) components are independent and identically distributed (i.i.d.) with mean zero and standard deviation \(\sigma>0\), the corresponding density function is
\[f(X)=\frac{1}{\sqrt{2\pi}^{m}\sigma^{m}}\exp\left(-\frac{1}{2\sigma^{2}}\sum _{i=1}^{m}X_{i}^{2}\right), \tag{18}\]
_cf._[10]. The probability that Eq. (11) holds can be computed with the following result. The proof, which is technical but straightforward, is included for the readers convenience. The following result is also related to the Chi distribution.
**Proposition 1**.: _Let \(m\in\mathbb{N}\), \(\delta>0\) and let \(\varepsilon\) be a realisation of an \(m\)-dimensional multivariate normal distribution with mean zero, standard deviation \(\sigma>0\) and density Eq. (18). The probability of \(P\coloneqq P(\|\varepsilon\|\leq\delta|\sigma,m)\) can be computed as follows:_
1. _If_ \(m\) _is even, then_ \[P=1-\exp\left(-\frac{\delta^{2}}{2\sigma^{2}}\right)\sum_{i=0}^{\frac{m}{2}-1} \left(\frac{\delta^{2}}{2\sigma^{2}}\right)^{i}\frac{1}{i!}.\] (19)
2. _If_ \(m\) _is odd, then_ \[P=\sqrt{\frac{2}{\pi}}\Bigg{(}\frac{1}{\sigma}\int_{0}^{\delta} \exp\left(-\frac{r^{2}}{2\sigma^{2}}\right)\mathrm{d}r\\ -\exp\left(-\frac{\delta^{2}}{2\sigma^{2}}\right)\sum_{i=1}^{\frac {m-1}{2}}\Bigg{(}\left(\frac{\delta}{\sigma}\right)^{m-2i}\prod_{j=1}^{\frac{m+ 1}{2}-i}\left(\frac{1}{2j-1}\right)\Bigg{)}\Bigg{)}.\] (20)
Proof.: For any continuous probability density \(f\) we have
\[P=P(\|\varepsilon\|\leq\delta|\sigma,m)=\int_{\|X\|\leq\delta}f(X)\,\mathrm{d}X. \tag{21}\]
Since the density function in Eq. (18) is radially symmetric, this simplifies to
\[P=\int_{0}^{\delta}O_{m}(r)f(r,0,\ldots,0)\,\mathrm{d}r, \tag{22}\]
where
\[O_{m}(r)=2r^{m-1}\frac{\pi^{\frac{m}{2}}}{\Gamma\left(\frac{m}{2}\right)} \tag{23}\]
denotes the surface area of a sphere with radius \(r\) around the origin in \(\mathbb{R}^{m}\). \(\Gamma\) denotes the gamma function. Inserting Eq. (18), we write
\[P=\frac{2^{1-\frac{m}{2}}}{\sigma^{m}\Gamma\left(\frac{m}{2}\right)}\int_{0}^{ \delta}r^{m-1}\exp\left(-\frac{r^{2}}{2\sigma^{2}}\right)\mathrm{d}r. \tag{24}\]
Since \(\int r\exp(r^{2}/(2a))\,\mathrm{d}r=a\exp(r^{2}/(2a))+c\), for \(m>2\) the integral in Eq. (24) can be simplified by partial integration, _i.e._
\[\int_{0}^{\delta}r^{m-2}\cdot r\exp\left(-\frac{r^{2}}{2\sigma^{2 }}\right)\mathrm{d}r\\ =-\sigma^{2}\left[r^{m-2}\exp\left(-\frac{r^{2}}{2\sigma^{2}} \right)\right]_{r=0}^{\delta}+\sigma^{2}(m-2)\int_{0}^{\delta}r^{m-4}\cdot r \exp\left(-\frac{r^{2}}{2\sigma^{2}}\right)\mathrm{d}r. \tag{25}\]
We now consider the two cases of \(m\) being even or odd.
Let \(m\in\mathbb{N}\) be even. Then repeated partial integration of the integral Eq. (24) leads to
\[\int_{0}^{\delta}r^{m-1}\exp\left(-\frac{r^{2}}{2\sigma^{2}}\right) \mathrm{d}r \tag{26}\] \[=-\sum_{i=1}^{\frac{m}{2}-1}\sigma^{2i}\prod_{j=1}^{i-1}(m-2j) \left[r^{m-2i}\exp\left(-\frac{r^{2}}{2\sigma^{2}}\right)\right]_{r=0}^{\delta}\] \[\quad+\sigma^{m-2}\prod_{j=1}^{\frac{m}{2}-1}(m-2j)\int_{0}^{ \delta}r\exp\left(-\frac{r^{2}}{2\sigma^{2}}\right)\mathrm{d}r\] \[=-\sum_{i=1}^{\frac{m}{2}}\sigma^{2i}\prod_{j=1}^{i-1}(m-2j) \left[r^{m-2i}\exp\left(-\frac{r^{2}}{2\sigma^{2}}\right)\right]_{r=0}^{\delta}\] \[=-\sum_{i=1}^{\frac{m}{2}}\sigma^{2i}\frac{2^{i-1}\left(\frac{m} {2}-1\right)!}{\left(\frac{m}{2}-i\right)!}\left[r^{m-2i}\exp\left(-\frac{r^{ 2}}{2\sigma^{2}}\right)\right]_{r=0}^{\delta}\] \[=\sigma^{m}2^{\frac{m}{2}-1}\left(\frac{m}{2}-1\right)!-\sum_{i= 1}^{\frac{m}{2}}\sigma^{2i}\frac{2^{i-1}\left(\frac{m}{2}-1\right)!}{\left( \frac{m}{2}-i\right)!}\delta^{m-2i}\exp\left(-\frac{\delta^{2}}{2\sigma^{2}} \right).\]
This formula can easily be verified for \(m=2\), as in this case the initial integral simplifies to the form \(\int r\exp(r^{2}/(2a))\,\mathrm{d}r\). Inserting Eq. (26) and \(\Gamma(m/2)=(m/2-1)!\) into Eq. (24), we obtain after an index shift Eq. (19).
Now let \(m\in\mathbb{N}\) be odd. Again we use repeated partial integration on the integral in Eq. (24), until we arrive at
\[\int_{0}^{\delta}r^{m-1}\exp\left(-\frac{r^{2}}{2\sigma^{2}} \right)\mathrm{d}r=\sigma^{m-1}\prod_{j=1}^{\frac{m-1}{2}}(2j-1)\int_{0}^{ \delta}\exp\left(-\frac{r^{2}}{2\sigma^{2}}\right)\mathrm{d}r\\ -\sum_{i=1}^{\frac{m-1}{2}}\sigma^{2i}\frac{\prod_{j=1}^{\frac{m- 1}{2}}(2j-1)}{\prod_{j=1}^{\frac{m+1}{2}-i}(2j-1)}\delta^{m-2i}\exp\left(- \frac{\delta^{2}}{2\sigma^{2}}\right). \tag{27}\]
Plugging this together with
\[\Gamma\left(\frac{m}{2}\right)=\Gamma\left(\frac{m-1}{2}+\frac{1 }{2}\right)=\frac{(m-1)!\,\sqrt{\pi}}{\left(\frac{m-1}{2}\right)!\,2^{m-1}}\\ =\frac{\prod_{j=1}^{\frac{m-1}{2}}\left((2j)(2j-1)\right)\sqrt{ \pi}}{2^{\frac{m-1}{2}}\prod_{j=1}^{\frac{m-1}{2}}\left(2j\right)}=\frac{ \sqrt{\pi}}{2^{\frac{m-1}{2}}}\prod_{j=1}^{\frac{m-1}{2}}\left(2j-1\right) \tag{28}\]
into Eq. (24) leads to Eq. (20).
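In practice, the bound \(\delta\) for a desired confidence level can be obtained without evaluating Eqs. (19) and (20) directly: since \(\|\varepsilon\|^{2}/\sigma^{2}\) is \(\chi^{2}\)-distributed with \(m\) degrees of freedom, \(\delta\) follows from the chi-squared quantile function. A one-line sketch, assuming SciPy is available:

```python
from scipy.stats import chi2

def noise_bound(sigma, m, confidence=0.95):
    """Smallest delta with P(||eps|| <= delta) >= confidence, cf. Proposition 1."""
    return sigma * chi2.ppf(confidence, df=m) ** 0.5
```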
## 5 Experiments
Since we focus on the computed vector fields of surface normals, it appears adequate to employ colour coding of surface normals for visual assessment, _cf._ Figure 1. For quantitative evaluation we consider here the standard average angular error (AAE), where the averaging is performed over the object domain. Let us note that we use the result obtained through classical PS as an initialisation for the BP model. Throughout the experiments we computed \(\delta\) according to Proposition 1, such that Eq. (11) is fulfilled with a probability of \(95\%\). We observed that the choice of this confidence level is not critical for the outcome of our experiments.
_Synthetic Test Example._ As a synthetic experiment for our investigations we consider the _sphere_ example, see Figure 1. Let us note that we consider an orthographic setting for all the _sphere_ experiments. As we observe in Figure 1, in this experiment the developed computational model and set-up enable us to obtain a nearly perfect result. For optimisation we employed in total 5 input images, of which we show here just one example. For comparison, we give the corresponding result obtained by Lambertian PS applied to analogous input images, where we filtered the specular highlights by the subspace technique proposed in [15], which is supposed to make the input nearly Lambertian. As is confirmed here visually as well as quantitatively, it appears favorable (at least in this example) to employ an explicit modeling as in the proposed BP framework.
Let us note that in fact this test example may not be too easy, as can be observed by the results obtained by preprocessing and Lambertian PS. The reason is that the specular highlights in the input are not perfectly distributed over the sphere and may result in distortions if not being accounted for sufficiently accurate in the model.
Figure 1: _(left-to-right:)_ one of the input images of the _sphere_ rendered using the BP model; colour coded vector field of ground truth normal vectors; classical PS with preprocessing [15], AAE 1.02; developed BP framework with CTF, AAE 0.37

_Evaluation of Scherzer's Condition._ As discussed in Sec. 4, between two iterates of the RLM scheme we observe the local approximation \(C_{k}^{R,\text{loc}}\) of the constant in Eq. (15) according to Eq. (16). As the Scherzer condition is an important assumption for the results in [3], we opt to add a break condition, where the algorithm stops if the estimate grows too large. In practice the algorithm is halted if we observe an iterate with \(C_{k}^{R,\mathrm{loc}}\geq 2000\). As can be seen in Figs. 2 and 3, this is usually the case at locations where specular highlights may occur, as the angle between halfway vectors and surface normals becomes small. One may interpret this result in the way that the energy that is minimised features many small variations near highlights, which makes it difficult to obtain a reliable local minimum.
We evaluated restarting the RLM scheme with a larger parameter \(\rho\) in Eq. (13) whenever it stopped before an iterate fulfilled Eq. (14). This may lead to a smaller trust region and to a more stable behaviour of the algorithm. However, in general we did not observe a significant increase in quality. The results displayed here were thus computed without restarting the RLM scheme, giving an account of the unstabilised version of the method.
_Real World Test Example._ In order to assess the properties and usefulness of the developed numerical BP framework, we exploit here a selected variety of examples taken from the _DiLiGent_ data set [12], which gives an account of photographed real-world objects with different reflectance properties. Here we do not employ a CTF scheme, as we rely on the initialisation obtained with classical PS. Let us note that the underlying model is now (in practice, weakly) perspective.
As can be visually assessed by means of Figure 3, the proposed model along with its adaptations performs very reasonably but in some details not perfect, depending on the actual example. For clarifying thereby the zones of influence of the specular terms we depict masks showing the object parts where the BP model gives an effective contribution. When taking into account the properties of the considered examples, it appears especially that the broad specularities as appearing in the input (teddy bear, goblet) may result in a certain inaccuracy. In turn, when highlights appear but are not too strong (cat, tea pot), results are quite convincing, given that the underlying reflectance in these cases is supposed to be non-linear in the diffuse reflectance as the underlying material is rough. In the tested real world setting from _DiLiGent_ the results are overall of similar quality to the preprocessed Lambertian method. Therefore we conjecture that
Figure 2: Algorithmic behaviour in the _sphere_ experiment _(left)_ and an example from the _DiLiGenT_ data set [12] _(right)_. White depicts the locations where the RLM scheme stopped due to the \(C_{k}^{R,\mathrm{loc}}\geq 2000\) criterion.
our numerical BP framework is especially suited for dealing with objects whose highlights are not too strong, while at the same time being able to handle a certain range of diffuse reflectance of rough materials.
## 6 Conclusion
We discussed the BP reflectance in the context of PS. The augmentation of classical PS with this reflectance model is straightforward, but solving the arising optimisation problem is less so. This task can be tackled with the RLM scheme, which leads to satisfactory results.
The findings for the implementation of the RLM scheme may be translated to other problems, since the assumption that the data follows a normal distribution is very common. The application of the BP model to more complex data sets poses considerable hurdles, which may be addressed in future work.
Figure 3: _(left-to-right:)_ Examples from the _DiLiGenT_ data set. _(top-to-bottom:)_ Visualisation of ground truth normals; normal fields based on BP (note the effect of the violated Scherzer condition at some highlights on the goblet); masks based on half directions. White depicts locations where the maximum of the cosines between halfway vectors and the normal vector obtained with classical PS is \(\geq 0.99\). |
2310.07647 | Generation of isolated flat bands with tunable numbers through Moiré
engineering | Unlike the spin-1/2 fermions, the Lieb and Dice lattices both host
triply-degenerate low-energy excitations. Here, we discuss Moir\'e structures
involving twisted bilayers of these lattices, which are shown to exhibit a
tunable number of isolated flat bands near the Fermi level. These flat bands
remain isolated from the high-energy bands even in the presence of small
higher-order terms and chiral-symmetry-breaking interlayer tunneling. At small
twist angles, thousands of flat bands can be generated to substantially amplify
flat band physics. We demonstrate that these flat bands carry substantial
quantum weight so that upon adding a BCS-type pairing potential, the associated
superfluid weight would also be large, and the critical superconducting
temperature would be tunable. Our study suggests a new pathway for flat-band
engineering based on twisted bilayer Lieb and Dice lattices. | Xiaoting Zhou, Yi-Chun Hung, Baokai Wang, Arun Bansil | 2023-10-11T16:45:04Z | http://arxiv.org/abs/2310.07647v2 | # Generation of isolated flat bands with tunable numbers through Moire engineering
###### Abstract
Unlike the spin-1/2 fermions, the Lieb and Dice lattices both host triply-degenerate low-energy excitations. Here we discuss Moire structures involving twisted bilayers of these lattices, which are shown to exhibit a tunable number of isolated flat bands near the Fermi level. These flat bands remain isolated from the high-energy bands even in the presence of small higher-order terms and chiral-symmetry-breaking interlayer tunneling. At small twist angles, thousands of flat bands can be generated to substantially amplify flat band physics. We demonstrate that these flat bands carry substantial quantum weight so that upon adding a BCS-type pairing potential, the associated superfluid weight would also be large, and the critical superconducting temperature would be tunable. Our study suggests a new pathway for flat-band engineering based on twisted bilayer Lieb and Dice lattices.
_Introduction.--_ Dispersionless electronic bands, commonly referred to as flat bands, provide a foundation for exploring strongly correlated physics. Flat bands generate high densities of states (DOS). As a result, the kinetic energy of the associated carriers is suppressed, and the interaction energy begins to dominate, triggering various correlated physical phenomena, especially when the flat bands are isolated near the Fermi energy [1, 2, 3]. The exploration of flat-band systems is thus of fundamental importance and a subject of much current interest.
There are many mechanisms for generating flat bands. They can be induced by the geometry or symmetry of the lattice to produce the so-called compact localized states (CLS) [4, 5, 6]. In certain bipartite lattices [1, 2, 3], the wave functions are localized on certain sublattices due to destructive interference. Examples include the Kagome [7], Lieb [8], and Dice lattices [1]. Such geometry-induced flat bands can be isolated from the high-energy bands via effects of spin-orbit coupling, dimerization, anisotropic strain [9], or stacking [10, 11, 12, 13]. The Dice lattice also hosts pseudospin-1 low-energy states at high-symmetry points [14, 15], and these have been experimentally realized in the SrTiO\({}_{3}\)/SrIrO\({}_{3}\)/SrTiO\({}_{3}\) superlattice [16] and in LaAlO\({}_{3}\)/SrTiO\({}_{3}\) (111) quantum wells [17, 18].
Another widely recognized mechanism involves the Moire bands observed in twisted van der Waals systems, such as bilayer graphene [19, 20, 21, 22], which supports superconductivity and provides a tunable system for realizing correlated phases [23, 24]. However, all known Moire materials possess a fixed number of flat bands near the Fermi level, and their physics is controlled by the bandwidth that varies with twist angle [25, 26, 20, 19, 21, 22].
Here, we combine the two aforementioned mechanisms for generating flat bands and discuss properties of Moiré structures based on twisted bilayer bipartite lattices, especially the Lieb and Dice lattices. These Moiré structures are found to manifest a tunable number of isolated flat bands depending on the twist angle. We elucidate the origin of the flat bands through an analysis of continuum models, where the valley structure is generated by the inclusion of second nearest neighbor (SNN) hoppings. Our conclusions are further substantiated by tight-binding calculations for the twisted bilayer lattices in the absence of SNN hoppings.
_Twisted bilayer Lieb/Dice lattice.--_ We consider the generalized Lieb or Dice lattice as an unrotated monolayer system in which the second nearest neighbor hoppings are included (Figs. 1(a) and 1(b)). With the second nearest-neighbor hopping \(t_{2}\), the valley structures develop at the \(M\) point in the Lieb and at \(K\) and \(K^{\prime}\) in the Dice lattice, allowing low-energy expansions around these triply degenerate points, see Supplementary Material. Continuum models for the twisted bilayer Lieb (TBL) and twisted bilayer Dice (TBD) lattices can be obtained straightforwardly by following the Bistritzer-MacDonald formalism [26]:
\[H(\theta)=\begin{pmatrix}H_{0}(\frac{\theta}{2})&T\\ T^{\dagger}&H_{0}(-\frac{\theta}{2})\end{pmatrix}. \tag{1}\]
Figure 1: A schematic diagram of the Lieb (**a**) and Dice (**b**) lattices, in which the nearest-neighbor hopping \(t\) is marked in blue and the second nearest-neighbor hopping \(t_{2}\) is marked in red. The Moiré pattern of the twisted bilayer Lieb (**c**) and Dice (**d**) lattices. Centers of the high-symmetry stacked regions are marked by colored circles. Black arrows give the translation vectors of the Moiré unit cell.
Here, for the TBL:
\[H_{0}(\theta)=H_{0}^{(\rm Lieb)}(\theta)=t((\tilde{q}_{x})\lambda_{2}+(\tilde{q}_ {y})\lambda_{7})+t_{2}q^{2}, \tag{2}\]
where \(\tilde{q}_{x}\equiv q_{x}\cos(\theta)-q_{y}\sin(\theta)\), \(\tilde{q}_{y}\equiv q_{x}\sin(\theta)+q_{y}\cos(\theta)\), and \(\vec{q}=\vec{k}-\vec{k}_{M}^{(\theta)}\), with \(\vec{k}_{M}^{(\theta)}\) being the \(k\)-vector of the \(M\) valley in the _rotated_ monolayer Brillouin zone (FIG. 2(a)). The \(\{\lambda_{i}\}\) are Gell-Mann matrices [27]. On the other hand, for the TBD:
\[H_{0}(\theta)=H_{0}^{(\rm Dice)}(\theta,\zeta)=\frac{3t}{2}(\zeta q_{x}\hat{x} -q_{y}\hat{y})\cdot\vec{S}^{(\theta)}+t_{2}q^{2}, \tag{3}\]
where \(\vec{S}^{(\theta)}\equiv e^{-i\frac{\theta}{2}S_{3}}\vec{S}e^{i\frac{\theta}{2}S_{3}}\) and \(\zeta=\pm\) indicates the valley degree of freedom. The vector \(\vec{q}=\vec{k}-K_{\zeta}^{(\theta)}\), with \(K_{\zeta}^{(\theta)}\) being the \(K_{\zeta}\) valley in the _rotated_ monolayer BZ of the Dice lattice (FIG. 2(b)). The \(\{S_{i}\}\) are matrix representations of spin-1 in the basis of \(s_{z}\) eigenstates [28]. Note that the basis in Eqs. 2 and 3 is \(\begin{pmatrix}A,&B,&C\end{pmatrix}^{T}\), where \(A\), \(B\), and \(C\) are sublattices of the monolayer lattices (FIG. 1).
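As a minimal numerical sketch of the intralayer block in Eq. (2) (our own illustration, not code from the paper; the explicit \(\lambda_{2},\lambda_{7}\) matrices follow the standard Gell-Mann convention of [27], and all parameter values below are arbitrary):

```python
import numpy as np

# Gell-Mann matrices lambda_2 and lambda_7 entering Eq. (2)
L2 = np.array([[0, -1j, 0], [1j, 0, 0], [0, 0, 0]])
L7 = np.array([[0, 0, 0], [0, 0, -1j], [0, 1j, 0]])

def h0_lieb(qx, qy, t=1.0, t2=0.0, theta=0.0):
    """Monolayer Lieb valley Hamiltonian of Eq. (2) in the (A, B, C) basis."""
    qxr = qx * np.cos(theta) - qy * np.sin(theta)  # \tilde{q}_x
    qyr = qx * np.sin(theta) + qy * np.cos(theta)  # \tilde{q}_y
    return t * (qxr * L2 + qyr * L7) + t2 * (qx**2 + qy**2) * np.eye(3)

# For t2 = 0 the spectrum is {0, +t|q|, -t|q|}: one band is exactly flat,
# reflecting the triply degenerate low-energy excitation of the Lieb lattice;
# a finite t2 lifts the flat band into the parabolic dispersion discussed in
# the text.
print(np.round(np.linalg.eigvalsh(h0_lieb(0.3, -0.2)), 6))
# -> approx [-0.360555, 0.0, 0.360555]
```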
The interlayer coupling \(T\) has the following form [26; 29]:
\[T=\begin{pmatrix}\omega_{1}g(\vec{r})&\omega_{2}g(\vec{r}-\vec{r}_{12})& \omega_{3}g(\vec{r}-\vec{r}_{13})\\ \omega_{2}g(\vec{r}+\vec{r}_{12})&\omega_{1}g(\vec{r})&\omega_{2}g(\vec{r}- \vec{r}_{23})\\ \omega_{3}g(\vec{r}+\vec{r}_{13})&\omega_{2}g(\vec{r}+\vec{r}_{23})&\omega_{1} g(\vec{r})\end{pmatrix}, \tag{4}\]
where \(g(\vec{r})=\sum_{i}e^{i\vec{q}_{i}\cdot\vec{r}}\), with \(\vec{q}_{i}\) representing the momentum transfer of interlayer tunneling between valleys from different layers in the Moiré Brillouin zone (Figs. 2(a) and 2(b)). For the TBL, \(\vec{q}_{1}=\frac{\pi}{L}\hat{x}+\frac{\pi}{L}\hat{y}\), \(\vec{q}_{2}=-\frac{\pi}{L}\hat{x}+\frac{\pi}{L}\hat{y}\), \(\vec{q}_{3}=\frac{\pi}{L}\hat{x}-\frac{\pi}{L}\hat{y}\), and \(\vec{q}_{4}=-\frac{\pi}{L}\hat{x}-\frac{\pi}{L}\hat{y}\). For the TBD, \(\vec{q}_{1}=-\frac{4\pi}{3L_{x}}\hat{y}\), \(\vec{q}_{2}=\frac{2\pi}{\sqrt{3}L_{x}}\hat{x}+\frac{2\pi}{3L_{x}}\hat{y}\), \(\vec{q}_{3}=-\frac{2\pi}{\sqrt{3}L_{x}}\hat{x}+\frac{2\pi}{3L_{x}}\hat{y}\). In Eq. 4, \(\vec{r}_{ij}\) represents the center of the different stacked regions in the Moiré unit cell, and the origin of the coordinates is set at the center of the AA-stacked region; see the Supplementary Material for details of the lattice geometry of the high-symmetry stackings. For the TBL, the positions of the different stacked regions are \(\vec{r}_{12}=\vec{r}_{AB_{x}}=\frac{L_{x}}{2}\hat{y}\), \(\vec{r}_{23}=\vec{r}_{AB_{y}}=\frac{L_{x}}{2}\hat{x}\), and \(\vec{r}_{13}=\vec{r}_{AB_{xy}}=\frac{L_{x}}{2}(\hat{x}+\hat{y})\) (see FIG. 1(a)). For the TBD, the positions of the different stacked regions are \(\vec{r}_{12}=\vec{r}_{23}=-\vec{r}_{13}=\vec{r}_{AB}=\frac{L_{x}}{\sqrt{3}}\hat{x}\) (FIG. 1(b)).
Electronic structure of twisted bilayers.--The band structures obtained from diagonalizing Eq. 1 in the plane-wave basis are shown in FIG. 3(a) for the TBL and FIG. 4(a) for the TBD; both show isolated flat bands at the band edge around the Fermi level. These isolated flat bands are generated by gapping the folded parabolic dispersion originating from the second nearest-neighbor hopping through the interlayer tunneling \(\omega_{1}\). To demonstrate this fact and gain more insight into the TBL and TBD, let us consider the case where \(\omega_{1}=\omega_{3}=t_{2}=0\). In this scenario, the electrons hop only between the \(A,B\) sublattices and the \(A,C\) sublattices in both layers, which keeps the twisted bilayer lattice bipartite [1; 2; 3], making the Hamiltonian in Eq. 1 non-invertible and, therefore, hosting robust zero-energy states [30]. When \(t_{2}=0\), the existence of flat bands poses a challenge in validating Moiré band structure calculations using low-energy continuum models.
To properly describe the TBL and TBD when \(t_{2}=0\), we construct the tight-binding model with a Slater-Koster parameterization. The setting \(\omega_{1}=\omega_{3}=0\) corresponds to no interlayer tunneling between the same sublattices on different layers and no interlayer tunneling between the \(A\) and \(C\) sublattices on different layers. When second-nearest neighbor hopping is not considered, the vanishing \(\omega_{1}\) and \(\omega_{3}\) lead to isolated exact flat bands at \(E=0\) with considerable degeneracies in the Moiré Brillouin zone (see FIG. 3(b) and FIG. 4(b)). In this scenario, the system is controlled by the parameter \(\alpha\equiv\frac{\omega_{2}}{tk_{\theta}}\). Due to the special hopping structure in the TBD and TBL when \(\omega_{1}=\omega_{3}=t_{2}=0\), the number of these flat bands is one third of the total number of bands. Remarkably, these flat bands remain isolated for a large range of \(\alpha\), as shown in FIG. 5(a) and FIG. 5(b).
Due to time-reversal symmetry, these isolated flat bands have no net Chern numbers. The Wilson loop calculation also shows that the isolated flat bands in the TBL and TBD do not have non-trivial winding of the hybridized Wannier centers (see FIG. 6(a) and FIG. 6(b)). By the same symmetry, the flat bands in the TBL have no Berry curvature around
Figure 2: (**a**) BZ of the Lieb lattice on the top (blue) and bottom (red) layer. \(\vec{G}_{i}\) are the reciprocal lattice vectors for an unrotated monolayer Lieb lattice. (**b**) Moiré BZ of the TBL. The \(M\)-points of the top layer (blue) and of the bottom layer (red) map to the \(M^{(m)}\) point and the \(\Gamma^{(m)}\) point in the Moire Brillouin zone, respectively. (**c**) The Brillouin zone of the Dice lattice on the top (blue) and bottom (red) layer. The \(\vec{G}_{i}\) indicates the reciprocal lattice vectors of an unrotated monolayer Dice lattice. (**d**) The Moiré Brillouin zone of the TBD system. The \(K\)-points of the top layer (blue) and of the bottom layer (red) map to the \(K^{\prime(m)}\) point and the \(K^{(m)}\) point in the Moiré Brillouin zone, respectively.
its \(M\) valley. In contrast, the flat bands in the TBD are found to host considerable Berry curvature around the \(K,K^{\prime}\) valleys (see FIG. 6(c)). Note that the Berry curvature around each valley does not give a non-trivial valley Chern number, which is consistent with the analysis from the pseudo-Landau level representation of the isolated flat bands in the TBD [29]. More specifically, due to the bipartite nature, one third of the total number of Landau levels in the pseudo-Landau level spectrum are zero modes. However, we find that the zero-mode of the \(n=0\) pseudo-Landau level cancels out the contribution to the Hall conductivity from the zero-modes of the \(n\neq 0\) pseudo-Landau levels (see Supplementary Material). In contrast to twisted bilayer graphene, the zero-modes generated by the \(n=0\) Landau level in the TBD do not contribute an integer conductivity quantum, and the zero-modes of the pseudo-Landau levels in the TBD are also generated by \(n\neq 0\) Landau levels, which makes the zero-modes of the pseudo-Landau levels contribute no net Chern number.
Discussion.--In contrast to prior twisted materials, where the count of low-energy flat bands remains constant as the twist angle varies, the twisted bilayer Lieb (TBL) and Dice (TBD) lattices exhibit a twist-angle-dependent number of flat bands near the Fermi level. Notably, this number increases at a rate greater than \(\frac{1}{\theta^{2}}\) as the twist angle \(\theta\) decreases (FIG. 7).
The introduction of second nearest-neighbor (SNN) hopping \(t_{2}\) and chiral-symmetry-breaking tunneling terms \(\omega_{1}\) or \(\omega_{3}\) disrupts the bipartite nature of the Moiré lattice structure, resulting in the coupling of these flat bands. However, as long as these effects are not overly significant, the cluster of flat bands near the Fermi level remains isolated from the high-energy spectra. Specifically, the consideration of SNN hopping \(t_{2}\) leads to a parabolic dispersion of the flat bands near the valleys. Meanwhile, the inclusion of \(\omega_{1}\) acts as an interaction, opening a gap in the parabolic dispersion and creating isolated flat bands at the upper band edge of the flat-band cluster. In this scenario, the system is governed by three key parameters: \(\alpha\equiv\frac{\omega_{2}}{tk_{\theta}}\), \(\beta\equiv\frac{t_{2}}{tk_{\theta}}\), and \(\gamma\equiv\frac{\omega_{1}}{tk_{\theta}}\). A similar effect is expected with the inclusion of \(\omega_{3}\). Further exploration of the topological phase diagram in terms of \(\alpha\), \(\beta\), and \(\gamma\) would be interesting.
It is noteworthy that the twisted bilayer Dice lattice
Figure 5: (**a**) The gap between the flat bands and other high-energy bands in the tight-binding model of TBL with \(t_{2}=\omega_{1}=\omega_{3}=0\) as a function of interlayer tunneling \(\omega_{2}\). (**b**) The gap between the flat bands and the high-energy bands in the tight-binding model of TBD with \(t_{2}=\omega_{1}=\omega_{3}=0\) as a function of interlayer tunneling \(\omega_{2}\).
Figure 4: (**a**) The band structure of the Hamiltonian in Eq. 1 for TBD with \(\alpha\cong 19.429\), in which \(t=5.817\) eV, \(t_{2}=-0.1\) eV, \(\omega_{1}=0.001\) eV, \(\omega_{2}\cong 1.163426\) eV, and \(\omega_{3}=0\) eV. (**b**) The band structure of the TBD generated by the tight-binding model with Slater-Koster parameterization, which corresponds to the continuum model in Eq. 1 with \(\alpha\cong 19.429\) and \(t_{2}=\omega_{1}=\omega_{3}=0\).
(TBD) exhibits a valley structure characterized by a concentrated Berry curvature, signifying a substantial quantum weight. Consequently, introducing a BCS-type pairing potential leads to a noteworthy superfluid weight [33; 34; 35]. Moreover, the dramatic variation in the density of states (DOS) with respect to the twist angle suggests a promising route to tuning the transition temperature by adjusting the twist angle. As the number of flat bands increases with decreasing twist angle, the transition temperature is anticipated to increase at smaller twist angles due to the growing DOS [36]. This paves the way toward controllable high-temperature superconductivity.
The count of flat bands near the Fermi level undergoes a dramatic change with varying twist angles, especially noticeable at small angles. Therefore, Moire engineering of the TBL and TBD offers an exceptional platform for investigating strongly correlated physics, in which a slight tweak in the twist angle can exert a large influence on low-energy physics. For instance, manipulating the twist angle allows for engineering spontaneous ferromagnetic phase transitions based on the Stoner criterion. This is achieved by inducing dramatic changes in the DOS near the Fermi level, considering the spin degree of freedom through significant spin-orbit coupling or the presence of magnetic impurities [36]. Notably, the TBL and TBD exemplify how Moire engineering paves a novel pathway for manipulating geometry-induced flat bands in bipartite lattices.
The Moire pattern provides not only an alternative way, distinct from dimerization or anisotropic strain, to isolate flat bands in bipartite lattices but also enables the creation of a large supercell, folding numerous flat bands into the first Brillouin zone. Additionally, while inheriting flat band physics from the unrotated monolayer, the Moire pattern generates a distinctive electronic structure characterized by highly degenerate flat bands near the Fermi level. Further investigation into strongly correlated physics and the corresponding phase diagram within the realm of twisted bipartite lattices will be interesting.
## Acknowledgements
We thank Gregory Fiete for the discussions. The work was supported by the Air Force Office of Scientific Research under award number FA9550-20-1-0322 and benefited from the computational resources of Northeastern University's Advanced Scientific Computation Center (ASCC) and the Discovery Cluster.
_Note:_ After the completion of our manuscript, we became aware of the related paper [37].
|
2303.09752 | CoLT5: Faster Long-Range Transformers with Conditional Computation | Many natural language processing tasks benefit from long inputs, but
processing long documents with Transformers is expensive -- not only due to
quadratic attention complexity but also from applying feedforward and
projection layers to every token. However, not all tokens are equally
important, especially for longer documents. We propose CoLT5, a long-input
Transformer model that builds on this intuition by employing conditional
computation, devoting more resources to important tokens in both feedforward
and attention layers. We show that CoLT5 achieves stronger performance than
LongT5 with much faster training and inference, achieving SOTA on the
long-input SCROLLS benchmark. Moreover, CoLT5 can effectively and tractably
make use of extremely long inputs, showing strong gains up to 64k input length. | Joshua Ainslie, Tao Lei, Michiel de Jong, Santiago Ontañón, Siddhartha Brahma, Yury Zemlyanskiy, David Uthus, Mandy Guo, James Lee-Thorp, Yi Tay, Yun-Hsuan Sung, Sumit Sanghai | 2023-03-17T03:28:17Z | http://arxiv.org/abs/2303.09752v3 | # CoLT5: Faster Long-Range Transformers with Conditional Computation
###### Abstract
Many natural language processing tasks benefit from long inputs, but processing long documents with Transformers is expensive - not only due to quadratic attention complexity but also from applying feedforward and projection layers to every token. However, not all tokens are equally important, especially for longer documents. We propose CoLT5, a long-input Transformer model that builds on this intuition by employing conditional computation, devoting more resources to important tokens in both feedforward and attention layers. We show that CoLT5 achieves stronger performance than LongT5 with much faster training and inference, achieving SOTA on the long-input SCROLLS benchmark. Moreover, CoLT5 can effectively and tractably make use of extremely long inputs, showing strong gains up to 64k input length.
## 1 Introduction
Many natural language processing tasks, such as summarization (Cohan et al., 2018) or question answering over long documents (Joshi et al., 2017), require machine learning models to encode long-form text. Processing long documents with a Transformer model is computationally expensive, both because attention cost scales quadratically with input length and because feedforward and projection layers have to be applied to each input token.
Over the past few years, many "efficient Transformer" approaches have been proposed that reduce the cost of the attention mechanism over long inputs (Child et al., 2019; Ainslie et al., 2020; Beltagy et al., 2020; Zaheer et al., 2020; Wang et al., 2020; Tay et al., 2021; Guo et al., 2022). However, especially for larger models, the feedforward and projection layers actually make up the majority of the computational burden and can render processing long inputs intractable.
This paper presents CoLT5 (Conditional LongT5), a new family of models that, building on top of LongT5 (Guo et al., 2022), enables fast processing of long inputs by combining architecture improvements for both attention and feedforward layers. CoLT5 is based on the intuition that some tokens are more important than others, and we can achieve better quality for lower cost by devoting more computation to important tokens. Moreover, the fraction of important tokens is likely to diminish with document length, allowing for tractable processing of long documents.
In particular, CoLT5 divides each feedforward layer and each attention layer into a _light branch_ which is applied to all tokens and a _heavy branch_ which is applied to a set of important tokens,
Figure 1: An overview of a CoLT5 Transformer layer with conditional computation. All tokens are processed by light attention and MLP layers, while \(q\) routed query tokens perform heavier attention over \(v\) routed key-value tokens and \(m\) routed tokens are processed by a heavier MLP.
selected specifically for that input and component. The light feedforward branch has lower hidden dimension than standard LongT5 while the heavy feedforward branch has higher hidden dimension. The light attention branch has fewer heads and applies only local attention, while the heavy attention branch performs full attention over another separately selected set of important tokens. Figure 1 provides an overview of the CoLT5 conditional mechanism.
Finally, CoLT5 also includes two other modifications to the LongT5 architecture. CoLT5 adds multi-query cross-attention (Shazeer, 2019), significantly speeding up inference. CoLT5 also employs the UL2 (Tay et al., 2022) pre-training objective, which we demonstrate allows for in-context learning over long inputs.
We show that CoLT5 performs much faster finetuning and inference with similar or better model quality, improving over LongT5 on arXiv summarization (Cohan et al., 2018) and TriviaQA question answering (Joshi et al., 2017) datasets and achieving SOTA on the SCROLLS benchmark (Shaham et al., 2022). Moreover, CoLT5 achieves further gains in quality and speed for tasks with extremely long inputs (64k tokens), with less-than-linear scaling of "focus" tokens.
## 2 Background
**Transformer FLOPs.** CoLT5 follows an extensive line of work in attempting to reduce the computational cost of Transformer models, particularly over long inputs. The computational burden of Transformer models has several distinct elements, and different approaches focus on reducing the cost of different components. For that reason, it is helpful to start by providing a breakdown of the computational cost of Transformer components. Table 1 shows the FLOPs\({}^{1}\) for each component of a Transformer encoder layer (Kaplan et al., 2020).
Footnote 1: Each multiply-add is counted as a single FLOP.
**Sparse attention.** The first challenge of applying a Transformer to a long input is that the FLOPs of the self-attention mechanism scales quadratically in the input length, becoming intractable for long inputs. A large body of work focuses on reducing self-attention cost, restricting attention between a subset of inputs (Child et al., 2019; Ainslie et al., 2020; Beltagy et al., 2020; Zaheer et al., 2020; Wang et al., 2020; Guo et al., 2022) or to a subset of layers (Zemlyanskiy et al., 2021). In LongT5 (Guo et al., 2022), the most closely related model to CoLT5, tokens attend within a local window as well as to a mean-pooled summary representation for each block of 16 tokens in the
| **Encoder Layer Component** | **FLOPs** |
| --- | --- |
| Vanilla self-attention computation | \(2n^{2}d\) |
| Attention QKV and output projections | \(4nd^{2}\) |
| Feedforward layer | \(8nd^{2}\) |
| LongT5 local attention computation | \(2nwd\) |
| LongT5 global attention computation | \(\frac{n^{2}}{8}d\) |

Table 1: Computational cost of encoder-layer Transformer components measured in FLOPs. \(n\) is the input length, \(d\) is the model dimensionality, and \(w\) is the size of the local attention window.
Figure 2: **CoLT5 achieves stronger performance than LongT5 at any speed.** Average performance on all datasets as a function of inference and fine-tuning time per sample (ms) for LongT5 and CoLT5 Base, Large, and XL models. LongT5 does not use MQA, but we report speed as though it had for a conservative baseline.
input. LongT5 attention leads to sharply reduced (though still non-negligible) FLOPs (Table 1).
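To make the relative magnitudes in Table 1 concrete, a small helper (our own illustration, not from the paper) can evaluate the per-component counts:

```python
def encoder_layer_flops(n: int, d: int, w: int) -> dict:
    """Per-component encoder-layer FLOPs from Table 1 (1 multiply-add = 1 FLOP)."""
    return {
        "vanilla_self_attention": 2 * n**2 * d,
        "attention_qkv_and_output_projections": 4 * n * d**2,
        "feedforward": 8 * n * d**2,
        "longt5_local_attention": 2 * n * w * d,
        "longt5_global_attention": n**2 * d // 8,
    }

# For a 16k-token input with d = 1024 and local window w = 128, the projection
# and feedforward terms dominate once quadratic attention has been sparsified.
print(encoder_layer_flops(n=16384, d=1024, w=128))
```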
**Conditional computation.** After applying a sparse attention mechanism, the feedforward and attention projection layers account for the majority of the FLOPs. These costs scale with the length of the input, such that processing long inputs is still prohibitively expensive. A common approach to reduce the remaining cost is to employ some form of _conditional computation_, avoiding applying all model parameters to the entire input. CALM (Schuster et al., 2022) applies a varying number of decoder layers to each decoded token, outputting a token early if the model is confident in its prediction. Mixture-of-Experts models (Shazeer et al., 2017; Fedus et al., 2021; Zoph et al., 2022) route inputs through a small proportion of expert sub-modules, bringing to bear only the parameters most relevant to the input. In the context of retrieval-augmented models, numerous works rerank retrieved passages by their relevance to the query and process only the highest scoring passages (Mao et al., 2021; Wang et al., 2018; Yu et al., 2022) and vary the number of processed passages depending on model confidence (Kratzwald and Feuerriegel, 2018; Varshney et al., 2022). Concurrent work CoDA (Lei et al., 2023) employs a related conditional computation mechanism, designed for efficient adaptation rather than modeling long documents.
**Device utilization.** FLOPs do not tell the whole story, as modeling choices can influence the effective speed of operations achieved by accelerators. For long text inputs, autoregressive decoder inference is very slow due to memory bandwidth constraints from repeatedly loading the long sequence of keys and values (Shazeer, 2019; de Jong et al., 2022). Shazeer (2019) introduces multi-query attention (MQA), sharing heads for keys and values to reduce memory bandwidth overhead. Pope et al. (2022) studies how to shard large models, especially in the context of MQA, to obtain optimal device utilization and therefore speed.
**Training objectives.** T5 introduced the span corruption objective (Raffel et al., 2020), a modification of masked language modeling (Devlin et al., 2019). LongT5 made use of the PEGASUS (Zhang et al., 2020) sentence reconstruction objective for improved summarization performance. Tay et al. (2022) proposes UL2, a mixture of span corruption, prefix, and causal language modeling, and shows that it leads to strong performance on both short-output and generative tasks.
## 3 CoLT5
### Conditional computation
As discussed in the previous section, a large proportion of Transformer FLOPs arise from feedforward and projection layers that scale with the length of the input sequence. Therefore, LongT5 training and inference on long documents remains expensive.
CoLT5 further reduces the cost of processing long documents through _conditional computation_, following the intuition that some tokens are more important and therefore benefit more than others from heavy computation. First, some types of tokens may inherently require less computation, such as filler words and punctuation. Second, especially in long documents, large parts of the input may not be relevant to the current question, task, or processing stage.
The CoLT5 conditional computation mechanism consists of three components: routing modules, conditional feedforward layers, and conditional attention layers. All tokens are processed by standard, lightweight attention and feedforward layers. Routing modules additionally select important tokens from an input at each attention or feedforward layer, and a heavy conditional layer applies additional computation to routed tokens. This section describes each component in detail. Figure 1 provides an overview of the CoLT5 conditional computation mechanism, and Table 2 compares CoLT5 and LongT5 FLOPs.
**Routing.** In order to separately select important tokens for each component in each layer, we need a _learnable_ and _tractable_ routing function. We follow the simple three-step mechanism from Lei
| **Model** | **Encoder Layer FLOPs** |
| --- | --- |
| T5 | \(12nd^{2}+2n^{2}d\) |
| LongT5 | \(12nd^{2}+\frac{n^{2}}{8}d\) |
| CoLT5 | \(7\frac{1}{4}nd^{2}+\frac{n^{2}}{84}d\) |

Table 2: **CoLT5 uses significantly fewer FLOPs than LongT5.** Comparison of approximate encoder-layer total FLOPs between T5, LongT5, and CoLT5. CoLT5 FLOPs rounded to readable fractions.
et al. (2023): (1) multiply inputs with a learned embedding to obtain routing scores, (2) normalize, and (3) select the top-\(k\) highest scoring inputs.
Let \(X_{i}\) be the representation of token \(i\), and \(u\) a \(d\)-dimensional learnable embedding. Then the routing score of token \(i\) is
\[s_{i}=X_{i}\cdot u\]
We select the top-\(k\) highest scoring inputs. In order to provide a learning signal to the scoring embedding, we make sure the contribution of the routed tokens to the layer update is _scaled_ according to the routing score, as will be seen later. To provide a better distributed signal to all tokens, we also globally normalize the routing scores to sum up to the number of desired routed tokens using a generalized softmax, resulting in normalized scores \(\tilde{s}_{i}\). Each CoLT5 layer has three independent routers, one each for the feedforward layer, attention queries, and attention key-values.
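A minimal NumPy sketch of this router (our own illustration; the exact form of the paper's generalized softmax is not spelled out here, so we rescale an ordinary softmax to sum to \(k\), which matches the stated normalization property):

```python
import numpy as np

def route(X, u, k):
    """Score tokens, normalize the scores, and select the top-k (Sec. 3.1).

    X: (n, d) token representations; u: (d,) learned routing embedding.
    Returns routed indices and scores s~ that are zero for non-routed tokens.
    """
    s = X @ u                              # routing scores s_i = X_i . u
    w = np.exp(s - s.max())
    s_norm = k * w / w.sum()               # normalized scores summing to k
    idx = np.argpartition(-s, k)[:k]       # indices of the k highest scores
    s_tilde = np.zeros_like(s)
    s_tilde[idx] = s_norm[idx]             # zero out non-routed tokens
    return idx, s_tilde
```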
**Conditional Feedforward.** Intuitively, some token representations may benefit from more processing than others. The CoLT5 conditional feedforward layer applies an additional high-capacity feedforward layer to selected tokens. In particular, let \(X_{i}\) be the model state of the \(i\)th token and \(\tilde{s}_{i}\) denote the normalized routing score (set to 0 for non-routed tokens). Then the feedforward update for CoLT5 is given by
\[X_{i}=X_{i}+\text{FFd}_{\text{Light}}(X_{i})+\tilde{s}_{i}\cdot\text{FFd}_{ \text{Heavy}}(X_{i})\]
The light and heavy feedforward branches differ only in their hidden dimension, with the light branch having smaller hidden dimension than the standard T5 feedforward layer and the heavy branch larger. Let \(n\) denote the number of input tokens, \(m\) the number of selected tokens, and \(r_{L}\) and \(r_{H}\) the ratios of light and heavy hidden dimension to standard T5 hidden dimension. Then the FLOPs of the CoLT5 layer are given by
\[\text{FLOPs}_{\text{FFd}}=\underbrace{8nr_{L}d^{2}}_{\text{Light branch}}+ \underbrace{8mr_{H}d^{2}}_{\text{Heavy branch}}\]
We set the light and heavy ratios as \(r_{L}=\frac{1}{2}\) and \(r_{H}=4\), half and quadruple the standard T5 hidden dimension respectively. For our main experiments, a fraction \(\frac{1}{16}\) of tokens are routed to the heavy branch. As a result the approximate FLOPs from the CoLT5 feedforward layer equals
\[\text{FLOPs}_{\text{FFd}}=\underbrace{4nd^{2}}_{\text{Light branch}}+ \underbrace{2nd^{2}}_{\text{Heavy branch}}\]
consuming 75% of the FLOPs of a standard T5 feedforward layer.
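Putting the router and the two branches together, a self-contained toy sketch of this update (ours; the dimensions and random weights are arbitrary stand-ins for the learned modules):

```python
import numpy as np

def conditional_ffd(X, u, ffd_light, ffd_heavy, m):
    """X_i <- X_i + FFd_light(X_i) + s~_i * FFd_heavy(X_i), with s~_i = 0
    for tokens not routed to the heavy branch (Sec. 3.1)."""
    s = X @ u
    idx = np.argpartition(-s, m)[:m]                  # top-m routed tokens
    w = np.exp(s[idx] - s[idx].max())
    s_tilde = m * w / w.sum()                         # normalized routing scores
    out = X + ffd_light(X)                            # light branch on all tokens
    out[idx] += s_tilde[:, None] * ffd_heavy(X[idx])  # heavy branch on routed ones
    return out

rng = np.random.default_rng(0)
n, d, m = 16, 8, 4
# toy light/heavy MLPs with hidden sizes d/2 and 4d (r_L = 1/2, r_H = 4)
W1l, W2l = rng.normal(size=(d, d // 2)), rng.normal(size=(d // 2, d))
W1h, W2h = rng.normal(size=(d, 4 * d)), rng.normal(size=(4 * d, d))
ffd_light = lambda x: np.maximum(x @ W1l, 0.0) @ W2l
ffd_heavy = lambda x: np.maximum(x @ W1h, 0.0) @ W2h
print(conditional_ffd(rng.normal(size=(n, d)), rng.normal(size=d),
                      ffd_light, ffd_heavy, m).shape)   # (16, 8)
```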
**Conditional Attention.** CoLT5 conditional attention operates on the intuition that most tokens have simple, local interactions, but some tokens benefit from heavier processing and long-range interactions. The CoLT5 conditional attention layer applies an additional high-capacity attention layer that attends from selected query tokens to selected key-value tokens. Let \(\tilde{s}_{i}^{q}\) denote the normalized routing query score for token \(i\), and \(\tilde{s}^{kv}\) the key-value scores for all tokens (set to \(0\) if not routed). Then the attention update for CoLT5 is given by
\[X_{i}=X_{i}+\text{A}_{\text{Light}}(X_{i},X)+\tilde{s}_{i}^{q}\cdot\text{A}_{ \text{Heavy}}(X_{i},\tilde{s}^{kv}X)\]
The light and heavy branches differ in the number of heads and tokens attended to: the light branch has fewer heads and attends to a local context window, while the heavy branch has more heads and attends to all routed key-value tokens. Separately selecting query and key-value tokens also allows the model to differentiate between tokens that _require_ additional information and those that _possess_ such information. Figure 3 shows the CoLT5 attention pattern. Let \(q,v\) be the number of selected query and key-value tokens, \(w\) the size of the local attention window and \(r_{L},r_{H}\) the proportion of light and heavy heads relative to standard T5. Then
Figure 3: An overview of the CoLT5 attention pattern. The light branch performs local attention for each token. In the higher capacity heavy branch \(q\) selected query tokens (2 in the figure) attend to \(v\) separately selected key and value tokens (4 in the figure).
the FLOPs of the CoLT5 attention layer are given by
\[\text{FLOPs}_{\text{Att}} =\underbrace{4n\cdot r_{L}d^{2}}_{\text{Local projection}}+\underbrace{2nw\cdot r_{L}d}_{\text{Local attention}}\] \[+\underbrace{2q\cdot r_{H}d^{2}+2v\cdot r_{H}d^{2}}_{\text{ Global projection}}+\underbrace{2qv\cdot r_{H}d}_{\text{Global attention}}\]
We set the light and heavy head ratios as \(r_{L}=\frac{1}{4}\) and \(r_{H}=\frac{3}{4}\), keeping the total number of heads across the light and heavy branches equal to standard T5 heads. For our main experiments a fraction \(\frac{1}{16}\) of query tokens and \(\frac{1}{8}\) of key-value tokens are routed to the heavy branch. Ignoring local attention computation, we approximate attention FLOPs by\({}^{2}\)
Footnote 2: Global projection and attention FLOPs rounded to readable fractions, exact values are \(\frac{9}{32}\) and \(\frac{3}{256}\). Complexity assumes constant fraction of routed tokens; we show we can do better in practice for extremely long inputs.
\[\text{FLOPs}_{\text{Att}}\approx\underbrace{nd^{2}}_{\text{Local proj.}}+\underbrace{\frac{1}{4}nd^{2}}_{\text{Global proj.}}+\underbrace{\frac{1}{84}n^{2}d}_{\text{Global att.}}\]
with less than half projection FLOPs and order-of-magnitude smaller quadratic length scaling compared to LongT5. Table 2 shows total FLOPs for the CoLT5 layer. In general, we set \(q=m\) and \(v=2m\), and use \(m\) to summarize the number of routed tokens going forward.
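A single-head toy sketch of the attention update (ours; the real model splits heads between the branches in ratio \(r_{L}\!:\!r_{H}\) and scales routed key-values by \(\tilde{s}^{kv}\), both simplified away here):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def conditional_attention(X, Wl, Wh, uq, ukv, q=2, v=4, window=2):
    """X_i <- X_i + A_light(local window) + s~^q_i * A_heavy(routed KVs)."""
    n, d = X.shape
    Ql, Kl, Vl = (X @ W for W in Wl)           # light-branch projections
    out = X.copy()
    for i in range(n):                          # local attention, radius = window
        lo, hi = max(0, i - window), min(n, i + window + 1)
        out[i] += softmax(Ql[i] @ Kl[lo:hi].T / np.sqrt(d)) @ Vl[lo:hi]
    qi = np.argpartition(-(X @ uq), q)[:q]      # routed query tokens
    kvi = np.argpartition(-(X @ ukv), v)[:v]    # routed key-value tokens
    Qh, Kh, Vh = (X @ W for W in Wh)            # heavy-branch projections
    a = softmax(Qh[qi] @ Kh[kvi].T / np.sqrt(d))
    out[qi] += a @ Vh[kvi]                      # heavy update (scores taken as 1)
    return out

rng = np.random.default_rng(0)
X = rng.normal(size=(16, 8))
Wl = [rng.normal(size=(8, 8)) for _ in range(3)]
Wh = [rng.normal(size=(8, 8)) for _ in range(3)]
print(conditional_attention(X, Wl, Wh, rng.normal(size=8), rng.normal(size=8)).shape)
```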
### Multi-query Attention
Conditional computation effectively reduces the computational cost of the encoder. However, for encoder-decoder models with long inputs the majority of inference time is spent in the decoder due to memory bandwidth constraints (Shazeer, 2019; de Jong et al., 2022). We apply multi-query attention (Shazeer, 2019) (MQA) in cross-attention layers for much faster inference.
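A minimal sketch of multi-query cross-attention (ours, not the Flaxformer implementation): queries keep per-head projections, but all heads share a single key/value head, so the K/V cache that dominates decoder memory bandwidth shrinks by the number of heads.

```python
import numpy as np

def multi_query_attention(Xq, Xkv, Wq, Wk, Wv, Wo, n_heads):
    """Queries get n_heads heads; keys/values share one head (Shazeer, 2019)."""
    n, d = Xq.shape
    hd = d // n_heads
    Q = (Xq @ Wq).reshape(n, n_heads, hd)          # (n, h, hd)
    K, V = Xkv @ Wk, Xkv @ Wv                      # (m, hd), shared by all heads
    logits = np.einsum('nhd,md->hnm', Q, K) / np.sqrt(hd)
    a = np.exp(logits - logits.max(-1, keepdims=True))
    a /= a.sum(-1, keepdims=True)                  # softmax over source positions
    out = np.einsum('hnm,md->nhd', a, V).reshape(n, d)
    return out @ Wo

rng = np.random.default_rng(0)
d, h = 16, 4
args = (rng.normal(size=(5, d)), rng.normal(size=(9, d)),
        rng.normal(size=(d, d)), rng.normal(size=(d, d // h)),
        rng.normal(size=(d, d // h)), rng.normal(size=(d, d)))
print(multi_query_attention(*args, n_heads=h).shape)   # (5, 16)
```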
### 3.3 UL2
The UL2 pre-training objective (Tay et al., 2022) combines different denoising objectives and has been shown to lead to improved in-context learning. We train CoLT5 on UL2 instead of PEGASUS (Zhang et al., 2020), endowing CoLT5 with in-context learning capabilities.
## 4 Experiments
In order to evaluate CoLT5, we perform the following experiments: (1) we compare CoLT5 and LongT5 on a collection of long-input datasets using input length of 16k tokens; (2) we evaluate CoLT5 on extremely long inputs up to 64k tokens and compare scaling against LongT5; (3) we demonstrate CoLT5's few-shot capability, investigating how performance changes as input length and number of shots increase; (4) we perform a series of ablations to understand the effect of individual CoLT5 components; and (5) we investigate empirical routing patterns. The remainder of the section outlines our experimental setup and then describes each of the experiments above.
### Experimental setup
**Configurations.** CoLT5 is based on the T5.1.1 architecture (Raffel et al., 2020), implemented with JAX (Bradbury et al., 2018), Flax (Heek et al., 2020), and Flaxformer\({}^{3}\). Following LongT5, we
| Model | Avg | Inf (samp/s) | FT (samp/s) | TQA F1 | NQA F1 | QAS F1 | QuAL EM | CNLI EM | arXiv R\({}_{\text{gm}}\) | SumS R\({}_{\text{gm}}\) | QMS R\({}_{\text{gm}}\) | GovR R\({}_{\text{gm}}\) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| LongT5-B | 43.1 | 0.6 / 7.4 | 3.7 | 82.2 | 23.0 | 46.6 | 37.9 | 85.6 | 35.4 | 19.2 | 20.4 | 37.7 |
| CoLT5-B | 42.4 | 11.2 | 6.5 | 82.4 | 23.3 | 42.1 | 36.5 | 86.5 | 35.3 | 18.7 | 18.4 | 37.9 |
| LongT5-L | 45.3 | 0.3 / 3.0 | 1.3 | 84.2 | 27.2 | 52.3 | 40.6 | 87.3 | 35.7 | 19.1 | 21.4 | 39.5 |
| CoLT5-L | 45.3 | 5.0 | 2.0 | 84.5 | 27.7 | 49.8 | 39.9 | **88.7** | 35.9 | **20.5** | 21.0 | 39.7 |
| LongT5-XL | 46.6 | 0.2 / 1.2 | 0.4 | 85.3 | 29.3 | 53.1 | 46.0 | 88.2 | 35.9 | 19.4 | 21.3 | **40.5** |
| CoLT5-XL | **47.4** | 2.3 | 0.5 | **86.1** | **31.1** | **53.9** | **48.1** | 88.4 | **36.1** | 20.0 | **22.5** | **40.5** |

Table 3: Performance comparison of CoLT5 and LongT5 Base, Large and XL models on question-answering datasets TriviaQA (TQA), NarrativeQA (NQA), QASPER (QAS), and QuALITY (QuAL), NLI dataset ContractNLI (CNLI), and summarization datasets arXiv, SummScreenFD (SumS), QMSum (QMS), and GovReport (GovR). SCROLLS results are on the leaderboard test set, where CoLT5-XL achieves SOTA. Average speed is reported in samples per second for inference (Inf) and fine-tuning (FT). LongT5 does not use MQA but inference speed is reported without/with MQA for a conservative baseline. R\({}_{\text{gm}}\) stands for the geometric mean of ROUGE-1,2,L.
experiment with Base, Large, and XL model sizes. CoLT5 models use the same embedding dimension, number of layers, and total attention heads as corresponding LongT5 models of the same size, with more overall parameters (but less compute) due to the conditional branch. See Appendix B for additional details on model configuration.
**Pre-training.** We pre-train CoLT5 for 1M steps on a variant of the UL2 objective [14] with batch size 256, input length 4096, and output length 910. In particular, our mixture contains four objectives in equal proportion: prefix-LM with noise rate 0.5, and span corruption [15] with noise rate 0.15 and average span lengths 3, 8, and 64. We use the Adafactor optimizer [13] with the T5.1.1 inverse square root learning rate schedule and no dropout. CoLT5 is trained with the T5X [12] framework. For pre-training, we route \(m=512\) tokens, \(\frac{1}{8}\)th of the input length.
**Fine-tuning.** For fine-tuning we use a constant learning rate of 0.001, batch size 128, and dropout rate 0.1 for all tasks. Main results use input length of 16384 for all datasets other than ContractNLI, which uses 8192. Question answering datasets use output length 128 and summarization datasets use output length 512, except for GovRep which uses output length 1024. We route \(m=1024\) tokens, \(\frac{1}{16}\)th of the input length. We train until convergence and select the checkpoint with the highest dev performance. We use greedy decoding for inference.
**Data.** We evaluate CoLT5 on TriviaQA [15], arXiv [10], and the SCROLLS benchmark [13]. SCROLLS contains question-answering datasets NarrativeQA [14], QASPER [12], QuALITY [15], NLI dataset ContractNLI [14], and summarization datasets SummScreenFD [10], QMSum [15], and GovReport [11]. Table 4 provides an overview of the size and input length for each dataset.
**Timing.** We report time per sample per TPUv4 chip, as measured by xprof [12]. For inference we use a single TPUv4 with batch size 16 or the largest that fits in memory. For fine-tuning we profile with 8 TPUv4 chips, sharded separately for each model to maximize throughput.
### Main results
Figure 2 compares the quality-speed trade-off for LongT5\({}^{4}\) and CoLT5, showing that CoLT5 is better at any speed. For 16k input length, CoLT5 matches or exceeds LongT5 quality for Large and XL with 35-75% training speedup and 50-100% inference speedup on top of the order-of-magnitude inference speedup from MQA. Encoder speedups are even greater (Appendix D). CoLT5-XL also achieves SOTA performance on the SCROLLS benchmark. Table 3 contains all main results.
Footnote 4: Note that LongT5 does not use MQA, but for profiling we add MQA to LongT5 for a conservative baseline.
### Scaling to extremely long inputs
We hypothesize that the advantage of CoLT5 over LongT5 strengthens with input length, as the fraction of important tokens decreases and CoLT5 can route a greater proportion of important tokens to
| **Dataset** | **Type** | **Samples** | **Median** | **90%** |
| --- | --- | --- | --- | --- |
| TriviaQA | QA | 157,053 | 8,858 | 28,956 |
| arXiv | Sum | 215,913 | 8,519 | 20,170 |
| NarrativeQA | QA | 71,187 | 57,829 | 176,862 |
| QASPER | QA | 5,692 | 5,472 | 8,657 |
| QuALITY | QA | 6,737 | 7,171 | 8,276 |
| ContractNLI | NLI | 10,319 | 2,148 | 4,485 |
| SummScreen | Sum | 4,348 | 9,046 | 15,172 |
| QMSum | Sum | 1,810 | 14,197 | 27,761 |
| GovRep | Sum | 19,402 | 8,841 | 18,835 |

Table 4: Median and 90th percentile input length by dataset measured in SentencePiece tokens.
Figure 4: **CoLT5 effectively scales to extremely long inputs, achieving stronger performance and faster speed than LongT5. F1 on NarrativeQA as a function of inference time per sample for LongT5 and CoLT5 Large models using varying input lengths.**
the heavy branch. Figure 4 compares the quality-speed trade-off for LongT5 and CoLT5 on NarrativeQA, sweeping over input length rather than model size. The number of routed tokens is \(\frac{1}{16}\)th of the input length, except that we do not increase routed tokens going from 32k to 64k. CoLT5 achieves both stronger performance and faster inference speed at all input lengths and is able to effectively make use of extremely long inputs. We note that CoLT5 achieves large quality gains by going from 32k to 64k tokens even while keeping the number of routed tokens constant, providing more evidence for our hypothesis.
### In-context learning
Models trained on the UL2 objective have shown strong few-shot in-context learning (ICL) capabilities even at smaller sizes (Tay et al., 2022). CoLT5 enables tractable inference with long inputs. Here, we leverage this for scaling the number of examples used for in-context learning.
We test the above hypothesis by evaluating few-shot learning performance on Natural Questions (Kwiatkowski et al., 2019) and TriviaQA as a function of input length, using as many examples as fit in the context. We consider the open book setting, such that each example consists of question, context document, and answer. Table 5 shows the number of examples by input length. We evaluate on the full dev set, randomly sampling examples from the training set for each dev sample until no further examples fit in the input length. We found that CoLT5 can perform in-context learning only up to the input length it was trained on, so for these experiments we continued pre-training a CoLT5-Large model on input length 16384 for another 100k steps. For the same reason we route \(m=512\) tokens as in pre-training.
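The packing procedure can be sketched as follows (hypothetical helper of ours; `tok_len` stands in for the SentencePiece length of a formatted example):

```python
import random

def pack_shots(train_examples, tok_len, budget, rng=random.Random(0)):
    """Greedily add randomly sampled (question, context, answer) examples
    until no further example fits in the input-length budget."""
    pool = train_examples[:]
    rng.shuffle(pool)
    packed, used = [], 0
    for ex in pool:
        n = tok_len(ex)
        if used + n > budget:
            break
        packed.append(ex)
        used += n
    return packed
```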
Figure 5 displays CoLT5 few-shot performance as a function of input length, showing that CoLT5 is able to apply its long-input capabilities to extract information from increasing numbers of examples.
### Ablations
This section studies the effect of different choices in the CoLT5 recipe. Table 6 contains results of a series of experiments that change a single component for CoLT5 Base.
**Routing.** First, we note that static routing -- evenly distributing routed tokens over the input -- leads to a massive drop in performance. The importance of routing provides evidence that the model learns to devote capacity to important tokens and the advantage of CoLT5 is not merely a result of additional parameters. Sharing routing decisions for query and KV tokens should be compared with v=q, and leads to a modest reduction in quality and increase in speed.
The optimal number of routed tokens represents a trade-off between improved performance and computational cost of applying heavier layers. Table 6 shows strong gains going from 512 to 1024 (baseline) routed tokens and diminishing returns for further increases.
**Attention.** CoLT5 relies on routing to identify not only tokens that can benefit from important information elsewhere in the input, but also which tokens contain such important information. We study whether CoLT5 is successful in this task by comparing performance with two different attention settings -- v=all, in which routed tokens attend to the entire input, and v=q, which uses an equal number of routed keys and values as queries, rather than twice as many. CoLT5 appears to occupy a sweet spot, as using fewer routed key-values modestly decreases performance at similar speed but attending
Figure 5: **CoLT5 can use its long-input capability to benefit from more shots for in-context learning. Few-shot exact match for CoLT5-Large on Natural Questions and TriviaQA dev sets as a function of input tokens, fitting as many examples as possible. Each example contains question, context, and answer. Inputs length used are 1024, 2048, 4096, 8192, 16384.**
| **Dataset** | **1024** | **2048** | **4096** | **8192** | **16384** |
| --- | --- | --- | --- | --- | --- |
| NQ | 0.1 | 0.7 | 1.7 | 3.4 | 5.6 |
| TriviaQA | 1.6 | 2.3 | 3.8 | 7.0 | 9.8 |

Table 5: Number of Natural Questions and TriviaQA examples that fit in input length.
to all inputs barely helps at sharply increased cost.
**Other.** We compare CoLT5 to LongT5 with multi-query cross-attention, confirming that LongT5 indeed does not achieve an unexpected quality gain from MQA, and our conservative assumptions in Figures 2, 4 are valid. Next, we evaluate multi-head cross-attention for CoLT5, finding that it leads to modestly improved CoLT5 performance. However, as MHA exhibits order-of-magnitude slower inference, MQA is clearly favored. Finally, PEGASUS appears to fine-tune slightly better than UL2, though the difference is small and UL2 enables few-shot learning.
### Routing analysis
It is interesting to ask whether CoLT5 routed tokens line up with what we consider intuitively important tokens in each document. We investigate this question by studying routing patterns of a Large CoLT5 model fine-tuned on TriviaQA. We divide tokens into three categories: (1) question tokens, (2) answer tokens, and (3) other tokens. Figure 6 shows the average fraction of each type of token that is routed through the heavy path for MLP and attention layers on TriviaQA. We note that question and answer tokens are significantly more likely to be routed than other tokens, for feedforward as well as attention queries and keys/values. Appendix F presents more detailed routing analysis; e.g., semantically important tokens are much more likely to be selected in later layers.
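The bookkeeping behind Figure 6 amounts to the following (hypothetical helper of ours, not the paper's analysis code):

```python
import numpy as np

def routed_fraction_by_category(routed_idx, categories):
    """Fraction of tokens in each category ('question', 'answer', 'other')
    that a given router selected; averaging over layers, routers, and
    examples then yields the quantities plotted in Figure 6."""
    routed = np.zeros(len(categories), dtype=bool)
    routed[np.asarray(routed_idx)] = True
    cats = np.asarray(categories)
    return {c: float(routed[cats == c].mean()) for c in np.unique(cats)}

print(routed_fraction_by_category([0, 2], ['question', 'other', 'answer', 'other']))
# -> {'answer': 1.0, 'other': 0.0, 'question': 1.0}
```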
## 5 Conclusion
We propose CoLT5, a new model for long-range inputs that employs conditional computation for higher quality and faster speed. CoLT5 has light feedforward and attention layers that apply to the entire input, as well as heavy branches that are applied only to a subset of important tokens selected by a learned router. We show that CoLT5 achieves stronger performance at any speed compared to LongT5 on a variety of long-input datasets, and can effectively and efficiently make use of extremely long inputs up to 64k tokens.
| Ablation | Model | Avg | Inf (samp/s) | TQA F1 | NQA F1 | QAS F1 | QuAL EM | CNLI EM | arX R\({}_{\text{gm}}\) | SumS R\({}_{\text{gm}}\) | QMS R\({}_{\text{gm}}\) | GovR R\({}_{\text{gm}}\) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Baseline | CoLT5-B | 42.5 | 11.2 | 82.4 | 23.1 | 38.3 | 36.6 | 87.8 | 35.3 | 19.3 | 20.5 | 39.4 |
| Routing | Static | 40.5 | 11.6 | 79.7 | 19.2 | 34.2 | 34.5 | 86.4 | 34.9 | 18.1 | 18.9 | 38.8 |
| Routing | Share QKV | 42.0 | 11.8 | 82.1 | 21.9 | 37.5 | 36.2 | 87.0 | 35.2 | 18.2 | 20.4 | 39.7 |
| Attention | v=all | 42.5 | 9.4 | 82.4 | 22.3 | 38.6 | 37.2 | 87.8 | 35.3 | 19.1 | 20.3 | 39.8 |
| Attention | v=q | 42.3 | 11.5 | 82.5 | 22.5 | 37.3 | 37.0 | 85.9 | 35.2 | 19.0 | 20.5 | 39.7 |
| Routed tokens | m=512 | 41.6 | 12.2 | 81.9 | 22.1 | 37.3 | 35.4 | 84.6 | 35.2 | 18.9 | 19.5 | 39.6 |
| Routed tokens | m=1536 | 42.9 | 10.4 | 82.6 | 23.5 | 39.8 | 37.5 | 87.5 | 35.4 | 19.4 | 20.8 | 40.0 |
| Encoder | LongT5-B | 42.1 | 7.4 | 82.0 | 21.4 | 38.4 | 35.8 | 88.0 | 35.5 | 18.7 | 20.4 | 38.5 |
| Decoder | Multi-head | 42.9 | 0.7 | 82.7 | 22.9 | 40.2 | 35.8 | 87.7 | 35.5 | 19.7 | 21.2 | 40.3 |
| Objective | PEGASUS | 42.8 | 11.2 | 82.6 | 22.6 | 40.5 | 37.3 | 87.3 | 35.3 | 19.6 | 20.8 | 39.6 |

Table 6: CoLT5 ablations. Each experiment modifies a component of the CoLT5 recipe for CoLT5-Base. Static routing divides the input into equal-length blocks and selects the first token in each block to be routed. Shared QKV routing shares routing decisions for queries and keys/values. In v=all the routed queries attend to the entire input, while v=q selects the same number of key and value tokens as query tokens. m=512 and m=1536 use different numbers of routed tokens. LongT5-B uses a LongT5 encoder while retaining other parts of the CoLT5 training recipe such as MQA and the UL2 objective. Multi-head refers to using multi-head cross-attention. The final ablation replaces the UL2 objective with PEGASUS as in LongT5.
Figure 6: Proportion of tokens routed for answer (string match), question, and other tokens by routing component for CoLT5 Large model, averaged over examples in TriviaQA dev set and all layers of model. |
2304.13975 | Prescribing Chern scalar curvatures on noncompact manifolds | In this paper, we investigate the noncompact prescribed Chern scalar
curvature problem which reduces to solve a Kazdan-Warner type equation on
noncompact non-K\"{a}hler manifolds. By introducing an analytic condition on
noncompact manifolds, we establish related existence results. As its another
application, we further give a new proof of a classical multiplicity theorem of
W.M. Ni. | Di Wu, Xi Zhang | 2023-04-27T06:47:53Z | http://arxiv.org/abs/2304.13975v1 | # Prescribing Chern scalar curvatures on noncompact manifolds
###### Abstract.
In this paper, we investigate the noncompact prescribed Chern scalar curvature problem, which reduces to solving a Kazdan-Warner type equation on noncompact non-Kahler manifolds. By introducing an analytic condition on noncompact manifolds, we establish related existence results. As another application, we further give a new proof of a classical multiplicity theorem of W.M. Ni [28].
Key words and phrases: Noncompact manifold, non-Kahler metric, prescribed Chern scalar curvature, Kazdan-Warner type equation, multiplicity of solutions. 2020 Mathematics Subject Classification: 35J60, 53A30, 53C55. The research was supported by the National Key R&D Program of China 2020YFA0713100. Both authors are partially supported by NSF in China No. 12141104. The first author is also supported by the Jiangsu Funding Program for Excellent Postdoctoral Talent 2022ZB282.
We first provide an existence result on the prescribed Chern scalar curvature problem on noncompact non-Kahler manifolds. In fact, we prove
**Theorem 1.1** (See Theorem 2.1).: _Suppose that the Chern scalar curvature \(S^{Ch}_{g}\) satisfies \(\int_{X}S^{Ch}_{g}\operatorname{dvol}_{g}<0\) and \(|S^{Ch}_{g}|\leq\Lambda\phi_{X}\) for a constant \(\Lambda\). Let \(h\) be a smooth nonpositive and nonzero function on \(X\) with \(|h|\leq\Lambda\phi_{X}\); then there exists a bounded and conformally equivalent Hermitian metric \(\tilde{g}\) whose Chern scalar curvature satisfies \(S^{Ch}_{\tilde{g}}=h\)._
**Remark 1.1**.: _An important feature of Theorem 1.1 is that no additional hypothesis is required beyond our Main Assumption. This makes the result applicable to prescribing Chern scalar curvatures on quasi-compact complex manifolds (see Corollary 1.1) and to solving the Kazdan-Warner type equation (1.2) on \(\mathbb{C}\) (see Proposition 4.1), as well as on \(M\times\mathbb{C}\) for any compact complex manifold \(M\) (see Proposition 4.2)._
In the compact case, Theorem 1.1 recovers [16, Theorem 2.5] and [24, Proposition 2.1], which were proved via the standard method of super-sub solutions. The same conclusion was also obtained in [18, Theorem 1.1] and [34, Corollary 1.3] via the flow method, where the extra balanced condition \(d\omega_{g}^{n-1}=0\) is further assumed. In the noncompact case, to prove Theorem 1.1, we employ a combination of heat flow and continuity methods to solve the following Kazdan-Warner type equation
\[\sqrt{-1}\Lambda_{\omega_{g}}\partial\overline{\partial}u+he^{u}=f, \tag{1.2}\]
on noncompact manifolds, where \(f\), \(h\) are two functions satisfying certain conditions. The key point in our argument is the uniform zeroth order estimate on noncompact spaces. Arising from the basic geometric problem on prescribing Gaussian curvatures on Riemannian surfaces, the classical Kazdan-Warner equation [20] takes the form
\[\Delta_{g}u+he^{2u}=f, \tag{1.3}\]
where \(\Delta_{g}\) denotes the Beltrami-Laplacian. Up to a constant multiple, (1.2) coincides with (1.3) for balanced metrics; in general they differ by a gradient term. We also mention that our method can be further utilized to improve the results in [31, 32].
Recall the singular Yamabe problem: given a compact Riemannian manifold \((M,g)\) and a closed subset \(\Sigma\), find a conformally equivalent metric \(\tilde{g}\) on \(M\setminus\Sigma\) which has constant scalar curvature. The Chern-Yamabe problem generalizes naturally to the singular case as well and Corollary 1.1 below shows the problem can be solved in certain cases. As mentioned in [15], it would be interesting to find non-Kahler metrics which simultaneously solve both the singular Yamabe and singular Chern-Yamabe problems.
**Corollary 1.1** (See Proposition 2.1).: _Suppose that \(X=\overline{X}\setminus\Sigma\) is the complement of a complex analytic subset \(\Sigma\) in a compact complex manifold \(\overline{X}\) with complex codimension at least two, \(g=\overline{g}|_{X}\) for a Gauduchon metric \(\overline{g}\) on \(\overline{X}\) and \(\int_{X}S^{Ch}_{g}\operatorname{dvol}_{g}<0\). Let \(h\) be a smooth bounded nonpositive and nonzero function on \(X\), then there exists a bounded and conformally equivalent Hermitian metric \(\tilde{g}\) such that \(S^{Ch}_{\tilde{g}}=h\)._
Next we show that Theorem 1.1 can also be used to detect the multiplicity of conformal metrics with prescribed curvatures on the entire plane, which is equivalent to studying the non-uniqueness of solutions to the following semi-linear elliptic equation on \(\mathbb{R}^{2}\):
\[\Delta u+Ke^{2u}=0, \tag{1.4}\]
where \(\Delta=\partial_{x}^{2}+\partial_{y}^{2}\) and \(K\in C^{\infty}(\mathbb{R}^{2})\) is the candidate curvature function. The first result concerning (1.4) seems to be due to Ahlfors [1], who proved that there is no entire solution if \(K\) is a negative constant. Sattinger [29] obtained a more general nonexistence result when \(K\leq 0\) and \(|K|\geq C|x|^{-s}\) at infinity for two positive constants \(C\) and \(s\leq 2\).
By explicitly constructing radially symmetric sub-solutions and super-solutions, together with the method of super-sub solutions on the entire plane, the first nontrivial existence result on (multiple) solutions to (1.4) was obtained by Ni.
**Theorem 1.2** (Ni [28], Theorem 1.3).: _If \(K\in C^{\infty}(\mathbb{R}^{2})\) is nonpositive and nonzero with decay \(|K|\leq C|z|^{-s}\) at infinity, for two constants \(s>2\) and \(C\), then (1.4) possesses infinitely many solutions on \(\mathbb{R}^{2}\) and each of them has the following logarithmic growth at infinity for two constants \(C_{1}\) and \(C_{2}\),_
\[k\log|z|-C_{1}\leq u_{k}\leq k\log|z|+C_{2}. \tag{1.5}\]
**Remark 1.2**.: _A great deal of work has been devoted to understanding the large variety of solutions to (1.4) on \(\mathbb{R}^{2}\) since then; one may consult [4, 8, 9, 10, 11, 12, 13, 14, 21, 23, 25, 26], which is a far from complete list of important contributions._
We now state our multiplicity result.
**Theorem 1.3** (See Proposition 3.2).: _If \(K\in C^{\infty}(\mathbb{R}^{2})\) is nonpositive and nonzero with decay \(|K|\leq\Lambda(1+|z|^{2})^{-l}\) for two constants \(l>1\) and \(\Lambda\), then (1.4) possesses infinitely many solutions on \(\mathbb{R}^{2}\) and they have the forms_
\[2u_{k}=v_{k}+k\log(1+|z|^{2}), \tag{1.6}\]
_with \(v_{k}\in C^{\infty}(\mathbb{R}^{2})\cap L^{\infty}(\mathbb{R}^{2})\), \(|dv_{k}|\in L^{2}(\mathbb{R}^{2})\) and \(k\) constant, subject to_
\[k\in[l-2,l-1)\cap(0,\infty). \tag{1.7}\]
_On the other hand, any solution \(u\) of the form_
\[2u=v+k\log(1+|z|^{2}), \tag{1.8}\]
_with \(v\in C^{\infty}(\mathbb{R}^{2})\cap L^{\infty}(\mathbb{R}^{2})\) and constant \(k\in[l-2,l-1)\cap(0,\infty)\), is exactly \(u_{k}\)._
**Remark 1.3**.: _The method used to prove Theorem 1.3 is different in spirit from those already existing in the literature, and the strategy seems to us more geometric. Indeed, we shall transform the issue of the multiplicity of solutions into constructing infinitely many metrics meeting our Main Assumption, so that Theorem 1.1 can be applied._
To recover Ni's Theorem 1.2 from Theorem 1.3, we may write \(s=2l\) for \(l>1\) and take a large constant \(B\geq 1\) (for which \(|K|\leq C|z|^{-s}\) once \(|z|\geq B\)),
\[\Lambda=\max\{2^{l}C,\max_{|z|\leq B}|K|(1+B^{2})^{l}\}, \tag{1.9}\]
then it is easy to see \(|K|\leq\Lambda(1+|z|^{2})^{-l}\) and hence Theorem 1.3 guarantees the multiplicity of solutions with each of them satisfying
\[\frac{k}{2}\log(1+|z|^{2})-\frac{1}{2}\sup_{\mathbb{R}^{2}}|v_{k}|\leq u_{k} \leq\frac{1}{2}\sup_{\mathbb{R}^{2}}|v_{k}|+\frac{k}{2}\log(1+|z|^{2}), \tag{1.10}\]
and then the logarithmic growth (1.5) at infinity follows.
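As a quick sanity check of this reduction, the bound \(|K|\leq\Lambda(1+|z|^{2})^{-l}\) with the choice (1.9) can be tested numerically. The following Python sketch does this for a hypothetical candidate curvature (the function \(K\) and all parameters are illustrative choices, not data from the paper):

```python
import numpy as np

# Numerical sanity check: |K| <= C|z|^{-s} for |z| >= B with s = 2l implies
# |K| <= Lambda (1+|z|^2)^{-l} for Lambda as in (1.9). K below is hypothetical.
l, C, B = 1.5, 1.0, 1.0
s = 2 * l

def K(r):
    # nonpositive, |K| <= C r^{-s} for r >= B
    return -np.minimum(1.0, C * r**(-s))

r = np.linspace(1e-3, 50.0, 20000)
Lambda = max(2**l * C, np.max(np.abs(K(np.linspace(1e-3, B, 1000)))) * (1 + B**2)**l)
assert np.all(np.abs(K(r)) <= Lambda * (1 + r**2)**(-l) + 1e-12)
print("bound holds with Lambda =", Lambda)
```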
Finally, our results also solve the vortex equation on holomorphic line bundles over some noncompact manifolds; see Section 4. Note that the vortex equation was introduced by Bradlow [5]; its solvability on closed manifolds was characterized via the result in [20], and its higher rank analogue, which involves a stability-like criterion, was studied in [6].
The rest of this paper is organized as follows. In Section 2, we first recall a few basic facts related to the prescribed Chern scalar curvature problem. Then we establish an existence result and illustrate that it applies to some quasi-compact complex manifolds. In Section 3, we first construct certain metrics on the entire plane meeting the Main Assumption and then prove multiplicity results on the prescribed curvature equation. In Section 4, we apply the results to solve the vortex equation on holomorphic line bundles.
## 2. Noncompact prescribed Chern scalar curvature problem
Let \(X\) be an \(n\)-dimensional complex manifold with the natural complex structure
\[J:TX\to TX,\ J^{2}=-\operatorname{id}. \tag{2.1}\]
In the presence of \(J\), the tangent bundle \(TX\) is in particular a complex vector bundle which will be denoted by \((TX,J)\). Since \(J^{2}=-\operatorname{id}\), we have
\[TX\otimes\mathbb{C}=T^{1,0}X\oplus T^{0,1}X, \tag{2.2}\]
where \(T^{1,0}X\) and \(T^{0,1}X\) are the eigenspaces of eigenvalues \(\sqrt{-1}\) and \(-\sqrt{-1}\) respectively. In fact, for any \(x\in X\) we can find vectors \(\{e_{i},Je_{i}\}_{i=1}^{n}\) forming a basis of \(T_{x}X\), and then \(T_{x}^{1,0}X\) (resp. \(T_{x}^{0,1}X\)) is generated by \(\{e_{i}-\sqrt{-1}Je_{i}\}_{i=1}^{n}\) (resp. \(\{e_{i}+\sqrt{-1}Je_{i}\}_{i=1}^{n}\)). Let us write
\[A^{p,q}(X)=\wedge^{p}(T^{1,0}X)^{*}\otimes\wedge^{q}(T^{0,1}X)^{*}. \tag{2.3}\]
Then the usual exterior differential operator can be written as \(d=\partial+\overline{\partial}\) with
\[\partial:A^{p,q}(X)\to A^{p+1,q}(X),\ \overline{\partial}:A^{p,q}(X)\to A^{p,q+1}(X). \tag{2.4}\]
We shall often identify \(TX\) with \(T^{1,0}X\) via
\[e_{i}\leftrightarrow\frac{1}{\sqrt{2}}(e_{i}-\sqrt{-1}Je_{i}),\ Je_{i} \leftrightarrow\frac{\sqrt{-1}}{\sqrt{2}}(e_{i}+\sqrt{-1}Je_{i}),\ i=1,...,n. \tag{2.5}\]
Given a Hermitian metric \(g\) on \((X,J)\), that is, a Riemannian metric satisfying \(g(a,b)=g(Ja,Jb)\) for any \(a,b\in TX\), we can equip \((TX,J)\) with a Hermitian structure
\[H_{g}=g-\sqrt{-1}\omega_{g}, \tag{2.6}\]
where \(\omega_{g}(a,b)=g(Ja,b)\) is the associated fundamental form. Using (2.5) and (2.6), we know that \(T^{1,0}X\) inherits a Hermitian structure given by
\[H_{g}(E_{i},E_{j})=g(E_{i},\overline{E_{j}}), \tag{2.7}\]
where \(E_{i}=e_{i}-\sqrt{-1}Je_{i}\), \(\overline{E}_{j}=e_{j}+\sqrt{-1}Je_{j}\) and \(g\) has been extended by \(\mathbb{C}\)-linearity. Hence \((T^{1,0}X,H_{g})\) is a Hermitian vector bundle equipped with a holomorphic structure induced by \(J\), and there is a unique connection \(\nabla\) (called the Chern connection) on \(T^{1,0}X\) which is compatible with both \(H_{g}\) and the holomorphic structure.
In local holomorphic coordinates, we write
\[\omega_{g}=\sqrt{-1}g_{i\overline{j}}dz^{i}\wedge d\overline{z}^{j},\ g_{i \overline{j}}=g(\frac{\partial}{\partial z^{i}},\frac{\partial}{\partial \overline{z}^{j}}). \tag{2.8}\]
The Chern connection \(\nabla\) is characterized by the connection \(1\)-form
\[A=\partial g\cdot g^{-1} \tag{2.9}\]
where \(g=(g_{i\overline{j}})_{n\times n}\) is regarded as a Hermitian matrix. Precisely, we have
\[\nabla\frac{\partial}{\partial z_{i}}=\frac{\partial g_{i\overline{l}}}{ \partial z^{k}}\bar{g}^{\overline{l}j}\frac{\partial}{\partial z^{j}}\otimes dz ^{k}. \tag{2.10}\]
From this, it is known that \(\nabla\) is torsion-free if and only if the Hermitian metric \(g\) is Kahler (that is, \(d\omega_{g}=0\)). The curvature \(R_{\nabla}\) is given by the curvature \(2\)-form
\[\Omega=dA-A\wedge A=\overline{\partial}(\partial g\cdot g^{-1}). \tag{2.11}\]
Precisely, the curvature tensor has components
\[R_{i\overline{j}k\overline{l}}=-\frac{\partial^{2}g_{k\overline{l}}}{ \partial z^{i}\partial\overline{z}^{j}}+g^{\overline{q}p}\frac{\partial g_{p \overline{l}}}{\partial\overline{z}^{j}}\frac{\partial g_{k\overline{q}}}{ \partial z^{i}}, \tag{2.12}\]
where \(R_{i\overline{j}k\overline{l}}=g([\nabla_{\frac{\partial}{\partial z^{i}}}, \nabla_{\frac{\partial}{\partial\overline{z}^{j}}}]\frac{\partial}{\partial z ^{k}}-\nabla_{[\frac{\partial}{\partial z^{i}},\frac{\partial}{\partial \overline{z}^{j}}]}\frac{\partial}{\partial z^{k}},\frac{\partial}{\partial \overline{z}^{l}})\). Then the first Chern-Ricci form
\[Ric^{(1)}=\frac{\sqrt{-1}}{2\pi}\operatorname{tr}\Omega=\frac{\sqrt{-1}}{2\pi }R^{(1)}_{i\overline{j}}dz^{i}\wedge d\overline{z}^{j},\ R^{(1)}_{i\overline{j }}=-\frac{\partial^{2}\log\det g}{\partial z^{i}\partial\overline{z}^{j}}, \tag{2.13}\]
represents the first Chern class \(c_{1}(T^{1,0}X)\). The Chern scalar curvature is defined by
\[S^{Ch}_{g}=g^{\overline{j}i}R^{(1)}_{i\overline{j}}=-\sqrt{-1}\Lambda_{\omega_ {g}}\partial\overline{\partial}\log\det g, \tag{2.14}\]
where \(\Lambda_{\omega_{g}}\) denotes the adjoint of the operator of multiplication by \(\omega_{g}\).
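For concreteness, the definitions (2.13)-(2.14) can be checked symbolically on a simple test case. The sketch below (an illustration, not part of the paper's argument) computes the Chern scalar curvature of the metric with single coefficient \(g_{1\overline{1}}=(1+|z|^{2})^{-2}\) on \(\mathbb{C}\) and returns the constant \(2\):

```python
import sympy as sp

# Chern scalar curvature of g_{1 1bar} = (1+|z|^2)^{-2} on C (n = 1),
# following (2.13)-(2.14); the expected output is the constant 2.
z, zb = sp.symbols('z zbar')
g = (1 + z*zb)**(-2)               # det g = g_{1 1bar} in dimension one
R = -sp.diff(sp.log(g), z, zb)     # R^(1)_{1 1bar}
S = sp.simplify(R / g)             # S^Ch = g^{bar 1 1} R^(1)_{1 1bar}
print(S)                           # -> 2
```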
Taking a conformal change \(\tilde{g}=e^{\frac{u}{n}}g\), the Chern scalar curvature changes as follows,
\[S^{Ch}_{\tilde{g}}=e^{-\frac{u}{n}}(S^{Ch}_{g}-\sqrt{-1}\Lambda_{\omega_{g}} \partial\overline{\partial}u). \tag{2.15}\]
Given a smooth function \(h\) on \(X\), \(S^{Ch}_{\tilde{g}}=h\) if and only if
\[\sqrt{-1}\Lambda_{\omega_{g}}\partial\overline{\partial}u+he^{\frac{u}{n}}=S^ {Ch}_{g}. \tag{2.16}\]
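The transformation law (2.15) itself can be verified symbolically in complex dimension one, where \(\sqrt{-1}\Lambda_{\omega_{g}}\partial\overline{\partial}f=f_{z\overline{z}}/g_{1\overline{1}}\); in the sketch below \(u\) and \(g\) are generic symbolic placeholders:

```python
import sympy as sp

# Check of (2.15) for n = 1: the curvature of e^u g equals
# e^{-u} (S_g - u_{z zbar} / g). The output should be 0.
z, zb = sp.symbols('z zbar')
g = sp.Function('g', positive=True)(z, zb)
u = sp.Function('u')(z, zb)

S  = -sp.diff(sp.log(g), z, zb) / g                        # (2.14), n = 1
St = -sp.diff(sp.log(sp.exp(u)*g), z, zb) / (sp.exp(u)*g)  # curvature of e^u g
rhs = sp.exp(-u) * (S - sp.diff(u, z, zb) / g)             # right side of (2.15)
print(sp.simplify(St - rhs))                               # -> 0
```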
Now Theorem 1.1 follows from Theorem 2.1 below.
**Theorem 2.1**.: _Let \(f\) and \(h\) be two smooth nonzero functions on \(X\) such that_
\[\int_{X}f\operatorname{dvol}_{g}<0,\ |f|\leq\Lambda\phi_{X},\ -\Lambda\phi_{X} \leq h\leq 0, \tag{2.17}\]
_for a constant \(\Lambda\), then there exists a bounded smooth function \(u\) such that \(|du|\in L^{2}(X,g)\) and it solves the following Kazdan-Warner type equation on whole \(X\):_
\[\sqrt{-1}\Lambda_{\omega_{g}}\partial\overline{\partial}u+he^{u}=f. \tag{2.18}\]
Proof.: Let \(\{X_{j}\}\) be an exhaustion sequence of closed submanifolds with nonempty boundaries (see [27, Lemma 2.31]). For any \(X_{j}\) and \(\epsilon\in[0,1]\), we consider
\[\left\{\begin{array}{l}\frac{\partial\overline{u}_{\epsilon,j}}{\partial t} =\sqrt{-1}\Lambda_{\omega_{g}}\partial\overline{\partial}\overline{u}_{ \epsilon,j}+he^{\overline{u}_{\epsilon,j}}-f-\epsilon\overline{u}_{\epsilon,j},\\ \overline{u}_{\epsilon,j}(0)=0,\ \overline{u}_{\epsilon,j}|_{\partial X_{j}}=0.\end{array}\right. \tag{2.19}\]
It is not hard to show convergence of the flow as \(t\to\infty\), and hence it yields a solution \(u_{\epsilon,j}\) to the Dirichlet problem (see [31]):
\[\sqrt{-1}\Lambda_{\omega_{g}}\partial\overline{\partial}u_{\epsilon,j}+he^{u_{ \epsilon,j}}=f+\epsilon u_{\epsilon,j},\ u_{\epsilon,j}|_{\partial X_{j}}=0,\ \epsilon\in[0,1]. \tag{2.20}\]
Since \(x\leq xe^{x}\) for \(x\in\mathbb{R}\) and \(h\leq 0\), we compute
\[\sqrt{-1}\Lambda_{\omega_{g}}\partial\overline{\partial}u_{ \epsilon,j}^{2} =2u_{\epsilon,j}\sqrt{-1}\Lambda_{\omega_{g}}\partial\overline{ \partial}u_{\epsilon,j}+2|\partial u_{\epsilon,j}|^{2}\] \[\geq 2u_{\epsilon,j}(f+\epsilon u_{\epsilon,j}-he^{u_{\epsilon, j}})\] \[\geq 2u_{\epsilon,j}(f+\epsilon u_{\epsilon,j}-h)\] \[\geq 2|u_{\epsilon,j}|(\epsilon|u_{\epsilon,j}|-|f|-|h|), \tag{2.21}\]
and therefore
\[|u_{\epsilon,j}|\leq\frac{||f||_{L^{\infty}}+||h||_{L^{\infty}}}{\epsilon} \leq\frac{2\Lambda}{\epsilon}||\phi_{X}||_{L^{\infty}}. \tag{2.22}\]
By the Gauduchon condition and \(h\leq 0\), integrating by parts gives
\[\int_{X_{j}}|du_{\epsilon,j}|^{2}\,\mathrm{dvol}_{g} =2\int_{X_{j}}\sqrt{-1}\Lambda_{\omega_{g}}(\partial u_{\epsilon, j}\wedge\overline{\partial}u_{\epsilon,j})\,\mathrm{dvol}_{g}\] \[=2\int_{X_{j}}\sqrt{-1}\partial u_{\epsilon,j}\wedge\overline{ \partial}u_{\epsilon,j}\wedge\frac{\omega_{g}^{n-1}}{(n-1)!}\] \[=\int_{X_{j}}-2\sqrt{-1}u_{\epsilon,j}\partial\overline{\partial}u_{\epsilon,j}\wedge\frac{\omega_{g}^{n-1}}{(n-1)!}+\sqrt{-1}\overline{\partial}u_{\epsilon,j}^{2}\wedge\frac{\partial\omega_{g}^{n-1}}{(n-1)!}\] \[=\int_{X_{j}}-2u_{\epsilon,j}\sqrt{-1}\Lambda_{\omega_{g}} \partial\overline{\partial}u_{\epsilon,j}\,\mathrm{dvol}_{g}+\sqrt{-1} \overline{\partial}\Big{(}u_{\epsilon,j}^{2}\frac{\partial\omega_{g}^{n-1}}{(n-1)!}\Big{)}\] \[=2\int_{X_{j}}u_{\epsilon,j}(he^{u_{\epsilon,j}}-f-\epsilon u_{ \epsilon,j})\,\mathrm{dvol}_{g}\] \[\leq 2\int_{X_{j}}u_{\epsilon,j}(h-f)\,\mathrm{dvol}_{g}\] \[\leq\frac{8\Lambda^{2}}{\epsilon}||\phi_{X}||_{L^{\infty}}\int_{ X}\phi_{X}\,\mathrm{dvol}_{g}\,. \tag{2.23}\]
Thanks to the zeroth order estimate (2.22) and standard elliptic estimates, for any positive \(\epsilon\) we have a subsequence \(\{u_{\epsilon,j_{\epsilon}}\}_{j_{\epsilon}\in\mathbb{N}}\) converging in \(C^{\infty}_{\mathrm{loc}}\)-topology to \(u_{\epsilon}\) solving the following equation on whole \(X\):
\[\sqrt{-1}\Lambda_{\omega}\partial\overline{\partial}u_{\epsilon}+he^{u_{ \epsilon}}=f+\epsilon u_{\epsilon}, \tag{2.24}\]
and satisfying
\[|u_{\epsilon}|\leq\frac{2\Lambda}{\epsilon}||\phi_{X}||_{L^{\infty}}, \tag{2.25}\]
\[||du_{\epsilon}||_{L^{2}}\leq\left(\frac{8\Lambda^{2}}{\epsilon}||\phi_{X}||_ {L^{\infty}}\int_{X}\phi_{X}\,\mathrm{dvol}_{g}\right)^{\frac{1}{2}}. \tag{2.26}\]
We shall show that \(\{u_{\epsilon}\}_{\epsilon>0}\) must subconverge to some \(u_{0}\) solving the desired equation. It suffices to prove the uniform zeroth order estimate. To this end, we compute
\[\begin{split}\sqrt{-1}\Lambda_{\omega_{g}}\partial\overline{ \partial}\log(e^{u_{\epsilon}}+e^{-u_{\epsilon}})&=\frac{\sqrt{- 1}\Lambda_{\omega_{g}}\partial\overline{\partial}(e^{u_{\epsilon}}+e^{-u_{ \epsilon}})}{e^{u_{\epsilon}}+e^{-u_{\epsilon}}}-\frac{|\partial(e^{u_{\epsilon }}+e^{-u_{\epsilon}})|^{2}}{(e^{u_{\epsilon}}+e^{-u_{\epsilon}})^{2}}\\ &=\frac{e^{u_{\epsilon}}-e^{-u_{\epsilon}}}{e^{u_{\epsilon}}+e^{-u _{\epsilon}}}\sqrt{-1}\Lambda_{\omega_{g}}\partial\overline{\partial}u_{ \epsilon}+\frac{e^{u_{\epsilon}}+e^{-u_{\epsilon}}}{e^{u_{\epsilon}}+e^{-u_{ \epsilon}}}|\partial u_{\epsilon}|^{2}\\ &-\frac{(e^{u_{\epsilon}}-e^{-u_{\epsilon}})^{2}}{(e^{u_{\epsilon }}+e^{-u_{\epsilon}})^{2}}|\partial u_{\epsilon}|^{2}\\ &\geq\frac{e^{u_{\epsilon}}-e^{-u_{\epsilon}}}{e^{u_{\epsilon}}+e ^{-u_{\epsilon}}}(f+\epsilon u_{\epsilon}-he^{u_{\epsilon}})\\ &=\frac{e^{u_{\epsilon}}-e^{-u_{\epsilon}}}{e^{u_{\epsilon}}+e^{- u_{\epsilon}}}\left(f-h+\epsilon u_{\epsilon}-h(e^{u_{\epsilon}}-1)\right)\\ &\geq\frac{e^{u_{\epsilon}}-e^{-u_{\epsilon}}}{e^{u_{\epsilon}}+e ^{-u_{\epsilon}}}(f-h)\\ &\geq-|f|-|h|\\ &\geq-2\Lambda\phi_{X},\end{split} \tag{2.27}\]
where we have also used \(h\leq 0\), and therefore the Main Assumption implies
\[\begin{split}|u_{\epsilon}|&\leq\log(e^{u_{\epsilon }}+e^{-u_{\epsilon}})\\ &\leq C_{1}(1+\int_{X}|\log(e^{u_{\epsilon}}+e^{-u_{\epsilon}})| \phi_{X}\operatorname{dvol}_{g})\\ &\leq C_{1}\left(1+\int_{X}(|u_{\epsilon}|+\log 2)\phi_{X} \operatorname{dvol}_{g}\right)\\ &\leq C_{2}(1+\int_{X}|u_{\epsilon}|\phi_{X}\operatorname{dvol}_ {g})\\ &\leq C_{2}(1+\frac{2\Lambda||\phi_{X}||_{L^{\infty}}}{\epsilon} \int_{X}\phi_{X}\operatorname{dvol}_{g}),\end{split} \tag{2.28}\]
where \(C_{1}\), \(C_{2}\) are independent of \(\epsilon\). If \(\{u_{\epsilon}\}_{\epsilon>0}\) is not uniformly bounded, there is a subsequence with \(l_{i}\to\infty\), where for simplicity we write \(l_{i}=\int_{X}|u_{i}|\phi_{X}\operatorname{dvol}_{g}>1\) and \(u_{i}=u_{\epsilon_{i}}\). Then
\[|v_{i}|\leq C_{3},\ \int_{X}|v_{i}|\phi_{X}\operatorname{dvol}_{g}=1, \tag{2.29}\]
where \(v_{i}=l_{i}^{-1}u_{i}\) and \(C_{3}\) is independent of \(\epsilon\). Recall that
\[\int_{X_{j_{i}}}|du_{i,j_{i}}|^{2}\operatorname{dvol}_{g}=2\int_{X_{j_{i}}}u_{ i,j_{i}}(he^{u_{i,j_{i}}}-f-\epsilon u_{i,j_{i}})\operatorname{dvol}_{g}, \tag{2.30}\]
where \(u_{i,j_{i}}=u_{\epsilon_{i},j_{\epsilon_{i}}}\) and \(X_{j_{i}}=X_{j_{\epsilon_{i}}}\). It follows
\[\int_{X_{j_{i}}}|dv_{i,j_{i}}|^{2}\operatorname{dvol}_{g}\leq 2l_{i,j_{i}}^{-1} \int_{X_{j_{i}}}v_{i,j_{i}}(he^{u_{i,j_{i}}}-f)\operatorname{dvol}_{g}, \tag{2.31}\]
where \(v_{i,j_{i}}=l_{i,j_{i}}^{-1}u_{i,j_{i}}\) and \(l_{i,j_{i}}=\int_{X_{j_{i}}}|u_{i,j_{i}}|\phi_{X}\operatorname{dvol}_{g}\). We may assume \(l_{i,j_{i}}\geq 1\) for any \(i\) and \(j_{i}\). Note that for any \(\alpha>0\) we have, for \(\tilde{j}_{i}\geq j_{i}\gg 1\),
\[\int_{X_{\tilde{j}_{i}}-X_{j_{i}}}(|h|+|f|)\operatorname{dvol}_{g}\leq\alpha, \tag{2.32}\]
which yields
\[\int_{X_{j_{i}}}|dv_{i,\tilde{j}_{i}}|^{2}\operatorname{dvol}_{g} \leq\int_{X_{\tilde{j}_{i}}}|dv_{i,\tilde{j}_{i}}|^{2}\operatorname {dvol}_{g}\] \[\leq 2l_{i,\tilde{j}_{i}}^{-1}(\int_{X_{\tilde{j}_{i}}-X_{j_{i}}} +\int_{X_{j_{i}}})v_{i,\tilde{j}_{i}}(he^{u_{i,\tilde{j}_{i}}}-f)\operatorname {dvol}_{g}\] \[\leq 2C_{3}(e^{\frac{2\Lambda}{\epsilon_{i}}||\phi_{X}||_{L^{ \infty}}}+1)\int_{X_{\tilde{j}_{i}}-X_{j_{i}}}(|h|+|f|)\operatorname{dvol}_{g}\] \[+2l_{i,\tilde{j}_{i}}^{-1}\int_{X_{j_{i}}}v_{i,\tilde{j}_{i}}(he^{u_{i,\tilde{j}_{i}}}-f)\operatorname{dvol}_{g}\] \[\leq 2C_{3}(e^{\frac{2\Lambda}{\epsilon_{i}}||\phi_{X}||_{L^{ \infty}}}+1)\alpha+2l_{i,\tilde{j}_{i}}^{-1}\int_{X_{j_{i}}}v_{i,\tilde{j}_{i}}(he^{u_{i,\tilde{j}_{i}}}-f)\operatorname{dvol}_{g}. \tag{2.33}\]
Taking \(\tilde{j}_{i}\to\infty\), \(j_{i}\to\infty\) and noting \(\alpha\) is arbitrary, we arrive at
\[\int_{X}|dv_{i}|^{2}\operatorname{dvol}_{g} \leq 2l_{i}^{-1}\int_{X}v_{i}(he^{u_{i}}-f)\operatorname{dvol}_{g}\] \[\leq 2l_{i}^{-1}\int_{X}v_{i}(h-f)\operatorname{dvol}_{g}\] \[\leq 4\Lambda C_{3}l_{i}^{-1}\int_{X}\phi_{X}\operatorname{dvol}_{ g}. \tag{2.34}\]
Up to a subsequence, it is known that \(v_{i}\to v_{0}\) weakly in the \(L_{1}^{2}\)-topology and \(v_{0}\) equals a constant almost everywhere. For any \(\beta>0\) we have, for \(j\gg 1\),
\[1-\beta \leq(\int_{X}-\int_{X-X_{j}})|v_{i}|\phi_{X}\operatorname{dvol}_{g}\] \[=\int_{X_{j}}|v_{i}|\phi_{X}\operatorname{dvol}_{g}\] \[\leq 1, \tag{2.35}\]
\[\int_{X_{j}}|v_{i}|\phi_{X}\operatorname{dvol}_{g} \xrightarrow{i\to\infty}\int_{X_{j}}|v_{0}|\phi_{X} \operatorname{dvol}_{g}\] \[\xrightarrow{j\to\infty}\int_{X}|v_{0}|\phi_{X}\operatorname{dvol }_{g}. \tag{2.36}\]
It follows \(\int_{X}|v_{0}|\phi_{X}\,\mathrm{dvol}_{g}=1\) and \(v_{0}\neq 0\). (2.34) and \(xe^{x}\geq-e^{-1}\) for \(x\in\mathbb{R}\) imply
\[\begin{split}\int_{X}fv_{0}\,\mathrm{dvol}_{g}&=\lim_ {i\to\infty}\int_{X}fv_{i}\,\mathrm{dvol}_{g}\\ &\leq\lim_{i\to\infty}\int_{X}hv_{i}e^{l_{i}v_{i}}\,\mathrm{dvol} _{g}\\ &\leq-\lim_{i\to\infty}e^{-1}l_{i}^{-1}\int_{X}h\,\mathrm{dvol} _{g}\\ &\leq\lim_{i\to\infty}\Lambda e^{-1}l_{i}^{-1}\int_{X}\phi_{X} \,\mathrm{dvol}_{g}\\ &=0,\end{split} \tag{2.37}\]
we conclude from above that \(v_{0}>0\) since \(\int_{X}f\,\mathrm{dvol}_{g}<0\). Furthermore, let us choose a small \(\gamma\in(0,v_{0})\) and a nonnegative smooth function \(\eta_{\gamma}\) such that
\[\eta_{\gamma}(x)=\left\{\begin{array}{ll}1,\ x\geq v_{0},\\ 0,\ x\leq\gamma.\end{array}\right. \tag{2.38}\]
For any \(x\in\mathbb{R}^{+}\) and \(i\gg 1\) we have
\[\eta_{\gamma}(x)\leq xl_{i}^{-2}e^{xl_{i}}, \tag{2.39}\]
where we have used \(y^{-2}e^{xy}\to\infty\) as \(y\to\infty\). It follows for large \(i\) that \(v_{i}>0\) and thus
\[\begin{split}\int_{X}h\eta_{\gamma}(v_{i})\,\mathrm{dvol}_{g}& \geq l_{i}^{-2}\int_{X}hv_{i}e^{l_{i}v_{i}}\,\mathrm{dvol}_{g}\\ &\geq l_{i}^{-2}\int_{X}v_{i}f\,\mathrm{dvol}_{g}\,.\end{split} \tag{2.40}\]
Next for any compact subset \(\Omega\subset X\), we may assume \(v_{i}\to v_{0}\) strongly in \(L^{2}(\Omega)\) and
\[\begin{split}\int_{\Omega}h\,\mathrm{dvol}_{g}&= \int_{\Omega}\eta_{\gamma}(v_{0})h\,\mathrm{dvol}_{g}\\ &=\lim_{i\to\infty}\int_{\Omega}\eta_{\gamma}(v_{i})h\,\mathrm{ dvol}_{g}+\lim_{i\to\infty}\int_{\Omega}(\eta_{\gamma}(v_{0})-\eta_{\gamma}(v_{i}))h \,\mathrm{dvol}_{g}\\ &\geq\lim_{i\to\infty}\int_{X}\eta_{\gamma}(v_{i})h\,\mathrm{ dvol}_{g}\\ &\geq\lim_{i\to\infty}l_{i}^{-2}\int_{X}v_{i}f\,\mathrm{dvol}_{g} \\ &\geq-\lim_{i\to\infty}\Lambda C_{3}l_{i}^{-2}\int_{X}\phi_{X}\, \mathrm{dvol}_{g}\\ &=0.\end{split} \tag{2.41}\]
This is impossible since \(h\) is nonpositive and not identically zero.
Corollary 1.1 is a consequence of Theorem 2.1 and the following
**Proposition 2.1**.: _Suppose that \(X=\overline{X}\setminus\Sigma\) is the complement of a complex analytic subset \(\Sigma\) in a compact complex manifold \(\overline{X}\) with complex codimension at least two and \(g=\overline{g}|_{X}\) for a Gauduchon metric \(\overline{g}\) on \(\overline{X}\), then \((X,g)\) satisfies the Main Assumption._
Proof.: We take \(\phi_{X}=1\) and may choose a family of cut-off functions \(\{\eta_{\delta}\}_{\delta>0}\) such that \(\eta_{\delta}=1\) on \(\overline{X}\setminus B_{2\delta}(\Sigma)\), \(\eta_{\delta}=0\) on \(B_{\delta}(\Sigma)\) (where \(B_{\delta}(\Sigma)=\{x\in\overline{X},d(x,\Sigma)<\delta\}\)) and
\[\lim_{\delta\to 0}\int_{\overline{X}}\left|d\eta_{\delta}\right|^{2}\mathrm{ dvol}_{\overline{g}}=O(\delta^{-2}\delta^{4})=0. \tag{2.42}\]
Assume \(f\in C^{\infty}(X)\) is nonnegative and bounded with \(\sqrt{-1}\Lambda_{\omega_{g}}\partial\overline{\partial}f\geq-A\) for a positive constant \(A\). We set \(\tilde{f}=f+1\) so that \(\sqrt{-1}\Lambda_{\omega_{g}}\partial\overline{\partial}\tilde{f}\geq-A\tilde {f}\), then for any \(q\geq 1\),
\[-A\int_{\overline{X}}\eta_{\delta}^{2}\tilde{f}^{q+1}\omega_{ \overline{g}}^{n} \leq\int_{\overline{X}}n\eta_{\delta}^{2}\tilde{f}^{q}\sqrt{-1} \partial\overline{\partial}\tilde{f}\wedge\omega_{\overline{g}}^{n-1}\] \[=n\int_{\overline{X}}-\sqrt{-1}\partial(\eta_{\delta}^{2}\tilde{ f}^{q})\wedge\overline{\partial}\tilde{f}\wedge\omega_{\overline{g}}^{n-1}+ \sqrt{-1}\eta_{\delta}^{2}\tilde{f}^{q}\overline{\partial}\tilde{f}\wedge \partial\omega_{\overline{g}}^{n-1}\] \[\leq\int_{\overline{X}}(C_{1}\eta_{\delta}\tilde{f}^{q}|\partial \eta_{\delta}||\partial\tilde{f}|+C_{2}\eta_{\delta}^{2}\tilde{f}^{q}|\partial \tilde{f}|-qC_{3}\eta_{\delta}^{2}\tilde{f}^{q-1}|\partial\tilde{f}|^{2}) \omega_{\overline{g}}^{n} \tag{2.43}\]
where, here and henceforth, \(C_{i}\) (\(i=1,2,3,\dots\)) are suitable uniform constants. By the Cauchy inequality, it holds for any positive \(\epsilon\),
\[C_{1}\eta_{\delta}\tilde{f}^{q}|\partial\eta_{\delta}||\partial \tilde{f}|\leq\frac{\epsilon}{2}C_{1}\eta_{\delta}^{2}\tilde{f}^{q-1}|\partial \tilde{f}|^{2}+\frac{C_{1}}{2\epsilon}\tilde{f}^{q+1}|\partial\eta_{\delta}|^{ 2}, \tag{2.44}\] \[C_{2}\eta_{\delta}^{2}\tilde{f}^{q}|\partial\tilde{f}|\leq\frac{ \epsilon}{2}C_{2}\eta_{\delta}^{2}\tilde{f}^{q-1}|\partial\tilde{f}|^{2}+\frac {C_{2}}{2\epsilon}\eta_{\delta}^{2}\tilde{f}^{q+1}. \tag{2.45}\]
Taking \(\epsilon=\frac{qC_{3}}{C_{1}+C_{2}}\), it follows
\[\frac{qC_{3}}{2}\int_{\overline{X}}\eta_{\delta}^{2}\tilde{f}^{q- 1}|\partial\tilde{f}|^{2}\omega_{\overline{g}}^{n} \leq(A+\frac{C_{2}(C_{1}+C_{2})}{2qC_{3}})\int_{\overline{X}} \eta_{\delta}^{2}\tilde{f}^{q+1}\omega_{\overline{g}}^{n}\] \[+\frac{C_{1}(C_{1}+C_{2})}{2qC_{3}}\int_{\overline{X}}\tilde{f}^{ q+1}|\partial\eta_{\delta}|^{2}\omega_{\overline{g}}^{n}, \tag{2.46}\]
and therefore
\[\int_{\overline{X}}|\partial(\eta_{\delta}\tilde{f}^{\frac{q+1}{ 2}})|^{2}\,\mathrm{dvol}_{\overline{g}} =\int_{\overline{X}}\tilde{f}^{q+1}|\partial\eta_{\delta}|^{2}+ \eta_{\delta}^{2}|\partial\tilde{f}^{\frac{q+1}{2}}|^{2}\,\mathrm{dvol}_{ \overline{g}}\] \[=\int_{\overline{X}}\tilde{f}^{q+1}|\partial\eta_{\delta}|^{2}\, \mathrm{dvol}_{\overline{g}}+\frac{(q+1)^{2}}{4}\int_{\overline{X}}\eta_{ \delta}^{2}\tilde{f}^{q-1}|\partial\tilde{f}|^{2}\,\mathrm{dvol}_{\overline{g}}\] \[\leq(\frac{q+1}{q})^{2}(C_{4}+qC_{5})\int_{\overline{X}}\eta_{ \delta}^{2}\tilde{f}^{q+1}\,\mathrm{dvol}_{\overline{g}}\] \[+C_{6}(\frac{q+1}{q})^{2}\int_{\overline{X}}\tilde{f}^{q+1}| \partial\eta_{\delta}|^{2}\,\mathrm{dvol}_{\overline{g}}\,. \tag{2.47}\]
Using the Sobolev inequality, we have
\[(\int_{\overline{X}}|\eta_{\delta}\tilde{f}^{\frac{q+1}{2}}|^{ \frac{2n}{n-1}}\,\mathrm{dvol}_{\overline{g}})^{\frac{n-1}{n}} \leq C_{7}\int_{\overline{X}}\eta_{\delta}^{2}\tilde{f}^{q+1}+2| \partial(\eta_{\delta}\tilde{f}^{\frac{q+1}{2}})|^{2}\,\mathrm{dvol}_{ \overline{g}}\] \[\leq C_{8}(q+1)^{2}\int_{\overline{X}}(\eta_{\delta}^{2}+| \partial\eta_{\delta}|^{2})\tilde{f}^{q+1}\,\mathrm{dvol}_{\overline{g}}\,. \tag{2.48}\]
Hence as \(\delta\to 0\), it holds
\[(\int_{X}\tilde{f}^{\frac{n}{n-1}(q+1)}\,\mathrm{dvol}_{g})^{\frac{n-1}{n}}\leq C _{8}(q+1)^{2}\int_{X}\tilde{f}^{q+1}\,\mathrm{dvol}_{g}, \tag{2.49}\]
and then we finish the proof via an iteration procedure.
## 3. Multiplicities of the prescribed curvature equation on \(\mathbb{R}^{2}\)
In the following, we write \(g_{0}=dx\otimes dx+dy\otimes dy\) for the standard metric.
**Lemma 3.1**.: _If \(\alpha,\phi\in C^{\infty}(\mathbb{R}^{2})\) satisfy_
1. \(\alpha>0\) _and_ \(\alpha(z^{-1})|z|^{-4}\) _extends to a positive smooth function on_ \(\mathbb{R}^{2}\)_,_
2. \(\phi\geq 0\)_,_ \(\alpha\phi^{-1}\in L^{\infty}(\mathbb{R}^{2})\) _and_ \(\alpha^{1-q}\phi^{q}\in L^{1}(\mathbb{R}^{2})\) _for some_ \(q>1\)_,_
_then \((\mathbb{R}^{2},g_{0})\) satisfies the Main Assumption with \(\phi_{\mathbb{R}^{2}}=\phi\)._
Proof.: Let us consider \(\mathbb{CP}^{1}\) as a compactification of \(\mathbb{C}\cong\mathbb{R}^{2}\) and denote
\[U_{1}=\{[(z_{0},z_{1})]\in\mathbb{CP}^{1},z_{0}\neq 0\},\ U_{2}=\{[(z_{0},z_{1 })]\in\mathbb{CP}^{1},z_{1}\neq 0\}, \tag{3.1}\]
and coordinate functions \(z=z_{0}^{-1}z_{1},\ w=z_{1}^{-1}z_{0}\) on \(U_{1}\), \(U_{2}\) respectively. We set
\[\omega_{g}=\sqrt{-1}\alpha dz\wedge d\overline{z}=\sqrt{-1}\alpha(w^{-1})|w|^{ -4}dw\wedge d\overline{w}, \tag{3.2}\]
which is the fundamental form of a Hermitian metric \(g\) on \(\mathbb{CP}^{1}\), and it holds
\[\omega_{g}=2\alpha\omega_{g_{0}},\ \sqrt{-1}\Lambda_{\omega_{g}}\partial \overline{\partial}=2^{-1}\alpha^{-1}\sqrt{-1}\Lambda_{\omega_{g_{0}}} \partial\overline{\partial}. \tag{3.3}\]
Assume \(f\in C^{\infty}(\mathbb{R}^{2})\) is nonnegative and bounded with
\[\sqrt{-1}\Lambda_{\omega_{g_{0}}}\partial\overline{\partial}f\geq-A\phi, \tag{3.4}\]
for a positive constant \(A\), then it is equivalent to
\[\sqrt{-1}\Lambda_{\omega_{g}}\partial\overline{\partial}f\geq-2^{-1}A\alpha^{ -1}\phi. \tag{3.5}\]
We compute
\[\begin{split}||\phi||_{L^{1}(\mathbb{R}^{2})}&=||(2 \alpha)^{-1}\phi||_{L^{1}(\mathbb{R}^{2},g)}\\ &\leq||(2\alpha)^{-1}\phi||_{L^{q}(\mathbb{R}^{2},g)}\operatorname {vol}(\mathbb{R}^{2},g)^{\frac{q-1}{q}}\\ &=||(2\alpha)^{1-q}\phi^{q}||_{L^{1}(\mathbb{R}^{2})}^{\frac{1}{q }}\operatorname{vol}(\mathbb{R}^{2},g)^{\frac{q-1}{q}}\\ &<\infty,\end{split} \tag{3.6}\]
by [30, Proposition 2.2] we conclude the following inequality holds weakly on \(\mathbb{CP}^{1}\),
\[\sqrt{-1}\Lambda_{\omega_{g}}\partial\overline{\partial}f\geq-2^{-1}A\alpha^{ -1}\phi. \tag{3.7}\]
Therefore there is a constant \(C\) independent of \(f\) such that
\[\begin{split} f&\leq C(1+\int_{\mathbb{CP}^{1}}f \omega_{g})\\ &=C(1+2\int_{\mathbb{R}^{2}}f\alpha\operatorname{dvol}_{g_{0}})\\ &\leq C(1+2\sup_{\mathbb{R}^{2}}\alpha\phi^{-1}\int_{\mathbb{R}^ {2}}f\phi\operatorname{dvol}_{g_{0}})\\ &\leq(C+2\sup_{\mathbb{R}^{2}}\alpha\phi^{-1})(1+\int_{\mathbb{R }^{2}}f\phi\operatorname{dvol}_{g_{0}}),\end{split} \tag{3.8}\]
where we have used \(\alpha^{-1}\phi\in L^{q}(\mathbb{R}^{2},g)\) for some \(q>1\).
If we take \(\alpha=(1+|z|^{2})^{-2}\), \(\phi=\phi_{p}=(1+|z|^{2})^{-p}\), it follows
**Corollary 3.1**.: \((\mathbb{R}^{2},g_{0})\) _satisfies the Main Assumption with \(\phi_{\mathbb{R}^{2}}=\phi_{p}\) and \(p\in(1,2]\)._
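The hypotheses of Lemma 3.1 for this choice of \(\alpha\) and \(\phi_{p}\) can also be checked directly; the sketch below uses the sample values \(p=q=3/2\) (chosen only for illustration):

```python
import sympy as sp

# Check of the Lemma 3.1 hypotheses for alpha = (1+r^2)^{-2}, phi = (1+r^2)^{-p}.
r = sp.symbols('r', positive=True)
alpha = (1 + r**2)**(-2)

# (1) alpha(1/z)|z|^{-4} extends smoothly: substituting r -> 1/r gives (1+r^2)^{-2}.
print(sp.simplify(alpha.subs(r, 1/r) * r**(-4)))

# (2) alpha^{1-q} phi^q = (1+r^2)^{2q-2-pq} is in L^1(R^2) iff 2q-2-pq < -1;
# for p = q = 3/2 the exponent is -5/4 and the integral is finite (= 4*pi).
p, q = sp.Rational(3, 2), sp.Rational(3, 2)
a = 2*q - 2 - p*q
print(sp.integrate((1 + r**2)**a * 2*sp.pi*r, (r, 0, sp.oo)))
```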
The following lemma demonstrates that \(\mathbb{R}^{2}\) can also satisfy our Main Assumption when the underlying metric is not the standard one.
**Lemma 3.2**.: _If \(\alpha,\phi,\psi\in C^{\infty}(\mathbb{R}^{2})\) satisfy_
1. \(\alpha>0\) _and_ \(\alpha(z^{-1})|z|^{-4}\) _extends to a positive smooth function on_ \(\mathbb{R}^{2}\)_,_
2. \(\phi>0\)_,_ \(\psi\geq 0\)_,_ \(\alpha\phi^{k}\psi^{-1}\in L^{\infty}(\mathbb{R}^{2})\) _and_ \(\alpha^{1-q}(\phi^{-k}\psi)^{q}\in L^{1}(\mathbb{R}^{2})\) _for some_ \(q>1\)_,_
_then \((\mathbb{R}^{2},g_{k})\) satisfies the Main Assumption with \(\phi_{\mathbb{R}^{2}}=\psi\), where \(g_{k}=\phi^{-k}g_{0}\)._
Proof.: We use the notations in Lemma 3.1 and note
\[\begin{split}||\psi||_{L^{1}(\mathbb{R}^{2},g_{k})}& =||\phi^{-k}\psi||_{L^{1}(\mathbb{R}^{2})}\\ &=||(2\alpha)^{-1}\phi^{-k}\psi||_{L^{1}(\mathbb{R}^{2},g)}\\ &\leq||(2\alpha)^{-1}\phi^{-k}\psi||_{L^{q}(\mathbb{R}^{2},g)} \operatorname{vol}(\mathbb{R}^{2},g)^{\frac{q-1}{q}}\\ &=||(2\alpha)^{1-q}(\phi^{-k}\psi)^{q}||_{L^{1}(\mathbb{R}^{2})} ^{\frac{1}{q}}\operatorname{vol}(\mathbb{R}^{2},g)^{\frac{q-1}{q}}\\ &<\infty.\end{split} \tag{3.9}\]
Assume \(f\in C^{\infty}(\mathbb{R}^{2})\) is nonnegative and bounded with
\[\sqrt{-1}\Lambda_{\omega_{g_{k}}}\partial\overline{\partial}f\geq-A\psi, \tag{3.10}\]
for a positive constant \(A\), then
\[\sqrt{-1}\Lambda_{\omega_{g_{0}}}\partial\overline{\partial}f\geq-A\phi^{-k}\psi. \tag{3.11}\]
Therefore there is a uniform constant \(C\) such that
\[\begin{split} f&\leq(C+2\sup_{\mathbb{R}^{2}}\alpha \phi^{k}\psi^{-1})(1+\int_{\mathbb{R}^{2}}f\phi^{-k}\psi\operatorname{dvol}_{ g_{0}})\\ &=(C+2\sup_{\mathbb{R}^{2}}\alpha\phi^{k}\psi^{-1})(1+\int_{ \mathbb{R}^{2}}f\psi\operatorname{dvol}_{g_{k}}).\end{split} \tag{3.12}\]
Let \(\alpha=(1+|z|^{2})^{-2}\), \(\phi=(1+|z|^{2})^{-1}\) and \(\psi=\psi_{l}=(1+|z|^{2})^{-l}\), it is easy to see
**Corollary 3.2**.: \((\mathbb{R}^{2},g_{k})\) _satisfies the Main Assumption with \(\phi_{\mathbb{R}^{2}}=\psi_{l}\), where \(g_{k}=\phi^{-k}g_{0}\) and \(l>0\), \(k\in[l-2,l-1)\) are two constants._
Let us consider \((\mathbb{R}^{2},g_{k})\) and \(\phi_{\mathbb{R}^{2}}\) given in Lemma 3.2. Using (2.15), we have
\[S_{g_{k}}^{Ch}=k\phi^{k}\sqrt{-1}\Lambda_{\omega_{g_{0}}}\partial\overline{ \partial}\log\phi=\frac{k}{2}\phi^{k}\Delta\log\phi. \tag{3.13}\]
In view of Theorem 2.1, we have
**Proposition 3.1**.: _Let \(\alpha,\phi,\psi\in C^{\infty}(\mathbb{R}^{2})\) be three functions satisfying_
1. \(\phi>0\) _and_ \(\psi\geq 0\)_,_
2. \(\alpha>0\) _and_ \(\alpha(z^{-1})|z|^{-4}\) _extends to a positive smooth function on_ \(\mathbb{R}^{2}\)
_If \(K\in C^{\infty}(\mathbb{R}^{2})\) is nonpositive and nonzero with decay \(|K|\leq\Lambda\psi\) for a constant \(\Lambda\), then for any constant \(k\) such that_
1. \(\alpha\phi^{k}\psi^{-1}\in L^{\infty}(\mathbb{R}^{2})\) _and_ \(\alpha^{1-q}(\phi^{-k}\psi)^{q}\in L^{1}(\mathbb{R}^{2})\) _for some_ \(q>1\)_,_
2. \(\int_{\mathbb{R}^{2}}k\Delta\log\phi<0\) _and_ \(|k\phi^{k}\Delta\log\phi|\leq 2\Lambda\psi\)_,_
_the prescribed curvature equation_
\[\sqrt{-1}\Lambda_{\omega_{g_{k}}}\partial\overline{\partial}v+Ke^{v}=S^{Ch}_{g_ {k}} \tag{3.14}\]
_possesses a solution \(v_{k}\in C^{\infty}(\mathbb{R}^{2})\cap L^{\infty}(\mathbb{R}^{2})\) with \(|dv_{k}|\in L^{2}(\mathbb{R}^{2})\)._
Note that \(v\) satisfies (3.14) if and only if the metric
\[e^{v}g_{k}=e^{u}g_{0},\ u=v-k\log\phi, \tag{3.15}\]
has Chern scalar curvature \(K\), which is also equivalent to
\[\sqrt{-1}\Lambda_{\omega_{g_{0}}}\partial\overline{\partial}u+Ke^{u}=0. \tag{3.16}\]
**Theorem 3.1**.: _Let \(\alpha,\phi,\psi\in C^{\infty}(\mathbb{R}^{2})\) be three functions satisfying_
1. \(\phi>0\) _and_ \(\psi\geq 0\)_,_
2. \(\alpha>0\) _and_ \(\alpha(z^{-1})|z|^{-4}\) _extends to a positive smooth function on_ \(\mathbb{R}^{2}\)_._
_If \(K\in C^{\infty}(\mathbb{R}^{2})\) is nonpositive and nonzero with decay \(|K|\leq\Lambda\psi\) for a constant \(\Lambda\), then for any constant \(k\) such that_
1. \(\alpha\phi^{k}\psi^{-1}\in L^{\infty}(\mathbb{R}^{2})\) _and_ \(\alpha^{1-q}(\phi^{-k}\psi)^{q}\in L^{1}(\mathbb{R}^{2})\) _for some_ \(q>1\)_,_
2. \(\int_{\mathbb{R}^{2}}k\Delta\log\phi<0\) _and_ \(|k\phi^{k}\Delta\log\phi|\leq 2\Lambda\psi\)_,_
(1.4) _possesses a solution on_ \(\mathbb{R}^{2}\) _and it has the following form_
\[2u_{k}=v_{k}-k\log\phi, \tag{3.17}\]
_where \(v_{k}\in C^{\infty}(\mathbb{R}^{2})\cap L^{\infty}(\mathbb{R}^{2})\) with \(|dv_{k}|\in L^{2}(\mathbb{R}^{2})\)._
**Remark 3.1**.: _Let \(k_{1}\), \(k_{2}\) be two constants satisfying the conditions of \(k\), then_
\[C_{1}-(k_{1}-k_{2})\log\phi\leq 2(u_{k_{1}}-u_{k_{2}})\leq C_{2}-(k_{1}-k_{2}) \log\phi, \tag{3.18}\]
_where \(C_{1}=\inf_{\mathbb{R}^{2}}(v_{k_{1}}-v_{k_{2}})\) and \(C_{2}=\sup_{\mathbb{R}^{2}}(v_{k_{1}}-v_{k_{2}})\)._
We also have the uniqueness.
**Theorem 3.2**.: _Any solution \(u\) to (1.4) which takes the form_
\[2u=v-k\log\phi, \tag{3.19}\]
_with \(v\in C^{\infty}(\mathbb{R}^{2})\cap L^{\infty}(\mathbb{R}^{2})\), \(k\) and \(\phi\) given in Theorem 3.1, is exactly \(u_{k}\)._
Proof.: One sees that \(v\) and \(v_{k}\) satisfy
\[\sqrt{-1}\Lambda_{\omega_{g_{k}}}\partial\overline{\partial}v+Ke^{v}=S^{Ch}_{g_ {k}}, \tag{3.20}\]
\[\sqrt{-1}\Lambda_{\omega_{g_{k}}}\partial\overline{\partial}v_{k}+Ke^{v_{k}}=S^{ Ch}_{g_{k}}, \tag{3.21}\]
where \(g_{k}=\phi^{-k}g_{0}\). We compute
\[\begin{split}\sqrt{-1}\Lambda_{g_{k}}\partial\overline{\partial}(e ^{v-v_{k}}+e^{v_{k}-v})&=(e^{v-v_{k}}-e^{v_{k}-v})\sqrt{-1} \Lambda_{g_{k}}\partial\overline{\partial}(v-v_{k})\\ &+(e^{v-v_{k}}+e^{v_{k}-v})|\partial(v-v_{k})|_{g_{k}}^{2}\\ &\geq-K(e^{v-v_{k}}-e^{v_{k}-v})(e^{v}-e^{v_{k}})\\ &\geq 0.\end{split} \tag{3.22}\]
Namely, \(e^{v-v_{k}}+e^{v_{k}-v}\) is a bounded sub-harmonic function on \(\mathbb{R}^{2}\) and hence constant by the standard Liouville theorem; it follows that \(u=u_{k}\).
If we consider \((\mathbb{R}^{2},g_{k})\) and \(\phi_{\mathbb{R}^{2}}\) given in Corollary 3.2, we have
\[\begin{split} k\phi^{k}\Delta\log\phi&=4k\phi^{k} \frac{\partial^{2}\log\phi}{\partial z\partial\overline{z}}\\ &=\frac{-4k}{(1+|z|^{2})^{k}}\Big{(}\frac{1}{1+|z|^{2}}\frac{ \partial^{2}|z|^{2}}{\partial z\partial\overline{z}}-\frac{|\frac{\partial|z| ^{2}}{\partial z}|^{2}}{(1+|z|^{2})^{2}}\Big{)}\\ &=\frac{4k}{(1+|z|^{2})^{k}}\Big{(}\frac{|z|^{2}}{(1+|z|^{2})^{2}}- \frac{1}{1+|z|^{2}}\Big{)}\\ &=\frac{-4k}{(1+|z|^{2})^{k+2}}.\end{split} \tag{3.23}\]
If \(k>0\) and \(\Lambda\geq 2k\), we have
\[\int_{\mathbb{R}^{2}}k\Delta\log\phi<0,\ |k\phi^{k}\Delta\log\phi|\leq\frac{4k} {(1+|z|^{2})^{l}}\leq 2\Lambda\phi_{\mathbb{R}^{2}}. \tag{3.24}\]
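The computation (3.23) behind these bounds can be double-checked symbolically:

```python
import sympy as sp

# Verification of (3.23): k phi^k Delta log phi = -4k (1+|z|^2)^{-(k+2)}
# for phi = (1+|z|^2)^{-1}, with Delta = 4 d^2/(dz dzbar). Output should be 0.
z, zb, k = sp.symbols('z zbar k')
phi = (1 + z*zb)**(-1)
lhs = k * phi**k * 4 * sp.diff(sp.log(phi), z, zb)
print(sp.simplify(lhs + 4*k*(1 + z*zb)**(-(k + 2))))   # -> 0
```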
It follows from Theorem 3.1, Theorem 3.2 and the computation above that
**Proposition 3.2**.: _If \(K\in C^{\infty}(\mathbb{R}^{2})\) is nonpositive and nonzero with decay \(|K|\leq\Lambda(1+|z|^{2})^{-l}\) for two constants \(l>1\) and \(\Lambda\), then for any constant_
\[k\in[l-2,l-1)\cap(0,\frac{\Lambda}{2}], \tag{3.25}\]
(1.4) _possesses a solution on \(\mathbb{R}^{2}\) and it has the following form_
\[2u_{k}=v_{k}+k\log(1+|z|^{2}), \tag{3.26}\]
_with \(v_{k}\in C^{\infty}(\mathbb{R}^{2})\cap L^{\infty}(\mathbb{R}^{2})\) and \(|dv_{k}|\in L^{2}(\mathbb{R}^{2})\). On the other hand, any solution \(u\) of the form_
\[2u=v+k\log(1+|z|^{2}), \tag{3.27}\]
_with \(v\in C^{\infty}(\mathbb{R}^{2})\cap L^{\infty}(\mathbb{R}^{2})\) and constant \(k\in[l-2,l-1)\cap(0,\frac{\Lambda}{2}]\), is exactly \(u_{k}\)._
**Remark 3.2**.: _By Remark 3.1, \(u_{k_{1}}\neq u_{k_{2}}\) for any different \(k_{1},k_{2}\in[l-2,l-1)\cap(0,\frac{\Lambda}{2}]\)._
The geometric interpretation is as follows.
**Corollary 3.3**.: _If \(K\in C^{\infty}(\mathbb{R}^{2})\) is nonpositive and nonzero with decay \(|K|\leq\Lambda(1+|z|^{2})^{-l}\) for two constants \(l>1\) and \(\Lambda\), then for any constant_
\[k\in[l-2,l-1)\cap(0,\frac{\Lambda}{2}], \tag{3.28}\]
_there exists a conformal metric \(\tilde{g}_{k}=e^{v_{k}}(1+|z|^{2})^{k}g_{0}\) on \(\mathbb{R}^{2}\) such that \(S^{Ch}_{\tilde{g}_{k}}=K\), where \(v_{k}\in C^{\infty}(\mathbb{R}^{2})\cap L^{\infty}(\mathbb{R}^{2})\) and \(|dv_{k}|\in L^{2}(\mathbb{R}^{2})\)._
## 4. Vortex equation on holomorphic line bundles
Assume \(L\) is a holomorphic line bundle over \(X\) and \(\varphi\in A^{0}(L)\) is a nonzero section. Consider two Hermitian structures \(H\) and \(K\), related by \(H=e^{f}K\), then
\[\sqrt{-1}\Lambda_{\omega_{g}}F_{H}+\frac{1}{2}\varphi\otimes\varphi^{*H}= \sqrt{-1}\Lambda_{\omega_{g}}F_{K}-\sqrt{-1}\Lambda_{\omega_{g}}\partial \overline{\partial}f+\frac{1}{2}|\varphi|^{2}e^{f}, \tag{4.1}\]
where \(F_{\bullet}\) denotes the Chern curvature of \(\bullet\). So the solvability of the vortex equation
\[\sqrt{-1}\Lambda_{\omega_{g}}F_{H}+\frac{1}{2}\varphi\otimes\varphi^{*H}= \frac{\lambda}{2},\ \lambda\in C^{\infty}(X), \tag{4.2}\]
is equivalent to the solvability of
\[\sqrt{-1}\Lambda_{\omega_{g}}\partial\overline{\partial}f-\frac{1}{2}|\varphi |^{2}e^{f}=\sqrt{-1}\Lambda_{\omega_{g}}F_{K}-\frac{\lambda}{2}. \tag{4.3}\]
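Behind (4.1) lies the standard curvature identity \(F_{H}=F_{K}-\partial\overline{\partial}f\) for \(H=e^{f}K\), which in one variable reads \(F=-\partial\overline{\partial}\log h\) for a fiber metric \(h\); the following sketch checks it with generic symbolic placeholders:

```python
import sympy as sp

# One-variable check of F_{e^f h} = F_h - d dbar f with F = -d dbar log h.
z, zb = sp.symbols('z zbar')
h = sp.Function('h', positive=True)(z, zb)   # fiber metric of K
f = sp.Function('f')(z, zb)
F = lambda m: -sp.diff(sp.log(m), z, zb)
print(sp.simplify(F(sp.exp(f)*h) - (F(h) - sp.diff(f, z, zb))))   # -> 0
```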
Hence Theorem 4.1 below is a consequence of the proof of Theorem 2.1.
**Theorem 4.1**.: _Given a Hermitian structure \(K\) on \(L\) such that_
\[\int_{X}(2\sqrt{-1}\Lambda_{\omega_{g}}F_{K}-\lambda)\operatorname{dvol}_{g}< 0,\ |\sqrt{-1}\Lambda_{\omega_{g}}F_{K}-\frac{\lambda}{2}|+|\varphi|^{2}\leq \Lambda\phi_{X}, \tag{4.4}\]
_for a constant \(\Lambda\), then (4.2) admits a solution._
**Corollary 4.1**.: _Let \(X=\overline{X}\setminus\Sigma\) be the complement of a complex analytic subset \(\Sigma\) in a compact complex manifold \(\overline{X}\) with complex codimension at least two and \(g=\overline{g}|_{X}\) for a Gauduchon metric \(\overline{g}\) on \(\overline{X}\). Given a Hermitian structure \(K\) on \(L\) such that_
\[\int_{X}(2\sqrt{-1}\Lambda_{\omega_{g}}F_{K}-\lambda)\operatorname{dvol}_{g}< 0,\ |\sqrt{-1}\Lambda_{\omega_{g}}F_{K}-\frac{\lambda}{2}|+|\varphi|^{2}\in L^{ \infty}(X), \tag{4.5}\]
_then (4.2) admits a solution._
**Proposition 4.1**.: _For two constants \(l>0\) and \(k\in[l-2,l-1)\), we set_
\[\omega_{k}=\frac{\sqrt{-1}}{2}(1+|z|^{2})^{k}dz\wedge d\overline{z} \tag{4.6}\]
_and assume \(f,h\) are smooth nonzero functions on \(\mathbb{C}\) satisfying_
\[\int_{\mathbb{C}}f\omega_{k}<0,\ |f|\leq\Lambda(1+|z|^{2})^{-l},\ -\Lambda(1+|z|^{2})^{-l}\leq h\leq 0, \tag{4.7}\]
_for a constant \(\Lambda\), then the equation_
\[\sqrt{-1}\Lambda_{\omega_{k}}\partial\overline{\partial}u+he^{u}=f \tag{4.8}\]
_admits a bounded smooth solution on \(\mathbb{C}\)._
Proof.: It follows from Theorem 2.1 and Corollary 3.2.
**Proposition 4.2**.: _Let \(M\) be a compact complex manifold with Gauduchon metric \(g_{M}\). Assume \(f,h\) are smooth nonzero functions on \(M\times\mathbb{C}\) satisfying_
\[\int_{M\times\mathbb{C}}f\operatorname{dvol}_{g_{M}\times g_{0}}<0,\ |f|\leq \Lambda(1+|z|^{2})^{-l},\ -\Lambda(1+|z|^{2})^{-l}\leq h\leq 0, \tag{4.9}\]
_at \((m,z)\in M\times\mathbb{C}\) for two constants \(l\in(1,2]\) and \(\Lambda\), then_
\[\sqrt{-1}\Lambda_{\omega_{g_{M}\times g_{0}}}\partial\overline{\partial}u+he^ {u}=f \tag{4.10}\]
_admits a bounded smooth solution on \(M\times\mathbb{C}\)._
Proof.: By Theorem 2.1, it suffices to prove that \((M\times\mathbb{C},g_{M}\times g_{0})\) satisfies the Main Assumption, which can be directly checked by Moser iteration; see [35]. Below we sketch the proof for the reader's convenience. If \(f\in C^{\infty}(M\times\mathbb{C})\) is nonnegative, bounded and
\[\sqrt{-1}\Lambda_{\omega_{g_{M}\times g_{0}}}\partial\overline{\partial}f\geq -A(1+|z|^{2})^{-l}, \tag{4.11}\]
for a positive constant \(A\), we set \(\tilde{f}(z)=\int_{M}f(m,z)\operatorname{dvol}_{g_{M}}\) and it follows
\[\sqrt{-1}\Lambda_{\omega_{g_{0}}}\partial\overline{\partial}\tilde{f}\geq-A \operatorname{vol}(M,g_{M})(1+|z|^{2})^{-l}, \tag{4.12}\]
from which and Corollary 3.1 it holds
\[\sup_{\mathbb{C}}\int_{M}f(m,z)\operatorname{dvol}_{g_{M}}\leq C_{1}\left(1+ \int_{M\times\mathbb{C}}f(1+|z|^{2})^{-l}\operatorname{dvol}_{g_{M}\times g_{0 }}\right), \tag{4.13}\]
for a positive constant \(C_{1}\). In addition, a Moser iteration procedure on (4.11) yields
\[\sup_{M\times B_{1}(z_{0})}f\leq C_{2}(1+\int_{M\times B_{2}(z_{0})}f \operatorname{dvol}_{g_{M}}), \tag{4.14}\]
for any \(z_{0}\in\mathbb{C}\) and a positive constant \(C_{2}\), where \(B_{1}(z_{0})=\{z\in\mathbb{C},|z-z_{0}|<1\}\), \(B_{2}(z_{0})=\{z\in\mathbb{C},|z-z_{0}|<2\}\). By (4.13) and (4.14), we have
\[\sup_{M\times B_{1}(z_{0})}f\leq C_{3}(1+\int_{M\times\mathbb{C}}f(1+|z|^{2})^ {-l}\operatorname{dvol}_{g_{M}\times g_{0}}), \tag{4.15}\]
for a positive constant \(C_{3}\) and the proof is complete since \(z_{0}\) is arbitrary.
Applying Proposition 4.1 and Proposition 4.2 to (4.3), we have
**Corollary 4.2**.: _Let \(X=\mathbb{C}\) and given a Hermitian structure \(K\) on \(L\), two constants \(k\), \(l\) such that \(l>0\), \(k\in[l-2,l-1)\) and_
\[\int_{\mathbb{C}}(2\sqrt{-1}F_{K}-\lambda\omega_{k})<0,\ |\sqrt{-1}\Lambda_{ \omega_{k}}F_{K}-\frac{\lambda}{2}|+|\varphi|^{2}\leq\Lambda(1+|z|^{2})^{-l}, \tag{4.16}\]
_where \(\omega_{k}=\frac{\sqrt{-1}}{2}(1+|z|^{2})^{k}dz\wedge d\overline{z}\), then the curvature equation_
\[\sqrt{-1}F_{H}+\frac{1}{2}\varphi\otimes\varphi^{*H}\omega_{k}=\frac{\lambda}{ 2}\omega_{k} \tag{4.17}\]
_admits a solution._
**Corollary 4.3**.: _Let \(X=M\times\mathbb{C}\), where \(M\) is a compact complex manifold with Gauduchon metric \(g_{M}\). Given a Hermitian structure \(K\) on \(L\) such that_
\[\int_{M\times\mathbb{C}}\left(2\sqrt{-1}\Lambda_{\omega_{g_{M}\times g_{0}}}F_{ K}-\lambda\right)\mathrm{dvol}_{g_{M}\times g_{0}}<0, \tag{4.18}\]
\[|\sqrt{-1}\Lambda_{\omega_{g_{M}\times g_{0}}}F_{K}-\frac{\lambda}{2}|+| \varphi|^{2}\leq\Lambda(1+|z|^{2})^{-l}, \tag{4.19}\]
_at \((m,z)\in M\times\mathbb{C}\) for two constants \(l\in(1,2]\) and \(\Lambda\), then_
\[\sqrt{-1}\Lambda_{\omega_{g_{M}\times g_{0}}}F_{H}+\frac{1}{2}\varphi\otimes \varphi^{*H}=\frac{\lambda}{2} \tag{4.20}\]
_admits a solution._
|
2307.14592 | Short-range correlations and momentum distributions in mirror nuclei 3H and 3He | Motivated by recent high-energy electron and $\rm ^3H$ and $\rm ^3He$ nuclei scattering experiment in Jefferson Lab (Nature 609, 41 (2022)), the short-range correlations (SRCs) between nucleon pairs for 3-nucleon systems are microscopically studied using realistic $NN$ 2-body interaction and two-Gaussian type $NNN$ 3-body interaction. The wave functions of both $\rm ^3H$ and $\rm ^3He$ are obtained by solving 3-body Schr\"{o}dinger equations using Gaussian expansion method (GEM). The differences of one-nucleon and nucleon-nucleon momentum distributions between $\rm ^3H$ and $\rm ^3He$ are analyzed in detail. The results show that the percentages of $pn$-SRC pairs are significantly enhanced as compared with those of $nn(pp)$-SRC ones in $\rm ^3H$ and $\rm ^3He$ nuclei, which is consistent with the experimental findings. | Qi Meng, Ziyang Lu, Chang Xu | 2023-07-27T02:35:35Z | http://arxiv.org/abs/2307.14592v1 | # Short-range correlations and momentum distributions in mirror nuclei \({}^{3}\)H and \({}^{3}\)He
###### Abstract
Motivated by recent high-energy electron and \({}^{3}\)H and \({}^{3}\)He nuclei scattering experiment in Jefferson Lab (Nature 609, 41 (2022)), the short-range correlations (SRCs) between nucleon pairs for 3-nucleon systems are microscopically studied using realistic \(NN\) 2-body interaction and two-Gaussian type \(NNN\) 3-body interaction. The wave functions of both \({}^{3}\)H and \({}^{3}\)He are obtained by solving 3-body Schrodinger equations using Gaussian expansion method (GEM). The differences of one-nucleon and nucleon-nucleon momentum distributions between \({}^{3}\)H and \({}^{3}\)He are analyzed in detail. The results show that the percentages of \(pn\)-SRC pairs are significantly enhanced as compared with those of \(nn(pp)\)-SRC ones in \({}^{3}\)H and \({}^{3}\)He nuclei, which is consistent with the experimental findings.
pacs: 21.30.-x, 21.60.-n, 21.45.+v
## I Introduction
Short-range correlations (SRCs) between pairs of nucleons are an important aspect of nuclear physics; they are considered to be generated by the strong, short-distance part of nucleon-nucleon (\(NN\)) interactions. SRCs are important for a comprehensive understanding not only of the essential features of nuclear dynamics but also of the nuclear forces at short distance and how they are generated from the strong interaction between quarks in nucleons [1]. A nucleon-nucleon SRC pair is considered to have large relative momentum and small total momentum, leading to a high-momentum tail in the one-nucleon and nucleon-nucleon momentum distributions. The study of SRCs and their high-momentum features will deepen our understanding of the properties of finite nuclei at normal density and of nuclear matter at supra-saturation density, which probably has important implications for determining the internal structure and evolution of stellar objects such as neutron stars.
Sophisticated theoretical approaches using modern realistic interactions [2; 3; 4; 5; 6] can be applied to study the correlated many-body wave functions and SRCs, such as correlated basis function theory [7; 8; 9; 10], the self-consistent Green's function method [11; 12], approximate schemes like cluster expansions [13; 14; 15; 16], tensor-optimized high-momentum antisymmetrized molecular dynamics [17], the generalized nuclear contact formalism [18] and variational Monte Carlo calculations [19; 20; 21; 22; 23]. In general, the high-momentum tail of \(pn\)-SRCs in light nuclei has been demonstrated to be a universal feature with these state-of-the-art approaches. Various experimental efforts have also been devoted to the investigation of SRCs with the aim of probing the short-range properties of the nuclear force [24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34]. Thanks to high-energy and large momentum transfer electron and proton scattering experiments, it has become possible to resolve the structure and dynamics of individual nucleons and nucleon pairs with precise measurements of small cross sections. Experimental data have shown that about 20% of the nucleons in nuclei have momentum larger than the Fermi momentum \(k_{F}\) in saturated nuclear matter [1; 25; 35; 36; 37; 38]. Follow-up experiments probing the isospin composition of nucleon-nucleon SRCs were successfully conducted in both balanced and imbalanced nuclei, indicating that the \(pn\)-SRCs are far more dominant than the \(pp\) and \(nn\) ones.
Recently, an experiment conducted at Jefferson Lab accurately measured the \(pn\)-SRC pairs and \(pp\)-SRC ones in 3-nucleon systems, using high-energy electron scattering off \({}^{3}\)H and \({}^{3}\)He nuclei [39]. This experiment took advantage of the mirror properties of \({}^{3}\)H and \({}^{3}\)He and avoided the direct measurement of high-momentum nucleons in the final state [39], which improved the experimental accuracy and greatly reduced the uncertainties. Very interestingly, the experimental data show that the ratio of \(pn\)-SRCs to \(pp\)-SRCs over the pair-counting prediction \(P_{np/pp}=NZ/(Z(Z-1)/2)\) for \(A=3\) nuclei is \(2.17^{+0.25}_{-0.20}\), which is much smaller than that in heavy nuclei.
Motivated by this unexpected experimental result, we investigate the \(pn\)-SRCs and \(pp(nn)\)-SRCs in mirror nuclei \({}^{3}\)H and \({}^{3}\)He. We obtain both one-nucleon and nucleon-nucleon momentum distributions from an _ab initio_ calculation of solving the 3-body Schrodinger equation with a realistic \(NN\) 2-body interaction, _i.e._ the Argonne \(v^{\prime}_{8}\) (AV8') interaction, and a two-Gaussian type \(NNN\) 3-body interaction. The numerical method we apply to obtain the accurate correlated wave functions is the Gaussian expansion method (GEM) [41], which has been successfully used in both nuclear physics and hadron physics [42; 43; 44]. For instance, we have applied the GEM to both bound and resonant state problems of tetraquarks and pentaquarks and satisfactory results have been obtained [45; 46; 47]. Realistic momentum distributions are obtained from the Fourier transform of the correlated wave functions, and the differences between SRCs in \({}^{3}\)H and \({}^{3}\)He are analyzed in detail in the present work. The comparison of SRCs in such imbalanced mirror nuclei with fully microscopic calculations may shed light on the equation of state (EoS) of asymmetric nuclear matter and the density dependence of nuclear symmetry energy [48; 49; 50; 51; 52; 53; 54].
## II Methodology
The Hamiltonian for a 3-nucleon system is given by
\[H=\sum_{i=1}^{3}\left(m_{i}+\frac{p_{i}^{2}}{2m_{i}}\right)-T_{G}+\sum_{i<j=1 }^{3}V_{ij}+V^{NNN}, \tag{1}\]
where \(m_{i}\) and \(\mathbf{p}_{i}\) are the mass and momentum of the \(i\)-th nucleon, respectively. \(T_{G}\) is the kinetic energy of the center-of-mass (c.o.m.) motion of the 3-nucleon system. The complete 2-body interaction for a given \(NN\) pair \(ij\), \(V_{ij}\), is composed of the strong interaction \(V_{ij}^{NN}\) and electromagnetic interaction \(V_{ij}^{EM}\),
\[V_{ij}=V_{ij}^{NN}+V_{ij}^{EM}. \tag{2}\]
For the \(NN\) strong interaction, we employ the Argonne \(v_{8}^{\prime}\) (AV8') interaction [4]. For the electromagnetic interaction, we consider the Coulomb force between protons. The 3-body interaction \(V^{NNN}\) we apply is a two-Gaussian type \(NNN\) 3-body interaction taken from Ref. [40], which is optimized by fitting the bound states of \({}^{3}\)H, \({}^{3}\)He and \({}^{4}\)He. Its functional form is given by
\[V^{NNN}=\sum_{n=1}^{2}V_{n}^{(3)}e^{-\mu_{n}(r_{12}^{2}+r_{23}^{2}+r_{31}^{2})}. \tag{3}\]
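As a minimal illustration of this term, the sketch below evaluates the two-Gaussian \(NNN\) potential at given nucleon positions; the strengths \(V_{n}^{(3)}\) and ranges \(\mu_{n}\) are placeholders, not the fitted parameters of Ref. [40]:

```python
import numpy as np

# Evaluate the two-Gaussian NNN 3-body interaction (3) at positions r1, r2, r3.
V3 = np.array([-100.0, 50.0])   # MeV, hypothetical strengths V_n^(3)
mu = np.array([0.1, 0.5])       # fm^-2, hypothetical ranges mu_n

def v_nnn(r1, r2, r3):
    s = np.sum((r1 - r2)**2) + np.sum((r2 - r3)**2) + np.sum((r3 - r1)**2)
    return float(np.sum(V3 * np.exp(-mu * s)))

# Example: an equilateral triangle with side 1 fm.
r1 = np.array([0.0, 0.0, 0.0])
r2 = np.array([1.0, 0.0, 0.0])
r3 = np.array([0.5, np.sqrt(3)/2, 0.0])
print(v_nnn(r1, r2, r3))
```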
In the Gaussian expansion method, the variational wave function of a 3-nucleon system in coordinate space, \(\Psi_{TM_{T}JM}\) with isospin \(T\), its \(z\)-component \(M_{T}\), total angular momentum \(J\) and its \(z\)-component \(M\), is given by
\[\Psi_{TM_{T}JM}=\sum_{C=1}^{3}\sum_{\alpha}A_{\alpha}\Big{[}[ \eta_{\frac{1}{2}}\eta_{\frac{1}{2}}]\eta_{\frac{1}{2}}\Big{]}_{TM_{T}}\times\] \[\Big{[}[[\chi_{\frac{1}{2}}\chi_{\frac{1}{2}}]\chi_{\frac{1}{2}}]_{S}[\phi_{nl}(\mathbf{r}^{(C)})\psi_{NL}(\mathbf{R}^{(C)})]_{I}\Big{]}_{JM}, \tag{4}\]
where \(\eta_{\frac{1}{2}}\), \(\chi_{\frac{1}{2}}\) are the isospin and spin wave functions of a single nucleon, respectively. \(\phi\) and \(\psi\) denote spatial wave functions with principal quantum numbers \(n\), \(N\) and orbital angular momenta \(l\), \(L\), respectively.
The label \((C)\) specifies a set of Jacobi coordinates shown in Fig. 1. \(A_{\alpha}\) specifies the expansion coefficients which are determined by matrix diagonalization, where the label \(\alpha\) includes all quantum numbers for the expansion, \(\alpha\equiv\{t,T,s,S,n,l,N,L,I\}\). \((J,T,M_{T})\) are \((\frac{1}{2},\frac{1}{2},-\frac{1}{2})\) and \((\frac{1}{2},\frac{1}{2},\frac{1}{2})\) for \({}^{3}\)H and \({}^{3}\)He, respectively.
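To illustrate the GEM machinery on a much smaller problem, the sketch below (assuming numpy/scipy) solves a toy S-wave two-body bound state with Gaussian basis functions whose range parameters form a geometric progression, using analytic Gaussian matrix elements and a generalized eigenvalue problem; the Gaussian potential is hypothetical and this is not the 3-body calculation of the present work:

```python
import numpy as np
from scipy.linalg import eigh

# Toy GEM: basis exp(-nu_i r^2), l = 0, ranges in geometric progression.
hbar2_2mu = 41.47              # MeV fm^2, ~ hbar^2/(2 mu) for mu = m_N/2
V0, kappa = -60.0, 0.4         # hypothetical potential V0 * exp(-kappa r^2)

nmax, r_min, r_max = 10, 0.1, 15.0
ranges = r_min * (r_max / r_min) ** (np.arange(nmax) / (nmax - 1))
nu = 1.0 / ranges**2

ni, nj = np.meshgrid(nu, nu, indexing='ij')
N = np.pi**1.5 / (ni + nj)**1.5                # overlap (Gram) matrix
T = hbar2_2mu * 6.0 * ni * nj / (ni + nj) * N  # kinetic energy matrix
V = V0 * np.pi**1.5 / (ni + nj + kappa)**1.5   # potential energy matrix
E = eigh(T + V, N, eigvals_only=True)          # generalized eigenproblem
print("lowest eigenvalue (MeV):", E[0])
```

The analytic matrix elements above follow from standard Gaussian integrals; in the full 3-body problem the same diagonalization determines the coefficients \(A_{\alpha}\) in (4).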
The one-body and two-body density distributions are defined as
\[\rho_{N}(r)=\frac{1}{M_{N}}\Big{\langle}\Psi\Big{|}\sum_{i=1}^{3}P_{N}^{(i)} \delta(r-|\mathbf{r}_{i}-\mathbf{R}_{cm}|)\Big{|}\Psi\Big{\rangle}, \tag{5}\]
\[\rho_{NN}(r)=\frac{1}{M_{NN}}\Big{\langle}\Psi\Big{|}\sum_{i<j}P_{NN}^{(ij)} \delta(r-|\mathbf{r}_{i}-\mathbf{r}_{j}|)\Big{|}\Psi\Big{\rangle}, \tag{6}\]
respectively. \(P_{N}^{(i)}=\frac{1}{2}(1\pm\tau_{z,i})\) and \(P_{NN}^{(ij)}=\frac{1}{4}(1\pm\tau_{z,i})(1\pm\tau_{z,j})\) are the one-body and two-body isospin projection operators, respectively, and the subscript \(N\) labels proton \(p\) or neutron \(n\). \(M_{N}\) and \(M_{NN}\) stand for the numbers of corresponding nucleons and nucleon-nucleon pairs. The normalizations are \(4\pi\int\rho_{N}(r)r^{2}dr=1\) and \(4\pi\int\rho_{NN}(r)r^{2}dr=1\).
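The algebra of these projectors is easy to check numerically (in the convention \(\tau_{z}|p\rangle=+|p\rangle\)):

```python
import numpy as np

# One-body isospin projectors P_p = (1+tau_z)/2 and P_n = (1-tau_z)/2.
tau_z = np.diag([1.0, -1.0])
I2 = np.eye(2)
P_p, P_n = (I2 + tau_z)/2, (I2 - tau_z)/2
assert np.allclose(P_p + P_n, I2)                 # completeness
assert np.allclose(P_p @ P_p, P_p)                # idempotent
assert np.allclose(P_p @ P_n, np.zeros((2, 2)))   # mutually orthogonal
```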
The basis wave functions of a 3-nucleon system in momentum space are obtained by the Fourier transform of the Gaussian basis functions in coordinate space, \(\varphi(\mathbf{k})=(\frac{1}{2\pi})^{3/2}\int\phi(\mathbf{r})e^{-i\mathbf{k}\cdot\mathbf{r }}d\mathbf{r}\) and \(\varphi^{\prime}(\mathbf{K})=(\frac{1}{2\pi})^{3/2}\int\psi(\mathbf{R})e^{-i\mathbf{K}\cdot \mathbf{R}}d\mathbf{R}\). Then, using the 3-body Jacobi coordinates in momentum space, \(\mathbf{k}_{1}=(\mathbf{p}_{3}-\mathbf{p}_{2})/2\), \(\mathbf{k}_{2}=(\mathbf{p}_{1}-\mathbf{p}_{3})/2\), \(\mathbf{k}_{3}=(\mathbf{p}_{2}-\mathbf{p}_{1})/2\), \(\mathbf{K}_{1}=2(\mathbf{p}_{1}-\frac{1}{2}\mathbf{p}_{2}-\frac{1}{2}\mathbf{p}_{3})/3\), \(\mathbf{K}_{2}=2(\mathbf{p}_{2}-\frac{1}{2}\mathbf{p}_{3}-\frac{1}{2}\mathbf{p}_{1})/3\), and \(\mathbf{K}_{3}=2(\mathbf{p}_{3}-\frac{1}{2}\mathbf{p}_{1}-\frac{1}{2}\mathbf{p}_{2})/3\), we obtain the total wave function \(\Phi\) of the 3-nucleon system in momentum space,
\[\Phi_{TM_{T}JM}=\sum_{C=1}^{3}\sum_{\alpha}A_{\alpha}\Big{[}[ \eta_{\frac{1}{2}}\eta_{\frac{1}{2}}]\eta_{\frac{1}{2}}\Big{]}_{TM_{T}}\times\] \[\Big{[}[[\chi_{\frac{1}{2}}\chi_{\frac{1}{2}}]\chi_{\frac{1}{2}}]_{S}[\varphi_{nl}(\mathbf{k}^{(C)})\varphi^{\prime}_{NL}(\mathbf{K}^{(C)})]_{I}\Big{]}_{JM}, \tag{7}\]
where \(\mathbf{k}\) and \(\mathbf{K}\) stand for the relative momentum between two nucleons and relative momentum between \(NN\) pair and the third nucleon, respectively. The c.o.m. momentum of \(NN\) pair \(\mathbf{Q}=-\mathbf{K}\) when we omit the c.o.m. motion of the 3-nucleon system.
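The relation \(\mathbf{Q}=-\mathbf{K}\) can be checked directly on random momenta with vanishing total momentum:

```python
import numpy as np

# With p1 + p2 + p3 = 0, the c.o.m. momentum of pair (1,2) equals -K_3.
rng = np.random.default_rng(0)
p1, p2 = rng.normal(size=3), rng.normal(size=3)
p3 = -(p1 + p2)                        # remove the c.o.m. motion

k3 = (p2 - p1) / 2                     # relative momentum within pair (1,2)
K3 = 2 * (p3 - (p1 + p2) / 2) / 3      # pair-spectator relative momentum
Q3 = p1 + p2                           # c.o.m. momentum of pair (1,2)
assert np.allclose(Q3, -K3)
```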
## III Results
We first calculate the binding energies (B.E.) and the root-mean-square (R.M.S.) radii for \({}^{3}\)H and \({}^{3}\)He, respectively. The R.M.S. radius \(R\) is defined as \(R=(\int r^{2}\rho(r)r^{2}dr/\int\rho(r)r^{2}dr)^{1/2}\). In the diagonalization of the 3-body Hamiltonian, we use basis functions with \(l\leq 2\) and \(L\leq 2\), which are enough to make the eigenvalues converge quickly. The comparison between the calculated results and the experimental data is given in Table 1. It is clearly shown that the binding energies and the proton R.M.S. radii for the bound states of \({}^{3}\)H and \({}^{3}\)He nuclei are both well reproduced. We also calculate the expectation values of the kinetic energy, each part of the potential energy, and the potential energies in different isospin \(t\) and spin \(s\) channels. The results, which are listed in Table 2, show that the central potential exists in all \((t,s)\) channels but mainly contributes as attraction in the \((0,1)\) and \((1,0)\) channels. The spin-orbit potential and tensor potential only exist in \(s=1\) channels. It should be emphasized that the tensor potential in the \((t,s)=(0,1)\) channel is important and contributes \(\sim 55\%\) of the total attraction. The repulsive Coulomb potential is considered only between \(pp\) in the \(t=1\) channels for \({}^{3}\)He. The 3-body interaction serves as an attractive potential for both \({}^{3}\)H and \({}^{3}\)He and its contribution is relatively small.
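For reference, the R.M.S. radius formula quoted above is straightforward to evaluate numerically; the sketch below applies it to a toy Gaussian density (for which \(R=\sqrt{3/2}\,b\) analytically), not to the actual GEM densities:

```python
import numpy as np

# R = (int r^2 rho r^2 dr / int rho r^2 dr)^{1/2} on rho(r) = exp(-(r/b)^2).
b = 1.3                                  # fm, hypothetical width
r = np.linspace(0.0, 20.0, 200001)       # uniform grid; dr cancels in the ratio
rho = np.exp(-(r / b)**2)
R = np.sqrt(np.sum(r**4 * rho) / np.sum(r**2 * rho))
print(R, np.sqrt(1.5) * b)               # the two values agree
```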
The one-body momentum distribution is defined as
\[\rho_{N}(k)=\frac{1}{M_{N}}\Big{\langle}\Phi\Big{|}\sum_{i=1}^{3}P_{N}^{(i)} \delta(k-|\mathbf{k}_{i}|)\Big{|}\Phi\Big{\rangle}, \tag{8}\]
Figure 1: 3-body Jacobi coordinates of 3-nucleon system in coordinate space.
with the normalization condition \(4\pi\int\rho_{N}(k)k^{2}dk=1\); the c.o.m. motion of the 3-nucleon system is omitted. In Fig. 2\((a)\) and Fig. 2\((b)\), we display the calculated one-body proton and neutron momentum distributions for \({}^{3}\)H and \({}^{3}\)He, respectively. One can see that the proton and neutron momentum distributions both reach their maximum values at \(k=0\) fm\({}^{-1}\) and fall off rapidly in the range \(0<k<2.0\) fm\({}^{-1}\). As expected, a high-momentum tail appears for \(k>2\) fm\({}^{-1}\), which is attributed to the effect of SRCs between pairs of nucleons. However, differences are seen between the proton and neutron distributions for \({}^{3}\)H and \({}^{3}\)He, namely, the minority nucleon (proton for \({}^{3}\)H and neutron for \({}^{3}\)He) has a larger high-momentum tail. This is considered to be a natural consequence of the short-range tensor interaction. Taking \({}^{3}\)He as an example, the \(pn\)-SRC generated from the tensor interaction populates one proton and one neutron in high-momentum states while the remaining proton (majority nucleon) stays in a relatively low momentum state. Thus the neutron (minority nucleon) has a larger high-momentum tail and larger kinetic energy compared with the proton. This feature also manifests itself in heavy nuclei such as \({}^{27}\)Al, \({}^{56}\)Fe and \({}^{208}\)Pb. The average proton kinetic energy in these nuclei is found to be larger than that of the neutron in a \(pn\)-dominance toy model [31].
Fig. 3 shows the ratio of the proton momentum distribution to the neutron one (\(\rho_{p}/\rho_{n}\)) for \({}^{3}\)H (red curve) and \({}^{3}\)He (blue curve). The two curves are roughly symmetric about the horizontal line \(\rho_{p}(k)/\rho_{n}(k)=1\). The behavior of the ratio is determined mainly by the competition between the tensor interaction and the repulsive hard-core. Taking the red curve for \({}^{3}\)H as an example, the ratio of the minority nucleon to the majority nucleon keeps increasing in the range \(0<k<2.0\) fm\({}^{-1}\). This is expected because the tensor interaction plays a more and more important role with increasing \(k\). The decrease of the ratio beyond \(k=2.0\) fm\({}^{-1}\) occurs because the short-range repulsive hard-core starts to contribute significantly and reduces the dominance of the tensor interaction. Note that the short-range repulsive hard-core exists in all \(NN\) channels including the \(nn\) and \(pp\) channels. For very large \(k\), the ratio \(\rho_{p}/\rho_{n}\) is expected to become smaller. At \(k=5.0\) fm\({}^{-1}\), this ratio reduces to approximately \(\rho_{p}/\rho_{n}=1.25\), indicating that the tensor interaction still contributes but is less dominant. The blue curve for \({}^{3}\)He shows similar behavior except that it represents the ratio of the majority nucleon to the minority one. We do not repeat the discussion here.
The two-body momentum distribution \(\rho_{NN}\) is a function of the \(NN\) relative momentum \(k\) after integrating over all values of the c.o.m. momentum \(\mathbf{Q}\) of the \(NN\) pairs,
\[\rho_{NN}(k)=\frac{1}{M_{NN}}\Big{\langle}\Phi\Big{|}\sum_{i<j}^{M_{NN}}P_{NN }^{(ij)}\delta(k-|\mathbf{k}_{i}-\mathbf{k}_{j}|)\Big{|}\Phi\Big{\rangle}, \tag{9}\]
with the normalization \(4\pi\int\rho_{NN}(k)k^{2}dk=1\). We display the calculated two-body momentum distributions of the different \(NN\) pairs for \({}^{3}\)H and \({}^{3}\)He in Fig. 4\((a)\) and Fig. 4\((b)\), respectively, with \(k\) ranging from 0 to 5.0 fm\({}^{-1}\). In general, the behavior of the two-body momentum distributions is similar to that of the one-body ones. When \(k>2\) fm\({}^{-1}\), the \(pn\) pair in \({}^{3}\)H shows a large high-momentum tail, while that of the \(nn\) pair is much smaller. Similar to the case of \({}^{3}\)H, the high-momentum tail of \({}^{3}\)He appears in the \(pn\) pairs rather than in the \(pp\) pair.
The ratios of \(pn\) to \(pp(nn)\) pairs as functions of \(k\) are shown in Fig. 5 (red curve for \({}^{3}\)H and blue curve for \({}^{3}\)He), which can be approximately divided into three regions.
\begin{table}
\begin{tabular}{c c c c c} \hline \hline
 & & Cal.(AV8\({}^{\prime}\)) & Cal.(AV8\({}^{\prime}\)+3NI) & Exp. \\ \hline
\({}^{3}\)H & B.E.(MeV) & -7.77 & -8.44 & -8.48 \\
 & \(R_{p}\)(fm) & 1.637 & 1.597 & 1.59 \\
 & \(R_{n}\)(fm) & 1.790 & 1.740 & \\
 & \(R_{pn}\)(fm) & 2.922 & 2.846 & \\
 & \(R_{nn}\)(fm) & 3.189 & 3.094 & \\ \hline
\({}^{3}\)He & B.E.(MeV) & -7.11 & -7.76 & -7.72 \\
 & \(R_{p}\)(fm) & 1.824 & 1.770 & 1.76 \\
 & \(R_{n}\)(fm) & 1.660 & 1.617 & \\
 & \(R_{pn}\)(fm) & 2.967 & 2.886 & \\
 & \(R_{pp}\)(fm) & 3.256 & 3.152 & \\ \hline \hline
\end{tabular}
\end{table}
Table 1: The calculated \({}^{3}\)H and \({}^{3}\)He binding energies (B.E.) and root-mean-square (R.M.S.) radii using the AV8\({}^{\prime}\) interaction (Cal.(AV8\({}^{\prime}\))) and the AV8\({}^{\prime}\) plus \(NNN\) 3-body interaction (Cal.(AV8\({}^{\prime}\)+3NI)), compared with the experimental values (Exp.).
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline \hline
 & \(t\) & \(s\) & \(K\) & \(\langle V^{Cen}\rangle\) & \(\langle V^{LS}\rangle\) & \(\langle V^{Ten}\rangle\) & \(\langle V^{Cou}\rangle\) & \(\langle V^{NNN}\rangle\) \\ \hline
\({}^{3}\)H & 0 & 0 & & 0.02 & 0 & 0 & 0 & \\
 & 0 & 1 & & -8.72 & -1.97 & -31.50 & 0 & \\
 & 1 & 0 & & -14.74 & 0 & 0 & 0 & \\
 & 1 & 1 & & 0.19 & -0.10 & -0.24 & 0 & \\
 & \multicolumn{2}{c}{sum} & 49.54 & -23.25 & -2.07 & -31.74 & 0 & -0.92 \\ \hline
\({}^{3}\)He & 0 & 0 & & 0.02 & 0 & 0 & 0 & \\
 & 0 & 1 & & -8.63 & -1.95 & -31.16 & 0 & \\
 & 1 & 0 & & -14.36 & 0 & 0 & 0.61 & \\
 & 1 & 1 & & 0.19 & -0.10 & -0.23 & 0.06 & \\
 & \multicolumn{2}{c}{sum} & 48.69 & -22.78 & -2.05 & -31.39 & 0.67 & -0.91 \\ \hline \hline
\end{tabular}
\end{table}
Table 2: The expectation values of the kinetic energy \(K\) and of the central \(\langle V^{Cen}\rangle\), spin-orbit \(\langle V^{LS}\rangle\), tensor \(\langle V^{Ten}\rangle\), Coulomb \(\langle V^{Cou}\rangle\) and \(NNN\) 3-body \(\langle V^{NNN}\rangle\) potential energies in different isospin \(t\) and spin \(s\) channels for \({}^{3}\)H and \({}^{3}\)He (unit: MeV).
The first region (\(0<k<1.5\) fm\({}^{-1}\)) is considered to be dominated by the long-range one-pion-exchange potential. In the intermediate region (\(1.5\) fm\({}^{-1}<k<3.0\) fm\({}^{-1}\)) the tensor-force-induced SRC pairs dominate and enhance the \(pn\) pairs, while in the third region (\(k>3.0\) fm\({}^{-1}\)) the repulsive hard core, which acts in all \(NN\) channels, takes over and the ratio decreases. Note also that the calculated two-body distributions are integrated over all pair c.o.m. momenta \(\mathbf{Q}>0\); one would expect the ratio of \(pn\)-SRC pairs to \(nn(pp)\) ones to be smaller and very likely to be more consistent with our theoretical prediction.
## IV Summary
We have performed microscopic calculations of the one- and two-nucleon momentum distributions and the \(pn/nn(pp)\) SRC ratios for the mirror nuclei \({}^{3}\)H and \({}^{3}\)He. We show that the \(pn\)-SRCs are enhanced compared with the \(nn(pp)\)-SRCs, which is consistent with the recent experimental data. We also show that the tensor-force-induced SRC competes strongly with the hard-core-induced SRC beyond the Fermi momentum. The tensor SRC pairs dominate in the intermediate region \(1.5\) fm\({}^{-1}<k<3.0\) fm\({}^{-1}\), while the hard-core SRC pairs dominate in the higher-momentum region \(k>3.0\) fm\({}^{-1}\). The present microscopic GEM calculations can possibly be extended to heavier systems, in which the percentage of \(pn\)-SRCs is expected to be further enhanced. A comparison of 3-nucleon systems and heavier ones should be helpful to better understand the short-distance part of the nuclear force and its isospin dependence.
###### Acknowledgements.
The authors would like to thank Zhihong Ye, Mengjiao Lyu, and Emiko Hiyama for the helpful discussions. The work is supported by the National Natural Science Foundation of China (Grant No. 12275129) and the Fundamental Research Funds for the Central Universities (Grant No. 020414380209).
|
2309.02884 | Aligning Large Language Models for Clinical Tasks | Large Language Models (LLMs) have demonstrated remarkable adaptability,
showcasing their capacity to excel in tasks for which they were not explicitly
trained. However, despite their impressive natural language processing (NLP)
capabilities, effective alignment of LLMs remains a crucial challenge when
deploying them for specific clinical applications. The ability to generate
responses with factually accurate content and to engage in non-trivial
reasoning steps are crucial for the LLMs to be eligible for applications in
clinical medicine. Employing a combination of techniques including
instruction-tuning and in-prompt strategies like few-shot and chain-of-thought
prompting has significantly enhanced the performance of LLMs. Our proposed
alignment strategy for medical question-answering, known as
'expand-guess-refine', offers a parameter and data-efficient solution. A
preliminary analysis of this method demonstrated outstanding performance,
achieving a score of 70.63% on a subset of questions sourced from the USMLE
dataset. | Supun Manathunga, Isuru Hettigoda | 2023-09-06T10:20:06Z | http://arxiv.org/abs/2309.02884v2 | # Aligning Large Language Models for Clinical Tasks
###### Abstract
Large Language Models (LLMs) have demonstrated remarkable adaptability, showcasing their capacity to excel in tasks for which they were not explicitly trained. However, despite their impressive natural language processing (NLP) capabilities, effective alignment of LLMs remains a crucial challenge when deploying them for specific clinical applications. The ability to generate responses with factually accurate content and to engage in non-trivial reasoning steps are crucial for the LLMs to be eligible for applications in clinical medicine. Employing a combination of techniques including instruction-tuning and in-prompt strategies like few-shot and chain-of-thought prompting has significantly enhanced the performance of LLMs. Our proposed alignment strategy for medical question-answering, known as 'expand-guess-refine', offers a parameter and data-efficient solution. A preliminary analysis of this method demonstrated outstanding performance, achieving a score of 70.63% on a subset of questions sourced from the USMLE dataset.
Large Language Models · Clinical Applications · Alignment Strategy · Medical Question-Answering
## 1 Introduction
Until the recent past, Artificial Intelligence (AI) research mainly focused on specific tasks such as mastering the games of chess and Go [1, 2]. However, the advancement of deep learning techniques, particularly transformer models, has revolutionized the way humans interact with AI models, especially in the realm of Natural Language Processing (NLP) [3]. The transformer architecture has laid the groundwork for Large Language Models (LLMs), which exhibit the remarkable capacity to perform tasks they were not explicitly trained for, a phenomenon observed as these models are scaled to substantial capacities [4]. The development of these expansive LLMs may be bringing us closer to the threshold of realizing Artificial General Intelligence [5, 6].
LLMs have been trained on large text corpora containing medical knowledge, and this knowledge becomes ingrained in their neural weights [7]. Capitalizing on their task-agnostic nature, LLMs find utility across a spectrum of clinical medicine tasks, ranging from information retrieval and summarization to decision-making and diagnostics [4]. However, given the sensitive nature of clinical medicine, it is imperative for these models to aptly grasp the nuances of tasks, extract pertinent information, and engage in reasoned analysis with a certain level of discernment. Mechanisms have to be devised to mitigate hallucination, guard against harmful content, and ensure the model's alignment with medical ethics [8, 9, 10].
Therefore, despite the impressive NLP capabilities exhibited by LLMs, they need to be aligned before being deployed for specific clinical tasks [11]. Different alignment techniques try to achieve different goals. Instruction finetuning, or instruction-tuning, trains the model to follow human instructions better, making the model outputs more truthful, less toxic, and structured in a specific way [12]. Finetuning has been one of the most utilized methods; it involves adjusting the weights of the pre-trained model via a supervised dataset, typically with thousands to hundreds of thousands of examples [13]. The disadvantages of this approach are the need for a fresh dataset tailored to each new task and the compute-heavy nature of the process [13].
An alternative to this is few-shot learning, wherein the model is expected to perform a specific task with only a handful of demonstrations prepended to the input context [14]. Chain-of-thought (CoT) prompting is another technique, which improves the model's reasoning prowess by inducing a step-by-step thinking process akin to human cognition [15]. In-context prompting strategies like few-shot prompting and CoT have led to substantial enhancements in reasoning capabilities, obviating the need for task-specific datasets. However, the performance of these approaches might not match that of finetuned models [13, 15].
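For concreteness, the following minimal sketch shows how a few-shot CoT prompt is typically assembled; the exemplar, its reasoning chain, and the trigger phrase are illustrative placeholders, not the prompts used in any of the cited studies.

```python
# Minimal sketch of few-shot chain-of-thought prompt assembly. The exemplar
# below (question, reasoning, answer) is an invented placeholder.
EXEMPLARS = [
    {
        "question": "A patient has polyuria, polydipsia and a fasting glucose "
                    "of 190 mg/dL. What is the most likely diagnosis?",
        "reasoning": "Polyuria and polydipsia suggest hyperglycemia, and a "
                     "fasting glucose above 126 mg/dL supports the diagnosis.",
        "answer": "Diabetes mellitus",
    },
]

def build_cot_prompt(question: str) -> str:
    """Prepend worked exemplars, then elicit step-by-step reasoning."""
    parts = []
    for ex in EXEMPLARS:
        parts.append(f"Question: {ex['question']}\n"
                     f"Let's think step by step. {ex['reasoning']}\n"
                     f"Answer: {ex['answer']}\n")
    parts.append(f"Question: {question}\nLet's think step by step.")
    return "\n".join(parts)
```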
Studies have explored the effectiveness of training smaller-scale LLMs exclusively on scientific and biomedical corpora [16, 17, 18, 19, 20, 21, 22]. Given the multifaceted nature of diverse clinical tasks, it is reasonable to anticipate that larger models with heightened reasoning capacities would outperform smaller counterparts, particularly when complemented by refined alignment methodologies, as opposed to smaller models trained on meticulously curated datasets [7, 8, 11]. Yet, the recently released PubMedGPT 2.7B model challenged this notion by achieving a score of 50.3% on the USMLE dataset [23].
## 2 LLMs as medical question-answering models
The USMLE dataset, which is a subset extracted from the larger MedQA dataset, comprises multiple-choice questions sourced from the medical board exams in the United States [24]. Typically, these questions demand multi-hop reasoning that traverses a spectrum of medical knowledge. They are commonly employed alongside other datasets to evaluate and benchmark the performance of Large Language Models (LLMs) in the context of medical question answering [25].
Several studies have investigated the utility of large-scale LLMs in medical question answering [7, 8, 11]. Lievin et al. found that the code-finetuned code-davinci-002 175B-parameter GPT-3.5 series model scored 53.1% on the USMLE dataset when combined with retrieval augmentation and multiple-prompting [8, 24]. They used a BM25 retriever built from Wikipedia articles for grounding [26]. The study showed that even without retrieval augmentation, zero-shot GPT-3.5 performance was superior to that of finetuned BERT, indicating that GPT-3.5 was able to leverage implicit knowledge and reasoning better in the domain of USMLE question-answering tasks. The researchers therefore inferred that LLMs of the scale of the GPT-3.5 family can efficiently tap into the parametric medical knowledgebase and execute non-trivial reasoning steps. Furthermore, the study demonstrated that when the inference-time compute is sufficiently increased by sampling multiple generations through CoT, such models can virtually surpass the pass mark for the USMLE [8].
With increasing utilization of LLMs, the necessity for comprehensive benchmarks to evaluate them across different domains emerged. Singhal et al. aggregated various existing medical question-answering (QA) datasets with the addition of a new dataset that encompasses commonly searched health questions to curate the dataset MultiMedQA [7, 24, 27, 28, 29]. The authors have also developed an instruction prompt tuning technique which is both data and parameter efficient in aligning LLMs to medical domain tasks. Their model built upon an instruction-tuned variant of the 540 B parameter PaLM model (Flan-PaLM) exhibited exceptional performance on the USMLE dataset with an accuracy of 67.6% [30]. This achievement was made possible through a combination of strategies including few-shot prompting, chain-of-thought and self-consistency [31].
In April 2023, Microsoft and OpenAI published their results of GPT-4 on medical benchmarks [32]. GPT-4 may be the largest language model ever created, even though OpenAI has not disclosed the exact number of parameters in the model; some experts speculate that its parameter count exceeds 1.7 trillion [33]. The GPT-4 base model, without any finetuning, scored 83.76% on the USMLE dataset with zero-shot prompting [32]. This accomplishment potentially underscores efficient knowledge retrieval and advanced reasoning with increasing model size and training data [34].
Google Research and DeepMind announced the model Med-PaLM 2 in May 2023, as an improvement over their preceding iteration, Med-PaLM [11]. They have used medical domain-specific finetuning and a novel prompting strategy termed ensemble refinement. The ensemble refinement technique draws from a foundation of chain-of-thought prompting, self-consistency, and self-refinement mechanisms [35]. This two-step process begins by sampling multiple generations, each accompanied by explanations via few-shot Chain-of-Thought (CoT) prompting. The second step involves combining the initial question with the concatenated multiple generations from the previous stage, and generating a refined answer. The model achieved state-of-the-art performance on the USMLE dataset with an accuracy of 85.4%.
### Mitigating factual inconsistency
Prior research has investigated the dual-role of LLMs as implicit knowledgebases and reasoning models [36, 27, 37]. The parameterized knowledgebase encoded in the model weights cannot be easily updated or expanded. It is difficult to 'prove' the factual accuracy of generated responses because of the implicit nature of the knowledgebase,
which functions as a latent representation of the training data [38, 39]. Factual inconsistencies and the potential for generating inaccurate information present significant obstacles when leveraging the LLM's latent knowledgebase, especially in sensitive domains like medicine [40]. Additional mechanisms need to be implemented to verify the outputs generated by LLMs in such occasions [41].
Integrating a non-parametric memory with the LLM to create a hybrid model offers a promising solution to address some of these challenges. Defining an explicit knowledgebase and augmenting the LLM generation with the retrieved information from the non-parametric memory makes it possible to examine the source of the information of the LLM generated output [38]. Several studies have examined the performances of such Retrieval Augmented Generation (RAG) models when both the retriever and the generator were trained end-to-end [38, 39]. These studies have showcased superior performance on open-domain question-answering benchmarks [37, 42, 43, 44].
### Explainable knowledge and reasoning
It is evident that with progressive upscaling, finetuning and improved prompting strategies, LLMs are acquiring the ability to manipulate clinical knowledge. Nevertheless, it becomes imperative to employ transparent mechanisms for knowledge retrieval and reasoning, aligning with the demands of clinical medicine where the precision of information holds paramount importance [45, 46]. In this context, the utilization of an explicit non-parametric knowledgebase gains significance. Such knowledgebases can be easily updated and are data and parameter efficient since the LLM does not need to be retrained to infuse new knowledge [47].
We observed the vulnerability of LLMs to diversion into incorrect lines of reasoning, potentially due to the undue emphasis placed on irrelevant contextual information, leading to the generation of unrelated or potentially harmful outputs [48]. The common benchmarks that are used to evaluate the performance of LLMs in clinical settings including USMLE mostly comprise relevant information to arrive at the correct answer. However, real-world instances frequently encompass extraneous information that necessitates the model's ability to discern and discount such distractions. It has been shown that when irrelevant information appears in the context, LLMs tend to make mistakes unless specific measures like instructed prompting and introduction of exemplar challenges containing distractors are implemented [48].
In an attempt to overcome these problems, we propose a strategy involving retrieval augmented generation using dense vectors and a prompting strategy termed 'expand-guess-refine'. This strategy operates in a zero-shot manner, without model finetuning, rendering it considerably more computationally efficient than preceding methods.
## 3 Methodology
### Model
The LLM used for the preliminary evaluation was the OpenAI gpt-3.5-turbo 175B-parameter model.
### Vector database
The vector database was compiled by segmenting the text of 18 medical books, which were originally collected as PDF versions and converted into text via optical character recognition. The books were released, together with the MedQA dataset, under a research-use-only license agreement. There were 231,581 paragraphs in total, containing 12,727,711 tokens. Based on a preliminary analysis conducted by Jin et al., human experts could find enough evidence 88% of the time to answer a random set of 100 questions from the development split of the USMLE dataset. However, only 2% of the questions assessed the knowledge of a single knowledge point, while the rest of the questions simulated complex clinical cases [24].
The whole text corpus of all the books was split into chunks of at most 3000 characters with 1000-character overlaps using the recursive text splitter (RTS) method. RTS tries to split the text based on a parameterized list of characters. The splits were subsequently embedded using the 1536-dimensional OpenAI text-embedding-ada-002 embedding model and stored in a FAISS vector database [49].
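A minimal sketch of this indexing pipeline is given below, assuming the 2023-era OpenAI Python SDK and the faiss library; the fixed-size splitter only approximates the RTS method, and the corpus file name is hypothetical.

```python
# Minimal sketch: chunk a text corpus, embed the chunks with
# text-embedding-ada-002, and index them in FAISS for dense retrieval.
import faiss
import numpy as np
import openai

def split_text(text, chunk_size=3000, overlap=1000):
    """Fixed-size splitter with overlap (an approximation of RTS)."""
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap
    return chunks

def embed(texts):
    resp = openai.Embedding.create(model="text-embedding-ada-002", input=texts)
    return np.array([d["embedding"] for d in resp["data"]], dtype="float32")

corpus = open("medical_books.txt").read()   # hypothetical corpus file
chunks = split_text(corpus)
index = faiss.IndexFlatL2(1536)             # ada-002 embedding dimension
index.add(embed(chunks))                    # in practice, embed in batches

# Retrieval: embed a query and fetch the closest chunks.
_, ids = index.search(embed(["exercise-induced muscle pain and dark urine"]), 4)
top_chunks = [chunks[i] for i in ids[0]]
```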
### Expand-guess-refine prompting
This prompting strategy consists of three components: an expansion of the question stem into a direct question, a guess grounded in retrieved passages, and a refinement of the guess against the answer options, as illustrated in the example below.
Original question:
A 21-year-old male comes to the physician's office with complaints of fatigue. The patient is a college intramural athlete and has noticed that during hard workouts with his team, he gets severe muscle pain and weakness to the point where he has to stop and rest. Usually he is able to recover and the cramps subside after rest. After strenuous workouts he has also noticed that his urine is dark. On exam, the patient has no acute symptoms and appears to be sitting comfortably with a completely normal exam. The patient does not have any tenderness to palpation of his muscles. Which of the following enzymes is most likely defective in this patient?:
A: Glucose-6-phosphatase
B: Debranching enzyme
C: Glycogen phosphorylase
D: Acid maltase
Expansion:
A 21-year-old male comes to the physician's office with complaints of fatigue. The patient is a college intramural athlete and has noticed that during hard workouts with his team, he gets severe muscle pain and weakness to the point where he has to stop and rest. Usually he is able to recover and the cramps subside after rest. After strenuous workouts he has also noticed that his urine is dark. On exam, the patient has no acute symptoms and appears to be sitting comfortably with a completely normal exam. The patient does not have any tenderness to palpation of his muscles. The physician suspects a possible enzyme defect in the patient.
Direct question: Which specific enzyme is most likely defective in this patient?
Guess:
The specific enzyme that is most likely defective in this patient is myophosphorylase. The passage states that McArdle disease is characterized by a deficiency of myophosphorylase, which prevents the conversion of glycogen to glucose-6-phosphate. This deficiency leads to severe muscle pain and weakness during strenuous workouts, as well as dark urine after exercise.
Refine:
The most likely enzyme that is defective in this patient is C. Glycogen phosphorylase. The passage states that McArdle disease, which is characterized by severe muscle pain and weakness during strenuous workouts, is caused by a deficiency of myophosphorylase. Myophosphorylase is the muscle isozyme of glycogen phosphorylase, so a deficiency in this enzyme would prevent the conversion of glycogen to glucose-6-phosphate, leading to the symptoms described in the patient.
Options:
{Options}
An example is shown in Figure 3.
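A minimal sketch of the three-stage loop is shown below; it assumes generic `llm(prompt)` and `retrieve(query, k)` helpers (e.g. backed by the FAISS index described above), and the prompt wordings are illustrative rather than the exact prompts used in the study.

```python
# Minimal sketch of the 'expand-guess-refine' loop with retrieval grounding.
def expand_guess_refine(question: str, options: str, llm, retrieve) -> str:
    # 1) Expand: rewrite the vignette and isolate the direct question.
    expansion = llm(
        "Rewrite the following clinical vignette, then state the single "
        f"direct question it asks.\n\n{question}"
    )
    # 2) Guess: answer the direct question grounded in retrieved passages.
    context = "\n".join(retrieve(expansion, k=4))
    guess = llm(
        f"Passages:\n{context}\n\n"
        f"Using only the passages above, answer:\n{expansion}"
    )
    # 3) Refine: reconcile the open-ended guess with the answer options.
    return llm(
        f"Question:\n{question}\n\nPreliminary answer:\n{guess}\n\n"
        f"Options:\n{options}\n\nSelect the best option and justify briefly."
    )
```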
## 4 Preliminary Analysis
A preliminary analysis was conducted on the first 100 questions and 50 random questions from the USMLE development data split. Seven image-based questions were excluded, and the model achieved an accuracy of 70.63%, while the accuracy achieved by ChatGPT was 59.44%. The improvement achieved by the expand-guess-refine model was statistically significant (p-value 0.031) by a two-sample test for equality of proportions.
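The significance test can be reproduced along the following lines; the correct-answer counts 101/143 and 85/143 are inferred from the reported accuracies, and the exact test variant (e.g. whether a continuity correction was applied) is an assumption, so the p-value need not match to the last digit.

```python
# Minimal sketch: two-sample test for equality of proportions on the inferred
# counts (101 of 143 vs. 85 of 143 questions answered correctly).
from statsmodels.stats.proportion import proportions_ztest

stat, pval = proportions_ztest(count=[101, 85], nobs=[143, 143])
print(f"z = {stat:.3f}, two-sided p = {pval:.3f}")
```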
## 5 Discussion
With the recent advancements in LLMs, their integration within healthcare has been evaluated in a diverse set of tasks such as summarizing patients' health records, writing discharge summaries, getting assistance for medical research and evaluating clinical scenarios to formulate differential diagnoses [24], [46], [50], [51]. In our perspective, the eligibility of LLMs for deployment in clinical related tasks hinges on their proficiency in two critical dimensions: their capability to serve as a robust and reliable knowledge repository and their capacity to effectively function as intelligent processors of natural language.
Figure 3: Expand-guess-refine strategy
Medicine is a rapidly evolving field. More than 1.3 million new citations were indexed in the MEDLINE database in the fiscal year 2022 [52]. Therefore, it is important to utilize methods that facilitate the seamless updating of LLM knowledgebases. It is equally important that the knowledge be explainable. The factual accuracy of the generations of the LLM should be verifiable by inspecting the sources of information of the LLM-generated content. The preliminary analysis of this study suggests that augmenting the implicit knowledgebase of the LLM with a high-quality, task-specific non-parametric knowledgebase can significantly improve performance as well.
LLMs create an internal latent representation of the training data, which is pivotal for achieving generalization. Consequently, the outputs generated by LLMs can exhibit a form of "apparent" reasoning [53]. However, it is essential to recognize that this type of reasoning or 'thought process' significantly diverges from human cognitive processes. In evaluating the logic underpinning LLM-generated content, there arises a need to translate the apparent reasoning of LLMs into a step-by-step framework akin to human thinking. This translation is fundamental for gauging the coherence and accuracy of the generated logic. Thus, the dependability of LLM-generated content rests not solely on the capability to produce intelligible knowledge but also on the capacity to generate comprehensible reasoning.
In addition to model finetuning, aggregating results across multiple generations, and in-prompt alignment strategies like few-shot CoT, we have demonstrated in this preliminary analysis that retrieval augmentation and expand-guess-refine prompting can significantly improve LLM performance, with the additional advantages of generating explainable knowledge and reasoning.
Code is available at [https://github.com/ssm123ssm/medGPT](https://github.com/ssm123ssm/medGPT)
|
2302.11234 | Cluster Purging: Efficient Outlier Detection based on Rate-Distortion
Theory | Rate-distortion theory-based outlier detection builds upon the rationale that
a good data compression will encode outliers with unique symbols. Based on this
rationale, we propose Cluster Purging, which is an extension of
clustering-based outlier detection. This extension allows one to assess the
representivity of clusterings, and to find data that are best represented by
individual unique clusters. We propose two efficient algorithms for performing
Cluster Purging, one being parameter-free, while the other algorithm has a
parameter that controls representivity estimations, allowing it to be tuned in
supervised setups. In an experimental evaluation, we show that Cluster Purging
improves upon outliers detected from raw clusterings, and that Cluster Purging
competes strongly against state-of-the-art alternatives. | Maximilian B. Toller, Bernhard C. Geiger, Roman Kern | 2023-02-22T09:32:37Z | http://arxiv.org/abs/2302.11234v1 | # Cluster Purging: Efficient Outlier Detection based on Rate-Distortion Theory
###### Abstract
Rate-distortion theory-based outlier detection builds upon the rationale that a good data compression will encode outliers with unique symbols. Based on this rationale, we propose Cluster Purging, which is an extension of clustering-based outlier detection. This extension allows one to assess the representivity of clusterings, and to find data that are best represented by individual unique clusters. We propose two efficient algorithms for performing Cluster Purging, one being parameter-free, while the other algorithm has a parameter that controls representivity estimations, allowing it to be tuned in supervised setups. In an experimental evaluation, we show that Cluster Purging improves upon outliers detected from raw clusterings, and that Cluster Purging competes strongly against state-of-the-art alternatives.
Outlier Detection, Clustering Algorithms, Rate-Distortion Theory
## 1 Introduction
Nowadays, there exists an abundance of datasets containing individual observations that greatly deviate from the remaining ones, commonly called _outliers_ or _anomalies_. The task of finding such outlying/anomalous observations in datasets is relevant in a multitude of applications and has received much attention in the last decades [1]. Traditionally, outlier detection was mostly approached from a statistical perspective, where data are modeled with distributions, while recently database-oriented methods that focus on efficiency and scalability have become more popular [2]. A major part of contemporary research concentrates on using deep learning to detect outliers in semi-supervised [3, 4] or unsupervised [5, 6, 7] settings. These approaches are well motivated for high-dimensional datasets and have yielded significantly improved outlier detection accuracy on benchmark datasets [8, 9, 10], yet deep learning techniques are also criticized for being data hungry [11] and lacking interpretability [12]. Both of these deficits gravely affect outlier detection since in many research fields large training datasets are not available [4]. Further, outlier detection techniques are commonly used in high-risk applications such as intrusion detection [1], where black-box models should generally be avoided [13].
In contrast, _clustering-based_ outlier detection methods [1] resort to very intuitive concepts of what an outlier might possibly be; for instance observations that have abnormal local density [14]; or observations that do not fit well into any cluster [15, 16, 17]. A trait that these methods have in common is that they detect outliers during clustering, for instance by assigning outliers to a special outlier cluster. While this trait can be advantageous in several settings, it also has the downside that outliers are only detected as a "side-product" of clustering [1]. As a consequence, outliers detected by methods such as [14, 15, 16, 17] are observations that are irregular in the respective clustering, yet not necessarily irregular with respect to the (unclustered) data.
Another type of clustering-based methods infers outliers after the raw data were clustered. For instance, the Cluster-Based Local Outlier Factor (CBLOF) [18] scales distances between observations and cluster centers by cluster sizes, regardless of which clustering technique was used. Hence, CBLOF allows one to choose a clustering method that is well-suited for the data at hand. However, outlier detection techniques such as CBLOF [18, 19, 20] still have the same drawback as the methods mentioned above: They assume that the computed clustering is sufficient for describing outliers in raw data, which can be problematic in scenarios where it is challenging to perform a good clustering, e.g. in high-dimensional data [21].
To address this issue, one may resort to information theory. From an information-theoretic perspective, a clustering is a lossy compression of the raw data [22], where a raw observation is represented by the cluster it was assigned to. The loss (distortion) that occurs during such a clustering-compression can be combined with a cluster's degree of compression (rate) to quantify how well this cluster represents the observations that are assigned to it. Further, rate-distortion theory allows one to infer how the representtivity of a clustering would change if one were to modify this clustering, and which observations would be better represented by different clusters (cf. [23, 24]). Observations that are hard to represent by a meaningful cluster and that are best represented by themselves can then be considered as outliers.
This description outlines a technique that we refer to as _Cluster Purging_, in analogy to the act of purging in authoritarian political systems where deviating individuals that are not well-represented by such systems are removed from
society1. In short, Cluster Purging is performed by modifying a clustering (or by analyzing a set of given clusterings), and then isolating observations that are not represented well by their cluster, regardless of how one modifies it (or which of the clusterings one considers). As such, Cluster Purging is, to the best of our knowledge, a conceptually novel approach to cluster-based outlier detection, and the main contributions of this work stem from it:
Footnote 1: None of the authors or their affiliations approve of political purges in any form.
* Review of related work, outlining the differences between Cluster Purging and existing methods (Section 2).
* Theoretical formalization of Cluster Purging and description of required concepts from information theory (Section 3).
* Description of a parameter-free algorithm for Cluster Purging and discussion of various aspects that are relevant in practice, i.e. efficiency, interpretation of proposed outliers, how one can introduce parameters for improved performance, and limitations (Section 4).
* Empirical demonstration that Cluster Purging improves upon outliers detected from clustering alone, and that Cluster Purging strongly competes against state-of-the-art alternatives (Section 5).
## 2 Related Work
In general, cluster-based outlier detection techniques can be split into three categories depending on how they define outliers [1]:
1. Outliers are observations that do not fit into any cluster.
2. Outliers are far away from their cluster's centroid.
3. Outliers are assigned to small or sparse clusters.
Conceptually, category 1 is most closely related to Cluster Purging, since in our method outliers are observations that cannot be represented well by any cluster. There are several existing methods that fall into category 1, for instance Density-Based Spatial Clustering of Applications with Noise (DBSCAN) [14], extensions of DBSCAN such as [25, 26], k-means-- [15] and k-means with outlier removal [16]. However, a key difference between these methods and Cluster Purging is that our method is not bound to a specific clustering. Even if one bases Cluster Purging on one of the above clusterings, the results can be very different, since our method does not assume that a single clustering necessarily describes outliers in the raw data.
Surprisingly, one can argue that our method should also fall into category 2, since the theoretical formulation of Cluster Purging permits setups where outliers are observations that are far away from a centroid (see Section 3). Related methods from this category are techniques that combine centroid-based clusterings with a distance threshold, for instance [20, 27]. One can distinguish Cluster Purging from these methods by the simple fact that our method does not require a distance threshold (although Cluster Purging can be adapted to require one, should an application demand this (see Section 4)).
Typical methods of the third category are Local Outlier Factor [28] and its numerous variants, e.g. [29, 30, 31]. The Cluster-Based Local Outlier Factor (CBLOF) [18] is particularly noteworthy, since this method is directly applicable to any clustering, similar to Cluster Purging. The main difference between CBLOF and Cluster Purging is that, while our method can be based on local densities, it does not require a threshold parameter to infer critical differences in local densities and does not consider a single clustering as sufficient for describing outliers.
From a theoretical perspective, the most closely related method to ours is the one-class rate-distortion model (OCRD) [32]. The brief description of Cluster Purging given above can be seen as a single (half-)step of the Blahut-Arimoto algorithm [23, 24, 33], which OCRD adapts for one-class classification. However, while OCRD is optimal in a rate-distortion theoretic sense, we here do not aim for this optimality. Instead, Cluster Purging supports arbitrary clustering techniques, allowing for a greater flexibility. In our experiments, we demonstrate that rate-distortion optimal clusterings are not necessarily optimal for detecting outliers in real data (Section 5).
## 3 Theoretical Formulation
In this section, the theoretical background of Cluster Purging is explained and the concept of representivity is introduced. In short, clustering can be interpreted as a form of data compression that yields cluster assignments and a representation. One can measure how representative such a representation is via its surplus complexity when compared to the most representative clustering at a given inaccuracy. Since directly finding the most representative clustering is often infeasible, we show how representivity can be efficiently estimated from a small set of available clusterings. Finally, we show how one can detect outliers under the premise that a good clustering would represent outliers by themselves, i.e. with an additional cluster.
### _Background_
#### 3.1.1 Data Compression
Let \(\mathbf{x}=\{x_{1},\ldots,x_{n}\}\) be a dataset of \(n\) observations in \(\mathbb{R}^{d}\) consisting of \(u\approx n\) unique values. A common data analysis goal is to obtain a representation of \(\mathbf{x}\) that has fewer unique values without losing too much information [34, 35, 36]. In coding theory, the task of finding such a representation consisting of \(\nu\ll u\) unique symbols is referred to as lossy data compression. Clustering can be seen as a typical example for lossy data compression. In detail, a successful compression via (non-fuzzy) clustering yields two objects
1. A list of \(n\) cluster assignments \(\mathbf{c}=(c_{1},\ldots,c_{n})\), where \(c_{j}\in{1,\ldots,\nu}\) is the index of the cluster that contains observation \(x_{j}\).
2. A low-dimensional representation \(\mathbf{r}=(r_{1},\ldots,r_{\nu})\) describing \(\nu\) different clusters.
A visualization can be seen in Fig. 1. Not all clustering techniques return both of these objects, e.g. DBSCAN only
gives cluster assignments \(\mathbf{c}\) yet no representation \(\mathbf{r}\). Details on how to obtain representations in such cases are given in Section 4.4.
Further, assume that a small subset of outliers \(\mathbf{x_{y}}=\{x_{y_{1}},\ldots,x_{y_{m}}\}\) with \(m\ll n\) is part of the dataset. Since outliers are commonly assumed to deviate significantly from the remaining observations [37], compressing a dataset that contains outliers will either require additional unique symbols for outliers or else lead to a less effective compression [38]. Let
\[d(\mathbf{x},\mathbf{r})=\sum_{j=1}^{n}d(x_{j},r_{c_{j}}) \tag{1}\]
be a separable distortion function, i.e. a measure describing how accurately \(\mathbf{r}\) represents dataset \(\mathbf{x}\). If an outlier is represented by the same symbol as an inlier, then this will increase the overall distortion since inliers and outliers are assumed to be dissimilar. Consequently, one can reduce the overall distortion by compressing outliers to unique symbols. In the context of clusterings, this translates to assigning outliers to singleton clusters, i.e. an additional cluster that only contains \(x_{y_{j}}\). However, adding unique outlier clusters also increases the overall _complexity_ of the compression.
#### 3.1.2 The Empirical Rate-Distortion Function
Rate-distortion theory seeks to describe this trade-off between representation complexity (_rate_) and inaccuracy (_distortion_) in the context of random variables. Formally, the rate-distortion function \(R(D)\) of a random variable \(X\) is defined as (cf. [33])
\[R(D)=\min_{P(\hat{X}|X)}H(\hat{X})-H(\hat{X}|X)\textbf{ subject to }d(X,\hat{X})\leq D \tag{2}\]
where \(P(\cdot)\) and \(H(\cdot)\) are the probability and entropy functions, respectively, \(\hat{X}\) is a stochastic compression of \(X\), and \(D\) is a specific distortion value, e.g. the sum of squared errors in a \(k\)-means clustering. Intuitively, the rate-distortion function describes the smallest complexity one can achieve while compressing \(X\) at a given distortion, regardless of how the compression is performed.
To transfer this stochastic definition into a real-data context, let
\[h(\mathbf{c})=-\sum_{f\in\mathbf{f}^{e}}\frac{f}{n}\log\frac{f}{n}. \tag{3}\]
be the empirical counterpart to the theoretical entropy \(H(\hat{X})\) as per [33], where \(\mathbf{f}^{e}=\{f_{1}^{e},\ldots,f_{\nu}^{e}\}\) are the numbers of observations assigned to each cluster. Then, inspired by (2), we define the empirical rate-distortion function of a dataset \(\mathbf{x}\) as
\[R(D,\mathbf{x},C):=\min_{\{C(\mathbf{x},\mathbf{\theta}),\mathbf{\theta}\in\mathbf{\Theta}\}}h( \mathbf{c})\textbf{ subject to }d(\mathbf{x},\mathbf{r})\leq D \tag{4}\]
with \(C(\mathbf{x},\mathbf{\theta})=(\mathbf{c},\mathbf{r})\), where \(C(\cdot)\) is a deterministic compression function (i.e. a non-fuzzy clustering technique) and \(\mathbf{\theta}\) are its parameters and where \(\mathbf{\Theta}\) is the set of all possible parametrizations. Intuitively, the empirical rate-distortion function can be seen as the strongest degree of compression one can achieve on a dataset with a fixed compression method without exceeding the required distortion. As such, it describes the trade-off between compression complexity and inaccuracy for a fixed dataset and a specific clustering method. The term \(h(\mathbf{c}|\mathbf{x})\) was omitted from (4), since \(h(\mathbf{c}|\mathbf{x})=0\) for all non-fuzzy clustering techniques. A visualization of theoretical and empirical rate-distortion functions is depicted in Fig. 2.
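For concreteness, a minimal sketch of computing both sides of this trade-off for a given clustering is shown below, using squared Euclidean distortion as one possible separable choice for (1); the entropy follows (3).

```python
# Minimal sketch: the rate h(c) of Eq. (3) and the distortion d(x, r) of
# Eq. (1) for one non-fuzzy clustering (c, r) of dataset x.
import numpy as np

def empirical_entropy(c):
    """h(c): entropy of the cluster frequencies."""
    _, f = np.unique(c, return_counts=True)
    p = f / len(c)
    return -np.sum(p * np.log(p))

def total_distortion(x, c, r):
    """d(x, r) with squared Euclidean distortion per observation."""
    return np.sum((x - r[c]) ** 2)

# One (distortion, entropy) pair, e.g. for the k-means result above:
# point = (total_distortion(x, c, r), empirical_entropy(c))
```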
### _Measuring Cluster Representivity_
#### 3.2.1 Theoretical Representivity
From a rate-distortion theoretical perspective, there are two quantities that measure how well a clustering \((\mathbf{c},\mathbf{r})\) represents the raw data:
1. The degree of compression (the rate), computed via entropy \(h(\mathbf{c})\);
2. How accurate the representation is (the distortion), computed via distortion \(d(\mathbf{x},\mathbf{r})\).
While the empirical rate-distortion function \(R(D,\mathbf{x},C)\) describes the best achievable trade-off between these quantities in a given setup, the average result of a clustering algorithm typically offers a worse trade-off. More concretely, for every clustering \(C(\mathbf{x},\mathbf{\theta})=(\mathbf{c},\mathbf{r})\) it holds that
\[R(d(\mathbf{x},\mathbf{r}),\mathbf{x},C)\leq h(\mathbf{c}) \tag{5}\]
since the rate-distortion function describes the global minimum over all parametrizations, i.e the best achievable representation at distortion \(d(\mathbf{x},\mathbf{r})\). Due to this inequality there is always a nonnegative surplus complexity between \((\mathbf{c},\mathbf{r})\)
Fig. 1: Compression via \(k\)-means clustering. _Left_: A dataset consisting of 65 observations. _Middle_: Cluster assignments, indicated by color. _Right_: Symbols representing each cluster.
Fig. 2: Comparison of theoretical and empirical rate-distortion functions.
and (4). Thus, one can measure the theoretical representivity of a clustering via
\[\rho(\mathbf{x},\mathbf{c},\mathbf{r},C):=R(d(\mathbf{x},\mathbf{r}),\mathbf{x},C)\ /\ h(\mathbf{c}). \tag{6}\]
However, computing \(R(d(\mathbf{x},\mathbf{r}),\mathbf{x},C)\) and thus \(\rho(\mathbf{x},\mathbf{c},\mathbf{r},C)\) is infeasible for many clustering techniques, since this would require one to compute \(C(\mathbf{x},\mathbf{\theta})\) for all possible clustering parameters \(\mathbf{\theta}\). Therefore, it is more practical to estimate clustering representivity relative to a small set of representations, obtained from parametrizations \(\{\mathbf{\theta}_{1},\ldots,\mathbf{\theta}_{t}\}\). We refer to this estimate as rate-distortion _hull_.
**Definition 1**.: _Rate-distortion hull. Let \(\underline{\mathbf{c}}=(\mathbf{c}_{1},\ldots,\mathbf{c}_{t})\) and \(\underline{\mathbf{r}}=(\mathbf{r}_{1},\ldots,\mathbf{r}_{t})\) be a set of clustering assignments and representations, respectively, obtained by evaluating clustering technique \(C(\cdot)\) on dataset \(\mathbf{x}\) with parametrizations \(\{\mathbf{\theta}_{1},\ldots,\mathbf{\theta}_{t}\}\). Further, let \(\mathbf{v}=[v_{1},\ldots,v_{s}]\) be the indices of the lower convex hull of the arising distortion-entropy pairs \(\{[d(\mathbf{x},\mathbf{r}_{1}),h(\mathbf{c}_{1})],\ldots,[d(\mathbf{x},\mathbf{r}_{t}),h(\mathbf{c}_{ t})]\}\). Then, the rate-distortion hull of \(\underline{\mathbf{c}}\) and \(\underline{\mathbf{r}}\) is given by_
\[\mathcal{L}(D,\underline{\mathbf{c}},\underline{\mathbf{r}}):=\kappa_{i}D+\delta_{i}\quad\text{for }D\in[d(\mathbf{x},\mathbf{r}_{v_{i-1}}),d(\mathbf{x},\mathbf{r}_{v_{i}})],\ i\in\{2,\ldots,s\}, \tag{7}\]
_where_
\[\kappa_{i}=\frac{h(\mathbf{c}_{v_{i}})-h(\mathbf{c}_{v_{i-1}})}{d(\mathbf{x},\mathbf{r}_{v_{i }})-d(\mathbf{x},\mathbf{r}_{v_{i-1}})} \tag{8}\]
_and_
\[\delta_{i}=h(\mathbf{c}_{v_{i}})-\kappa_{i}\cdot d(\mathbf{x},\mathbf{r}_{v_{i}}) \tag{9}\]
_are the slopes and vertical intercepts of the arising linear pieces, with \(d(\mathbf{x},\mathbf{r}_{v_{1}})<\cdots<d(\mathbf{x},\mathbf{r}_{v_{s}})\)._
Intuitively, a rate-distortion hull is a linear interpolation of the lower convex hull of the entropy and distortion values associated with observed clusterings \((\underline{\mathbf{c}},\underline{\mathbf{r}})\). A visualization of a rate-distortion hull is shown in Fig. 3.
Further, since \(\mathcal{L}(\cdot,\underline{\mathbf{c}},\underline{\mathbf{r}})=\mathcal{L}(\cdot, \underline{\mathbf{c}}_{\mathbf{v}},\underline{\mathbf{r}}_{\mathbf{v}})\), we assume without loss of generality that \(v_{i}=i\) and \(s=t\) to keep the notation simple.
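A minimal sketch of constructing the rate-distortion hull from a set of (distortion, entropy) pairs is given below; it uses Andrew's monotone-chain construction for the lower convex hull and returns the slopes \(\kappa_{i}\) of (8) and intercepts \(\delta_{i}\) of (9).

```python
# Minimal sketch: lower convex hull of the (distortion, entropy) pairs of the
# available clusterings, plus the slopes/intercepts of its linear pieces.
import numpy as np

def rate_distortion_hull(D, H):
    """D, H: one (distortion, entropy) value per clustering. Returns hull
    indices v and arrays kappa, delta; piece i is L(d) = kappa[i]*d + delta[i]."""
    D, H = np.asarray(D, float), np.asarray(H, float)
    hull = []
    for i in np.argsort(D):              # Andrew's monotone-chain lower hull
        while len(hull) >= 2:
            o, a = hull[-2], hull[-1]
            cross = (D[a] - D[o]) * (H[i] - H[o]) - (H[a] - H[o]) * (D[i] - D[o])
            if cross <= 0:               # non-left turn: drop point above hull
                hull.pop()
            else:
                break
        hull.append(i)
    v = np.array(hull)
    kappa = np.diff(H[v]) / np.diff(D[v])   # slopes, Eq. (8)
    delta = H[v][1:] - kappa * D[v][1:]     # intercepts, Eq. (9)
    return v, kappa, delta
```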
#### 3.2.2 Representivity after Modification
Naturally, it is not possible to directly estimate the theoretical representivity of clusterings \((\underline{\mathbf{c}},\underline{\mathbf{r}})\) based on a rate-distortion hull \(\mathcal{L}(\cdot,\underline{\mathbf{c}},\underline{\mathbf{r}})\) constructed from the same clusterings. However, one can use \(\mathcal{L}(\cdot,\underline{\mathbf{c}},\underline{\mathbf{r}})\) for estimating how the representivity of a particular clustering \((\mathbf{c}_{i},\mathbf{r}_{i})\in(\underline{\mathbf{c}},\underline{\mathbf{r}})\) reacts to arbitrary modifications via
\[\hat{\rho}(\mathbf{x},\mathbf{c}_{i}^{\prime},\mathbf{r}_{i}^{\prime},\underline{\mathbf{c }},\underline{\mathbf{r}}):=\mathcal{L}(d(\mathbf{x},\mathbf{r}_{i}^{\prime}),\underline{ \mathbf{c}},\underline{\mathbf{r}})\ /\ h(\mathbf{c}_{i}^{\prime}) \tag{10}\]
where \(\mathbf{c}_{i}^{\prime}\) and \(\mathbf{r}_{i}^{\prime}\) are arbitrarily modified versions of \(\mathbf{c}_{i}\) and \(\mathbf{r}_{i}\) respectively, with
\[\mathbf{c}_{i}^{\prime}\notin\underline{\mathbf{c}}\quad\text{and}\quad\mathbf{r}_{i}^{ \prime}\notin\underline{\mathbf{r}}.\]
Note that the error between measurements \(\hat{\rho}(\mathbf{x},\mathbf{c}_{i}^{\prime},\mathbf{r}_{i}^{\prime},\underline{\mathbf{c }},\underline{\mathbf{r}})\) and \(\rho(\mathbf{x},\mathbf{c}_{i}^{\prime},\mathbf{r}_{i}^{\prime},C)\) will not only depend on the clusterings used for constructing the rate-distortion hull. It will also depend on how many \(c\in\mathbf{c}_{i}\) and \(r\in\mathbf{r}_{i}\) were modified. Generally speaking, the more similar modified clustering \((\mathbf{c}_{i}^{\prime},\mathbf{r}_{i}^{\prime})\) is to \((\mathbf{c}_{i},\mathbf{r}_{i})\), the smaller the error between \(\hat{\rho}(\mathbf{x},\mathbf{c}_{i}^{\prime},\mathbf{r}_{i}^{\prime},\underline{\mathbf{c }},\underline{\mathbf{r}})\) and \(\rho(\mathbf{x},\mathbf{c}_{i}^{\prime},\mathbf{r}_{i}^{\prime},C)\) will be.
### _Detecting Outliers with Cluster Representivity_
#### 3.3.1 Definition of Rate-Distortion Outliers
Since \(\hat{\rho}(\mathbf{x},\cdot,\cdot,\underline{\mathbf{c}},\underline{\mathbf{r}})\) allows one to measure the effect of arbitrary modifications to a clustering, one can also measure how assigning an individual observation to a new, unique cluster would affect representivity. Now recall from above that an outlier is an observation that will likely need a unique symbol for an effective compression [38]. If changing the cluster assignment of observation \(x_{j}\) in \(\mathbf{c}_{i}\) to a new additional cluster would improve \(\mathbf{r}_{i}\)'s representivity, then \(x_{j}\) should be labeled as outlier. This intuition can be formalized as follows.
**Definition 2**.: _Rate-distortion outlier. Let \(\mathbf{x}\) be a dataset and \((\underline{\mathbf{c}},\underline{\mathbf{r}})\) a set of clusterings. Then observation \(x_{j}\) is a rate-distortion outlier if_
\[\hat{\rho}\left(\mathbf{x},\mathbf{c}_{(i,j)}^{\prime},\mathbf{r}_{(i,j)}^{\prime},\underline{\mathbf{c}},\underline{\mathbf{r}}\right)\geq 1\quad\forall i\in\{2,\ldots,t\} \tag{11}\]
_with_
\[\mathbf{c}_{(i,j)}^{\prime}=(c_{i,1},\ldots,c_{i,j-1},\nu+1,c_{i,j+1},\ldots,c_{i,n}) \tag{12}\]
_and_
\[\mathbf{r}_{(i,j)}^{\prime}=(r_{i,1},\ldots,r_{i,\nu},\mathbf{r}^{\star}) \tag{13}\]
_where \(\mathbf{r}^{\star}\) is a representation of \(x_{j}\) such that \(d(x_{j},\mathbf{r}^{\star})=0\)._
In simple terms, Definition 2 states that \(x_{j}\) is a rate-distortion outlier if assigning it to \(\mathbf{r}^{\star}\) would improve the representivity of all clusterings \((\underline{\mathbf{c}},\underline{\mathbf{r}})\).
#### 3.3.2 Computation of \(\hat{\rho}(\mathbf{x},\mathbf{c}_{(i,j)}^{\prime},\mathbf{r}_{(i,j)}^{\prime},\underline{\mathbf{c}},\underline{\mathbf{r}})\)
A key advantage of defining outliers as in Definition 2 is that \(\hat{\rho}(\mathbf{x},\cdot,\cdot,\underline{\mathbf{c}},\underline{\mathbf{r}})\) can be computed for \(\mathbf{c}_{(i,j)}^{\prime}\) and \(\mathbf{r}_{(i,j)}^{\prime}\) from a set of clusterings \((\underline{\mathbf{c}},\underline{\mathbf{r}})\) in \(\mathcal{O}(n)\) time. This works because the change in entropy from \(\mathbf{c}_{i}\) to \(\mathbf{c}_{(i,j)}^{\prime}\) and the change in distortion from \(\mathbf{r}_{i}\) to \(\mathbf{r}_{(i,j)}^{\prime}\) can be computed independently from the remaining clusterings in \((\underline{\mathbf{c}},\underline{\mathbf{r}})\).
Fig. 3: Comparison of theoretical rate-distortion function, empirical rate-distortion function and rate-distortion hull. If ideal clusterings are selected for estimating the empirical rate-distortion function, then the resulting rate-distortion hull is equal to the lower convex hull of the empirical rate-distortion function.
**Proposition 1**.: _Let \(\mathbf{c}\) be a list of cluster assignments and let \(\mathbf{f}^{e}=\{f_{1}^{e},\ldots,f_{\nu}^{e}\}\) be the numbers of observations assigned to each cluster. Then the change in entropy caused by assigning \(x_{j}\) to an additional unique cluster, yielding \(\mathbf{c}^{\prime}\), depends only on \(f_{c_{j}}^{e}\) and is given by_

\[h(\mathbf{c}^{\prime})-h(\mathbf{c})=\frac{1}{n}\left(f_{c_{j}}^{e}\log f_{c_{j}}^{e}-(f_{c_{j}}^{e}-1)\log(f_{c_{j}}^{e}-1)\right). \tag{14}\]
Proof.: The entropy of \(\mathbf{c}\) as given in (3) can be rewritten as
\[h(\mathbf{c})=\log n-\frac{1}{n}\sum_{f\in\mathbf{f}^{e}}f\log f \tag{15}\]
since \(\log\frac{f}{n}=\log f-\log n\). The entropy of \(\mathbf{c}^{\prime}\) is given by
\[h(\mathbf{c}^{\prime})=\log n-\frac{1}{n}\sum_{f\neq f_{c_{j}}^{e}}f\log f-\frac{1}{n}\left((f_{c_{j}}^{e}-1)\log(f_{c_{j}}^{e}-1)\right) \tag{16}\]
since \(1\) observation is removed from cluster \(c_{j}\) and a unique cluster is added with entropy \(1\log 1=0\). Subtracting (15) from (16) yields (14).
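The identity (14) is straightforward to check numerically; the following self-contained sketch compares the direct entropy difference with (14) on toy assignments (chosen so that \(f_{c_{j}}^{e}>1\), since for a singleton source cluster both sides reduce to 0).

```python
# Minimal numerical check of Proposition 1 on toy cluster assignments.
import numpy as np

def h(c):                                   # empirical entropy, Eq. (3)
    _, f = np.unique(c, return_counts=True)
    p = f / len(c)
    return -np.sum(p * np.log(p))

c = np.array([0, 0, 0, 0, 1, 1, 2])         # toy assignments; f_{c_j} = 4
j = 1                                        # observation to purge
f = np.sum(c == c[j])

c_new = c.copy()
c_new[j] = c.max() + 1                       # assign x_j to a new unique cluster

direct = h(c_new) - h(c)
via_eq14 = (f * np.log(f) - (f - 1) * np.log(f - 1)) / len(c)
assert np.isclose(direct, via_eq14)
```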
The change in distortion from \(\mathbf{r}_{i}\) to \(\mathbf{r}_{(i,j)}^{\prime}\) is given by
\[d(\mathbf{x},\mathbf{r}_{(i,j)}^{\prime})-d(\mathbf{x},\mathbf{r}_{i})=-d(x_{j},r_{c_{i,j}}) \tag{17}\]
which follows by assumption from Definition 2. Intuitively, when one assigns \(x_{j}\) to a new unique symbol, then this symbol perfectly represents \(x_{j}\) and hence the total distortion decreases by \(d(x_{j},r_{c_{i,j}})\). Note that (17) only depends on observation \(x_{j}\) and the cluster representative \(x_{j}\) is assigned to, i.e. \(r_{c_{i,j}}\).
To evaluate \(\hat{\rho}(\mathbf{x},\mathbf{c}_{(i,j)}^{\prime},\mathbf{r}_{(i,j)}^{\prime},\underline{ \mathbf{c}},\underline{\mathbf{r}})\), one can combine (14) and (17) in the following way:
**Proposition 2**.: _Let \(\mathbf{x}\) be a dataset and \((\underline{\mathbf{c}},\underline{\mathbf{r}})\) a set of clusterings. If \(\mathbf{c}_{(i,j)}^{\prime}\) and \(\mathbf{r}_{(i,j)}^{\prime}\) are defined as in (12) and (13), respectively, then it holds that_
\[\begin{split}&\hat{\rho}\left(\mathbf{x},\mathbf{c}_{(i,j)}^{\prime},\mathbf{r}_{(i,j)}^{\prime},\underline{\mathbf{c}},\underline{\mathbf{r}}\right)\geq 1\\ &\Leftrightarrow\\ & d(x_{j},r_{c_{i,j}})\geq\frac{h(\mathbf{c}_{(i,j)}^{\prime})-h(\mathbf{c}_{i})}{-\kappa_{i}}\end{split} \tag{18}\]
_where \(\kappa_{i}\) is the slope of the rate-distortion hull between \(d(\mathbf{x},\mathbf{r}_{i-1})\) and \(d(\mathbf{x},\mathbf{r}_{i})\), with \(i\neq 1\)._
Note that \(i\neq 1\) in Proposition 2 is necessary since there is no slope \(\kappa_{0}\) left of \(\mathbf{r}_{1}\) in the rate-distortion hull.
Proof.: Inserting (7) into the left expression of (18) gives
\[\left(\kappa_{\ell}\cdot d(\mathbf{x},\mathbf{r}_{(i,j)}^{\prime})+\delta_{\ell} \right)\,/\,h(\mathbf{c}_{(i,j)}^{\prime})\geq 1 \tag{19}\]
where \(\ell\) is the index of the slope and vertical intercept at \(d(\mathbf{x},\mathbf{r}_{(i,j)}^{\prime})\). Since it holds that \(d(\mathbf{x},\mathbf{r}_{(i,j)}^{\prime})\leq d(\mathbf{x},\mathbf{r}_{i})\) and due to the convexity of \(\mathcal{L}(\cdot)\), we can assume without loss of generality that \(\ell=i\). Then, inserting (9) into (19) and factorizing \(\kappa_{i}\) gives
\[\left(\kappa_{i}\cdot\left(d(\mathbf{x},\mathbf{r}_{(i,j)}^{\prime})-d(\mathbf{x},\mathbf{r}_{ i})\right)+h(\mathbf{c}_{i})\right)/h(\mathbf{c}_{(i,j)}^{\prime})\geq 1. \tag{20}\]
Finally, after inserting (17) into (20), the resulting expression can easily be rearranged into the right side of (18).
The main point of Prop. 2 is that \(\hat{\rho}(\mathbf{x},\mathbf{c}_{(i,j)}^{\prime},\mathbf{r}_{(i,j)}^{\prime},\underline{ \mathbf{c}},\underline{\mathbf{r}})\) can be easily computed from the available clusterings. A visual intuition of how \(\hat{\rho}(\cdot)\) is computed can be seen in Fig. 4. A concrete algorithm is described in Section 4.2. Computational speedups implied by (14) and (18) are discussed in Section 4.3.
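A minimal sketch of this test is given below; it evaluates the right-hand side of (18) for every observation under one clustering, assuming squared Euclidean distortion, and an observation is a rate-distortion outlier only if it is flagged by every clustering on the hull (a logical AND over \(i\)).

```python
# Minimal sketch of the purging test of Proposition 2 for one clustering i.
import numpy as np

def purging_test(x, c_i, r_i, kappa_i):
    """x: (n, d) data; c_i: assignments; r_i: representatives; kappa_i < 0:
    hull slope left of clustering i. Returns a boolean outlier mask."""
    n = len(c_i)
    dist = np.sum((x - r_i[c_i]) ** 2, axis=1)    # d(x_j, r_{c_{i,j}})
    f = np.bincount(c_i)[c_i].astype(float)       # size of each x_j's cluster
    dh = (f * np.log(f)
          - (f - 1) * np.log(np.maximum(f - 1, 1.0))) / n   # Eq. (14)
    return dist >= dh / (-kappa_i)                # right-hand side of (18)

# Rate-distortion outliers: flagged by all hull clusterings, e.g.
# mask = np.logical_and.reduce([purging_test(x, c, r, k) for c, r, k in hull])
```

Note that a singleton cluster gives an entropy change of 0, so its lone observation is always flagged by this sketch, consistent with the cluster-size-1 remark in Fig. 5.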
Fig. 4: Geometric interpretation of the computation of cluster representivity. For every clustering on the rate-distortion hull, one can compute how the entropy would change if \(x_{j}\) were represented by a new cluster. If this new clustering has a distortion that is sufficiently small to enter the area beneath the rate-distortion hull, then \(x_{j}\) is an outlier that needs to be represented by itself rather than a cluster.
## 4 Practical Aspects
After formalizing the theoretical background needed to efficiently perform Cluster Purging, we now address several practical issues and formulate concrete algorithms for an efficient computation.
### _Interpretation_
Recall that any clustering is a representation of the raw data, and that a cluster is a representation of the data assigned to it. In essence, the theoretical foundation of Cluster Purging concerns itself with the representivity of clusterings. If a cluster would represent its data better with one of its observations removed (purged), then that deviating observation is considered an outlier. To make the concept of representivity more tangible, we address four critical questions that may be non-obvious to the reader.
#### 4.1.1 How can rate-distortion outliers be interpreted?
In simple terms, a rate-distortion outlier is an observation that is "far away" from its cluster. How "far" this needs to be is determined by a threshold that we call _purging boundary_. This purging boundary is inferred from cluster sizes and distortions across multiple clusterings, as well as from the raw dataset (see Eq. (18)). Hence, an accurate interpretation of rate-distortion outliers depends on how these quantities are measured. For example, under Manhattan distances and a \(k\)-means clustering, all purging boundaries are hypercubes that are centered at the cluster's centroid and enclose inliers. For DBSCAN and Euclidean distance, every observation within a specific cluster is surrounded by a hypersphere that encloses its nearest neighbor unless it is an outlier. See Fig. 5 for a visualization.
In the context of high-dimensional data, interpretability is often addressed via dimensionality reductions such that every outlier can be described by a small subset of the original dimensions, see [39, 40]. Similarly, rate-distortion outliers can be characterized by their low-entropy representation: They are observations that make the representation unnecessarily complicated.
#### 4.1.2 How is Cluster Purging different from distance-based outlier detection with clustering?
Cluster Purging permits setups, e.g. centroid-based clustering and Euclidean distortion, that are very similar to conventional distance-based outlier detection methods such as [20, 27]. The main difference between Cluster Purging and such methods is that purging boundaries are inferred based on a different clustering, and not based on a parameter. Further, Cluster Purging is not limited to distance-based setups and is compatible with any well-defined dissimilarity measure and clustering technique, e.g. Kullback-Leibler divergence [41] paired with fuzzy C-means clustering [42].
#### 4.1.3 Isn't Cluster Purging just another clustering-based outlier detection technique that fails if the clustering is bad?
Not necessarily. Cluster Purging considers the original raw data via (18) in addition to all available clusterings. Further, the rate-distortion hull (7) allows one to determine which clusterings among the available ones are best in terms of rate-distortion theory. If all available clusterings are "bad", then Cluster Purging may fail to find correct outliers, yet if a single "good" clustering is available, then Cluster Purging will identify this clustering and use it for outlier detection.
Fig. 5: Cluster Purging (CP) based on DBSCAN with \(\varepsilon=0.8\), minPts\(=20\) and a max-max perturbation (cf. Section 5.1 ). The parametrization of DBSCAN is suboptimal, and the clustering representation can be improved by purging (i.e. uniquely encoding) outliers detected by CP. Overlapping purging boundaries were depicted as union of discs for readability. Note that observations within the \(\varepsilon\) region and purging boundary may also be outliers if they are alone in their cluster (cluster size = 1).
#### 4.1.4 Can outliers really be detected via representativity? It seems strange that whether data are outliers depends on the size of their cluster.
We describe a short example where rate-distortion-theoretic representativity is intuitive for outlier detection: A group of 100 people is asked to form small "parties" to represent their political opinions. 95 people consider themselves _moderate_ and form a moderate party, whereas 4 people form an _extremist_ party and 1 person has no opinion. If this 1 person joined the small extremist party (clustering A), then this would have a more noticeable (outlying) effect on this party's political orientation than if the 1 person joined the large moderate party (clustering B). Likewise, purging boundaries grow logarithmically as clusters become larger (see Eq. (14)).
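For concreteness, this intuition can be reproduced numerically. The following minimal Python sketch (our own illustration, assuming the standard base-2 cluster-size entropy) computes the entropy cost of the three possible assignments of the undecided person:

```python
import numpy as np

def cluster_entropy(sizes):
    """Shannon entropy (in bits) of a clustering with the given cluster sizes."""
    p = np.asarray(sizes, dtype=float)
    p = p / p.sum()
    return float(-(p * np.log2(p)).sum())

# 95 moderates, 4 extremists, 1 undecided person (n = 100).
h_join_large = cluster_entropy([96, 4])    # undecided joins the moderates
h_join_small = cluster_entropy([95, 5])    # undecided joins the extremists
h_purged = cluster_entropy([95, 4, 1])     # undecided is purged (own cluster)

# Entropy cost of purging, relative to each alternative:
print(round(h_purged - h_join_large, 3))   # ~0.080 bits (leaving the large cluster)
print(round(h_purged - h_join_small, 3))   # ~0.036 bits (leaving the small cluster)
# The cost, and hence the purging boundary, is larger for the large cluster,
# and it grows only logarithmically with the cluster size.
```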
### _Algorithms for Cluster Purging_
#### 4.2.1 Parameter-free Cluster Purging
From the theoretical formulations in Section 3, one can directly derive an algorithm for Cluster Purging. This algorithm takes a dataset \(\mathbf{x}\) and a set of clusterings \((\underline{\mathbf{c}},\underline{\mathbf{r}})\) as input and returns a set of outliers without requiring any additional parameters. In simple terms, this algorithm can be summarized as follows:
1. Compute the entropy and distortion of all clusterings.
2. Find the lower convex hull of the resulting entropy-distortion pairs to construct a rate-distortion hull.
3. For every cluster in every clustering on this rate-distortion hull, compute how the entropy would change if an observation in this cluster were removed.
4. Based on the resulting changes of entropy and the slope of the rate-distortion hull, compute how much the distortion must change to pass the "purging boundary".
5. Data that, when purged, would be outside of the purging boundary, as well as clusters of size 1, are outliers.
A visual intuition of how this computation is performed is depicted in Figs. 4 and 5, whereas pseudo-code for this algorithm is listed in Algorithm 1. An \(R\) implementation can be found online2. Note that the selected distortion measure \(d(\cdot)\) should be equal to the distortion measure that was used to compute clusterings, e.g. for \(k\)-means clustering \(d(\cdot)\) should be Euclidean distance, for DBSCAN it should be nearest neighbor distance. We confirmed this insight in preliminary experiments, where it turned out that heterogeneous distortion pairs were inferior to homogeneous distortion pairs in all settings we tested.
Footnote 2: [https://tinyurl.com/f59ezjhk](https://tinyurl.com/f59ezjhk)
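For illustration, steps 1 and 2 can be sketched in Python as follows; the helper names, the base-2 entropy, and the summed Euclidean distortion are our assumptions for this sketch, and the linked R implementation remains the reference:

```python
import numpy as np

def entropy_bits(labels):
    """Entropy h(c) of a clustering, computed from its cluster sizes (in bits)."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def total_distortion(X, labels, reps):
    """Summed Euclidean distortion d(x, r); reps[g] represents cluster g."""
    return float(np.linalg.norm(X - reps[labels], axis=1).sum())

def rate_distortion_hull(pairs):
    """Indices of (distortion, entropy) pairs on the lower convex hull,
    i.e. the empirical rate-distortion hull of Eq. (7)."""
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    order = sorted(range(len(pairs)), key=lambda i: pairs[i])
    hull = []
    for i in order:
        # Pop points that would make the lower hull non-convex.
        while len(hull) >= 2 and cross(pairs[hull[-2]], pairs[hull[-1]], pairs[i]) <= 0:
            hull.pop()
        hull.append(i)
    return hull
```

Steps 3-5 then amount to the purging test of Eq. (18), applied per clustering on the hull; a concrete version of that test appears in the sketch of Algorithm 2 below.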
#### 4.2.2 Parametric Cluster Purging
In some settings, it may be desirable to tune Cluster Purging to a specific dataset. While the parameter-free nature of the theoretical formulation of Cluster Purging prevents this, one can "cheat" by replacing the estimate of cluster representativity \(\hat{\rho}(\cdot)\) with its true value \(\rho(\cdot)\). Of course, \(\rho(\cdot)\) is not known, yet in supervised settings it can be learned from a training set, or a user may simply guess its value or use a default parametrization.
In particular, the concrete value of \(\rho(\cdot)\) at a specific clustering \((\mathbf{c},\mathbf{r})\) is not even needed. According to (18), it is sufficient if slope \(\kappa\) of the rate-distortion function at \(d(\mathbf{x},\mathbf{r})\) is passed as parameter, since the remaining quantities needed to perform Cluster Purging can be easily inferred from \(\kappa\). A concrete algorithm is listed in Algorithm 2.
```
Require: dataset x, clustering (c, r), slope κ
 1: outliers ← ∅
 2: for each cluster g in (c, r) do
 3:     compute the change of entropy Δ_g according to (14)
 4: end for
 5: for j = 1, …, n do
 6:     if d(x_j, r_{c_j}) · κ ≤ Δ_{c_j} then
 7:         outliers ← outliers ∪ {x_j}
 8:     end if
 9: end for
10: return outliers
```
**Algorithm 2** Parametric Cluster Purging
A clear advantage of this parametric variant of Cluster Purging is that, if the true slope is passed to the algorithm, it will necessarily be superior to the parameter-free variant. Further, this variant only needs a single clustering, and is very simple overall. However, we believe that the parameter-free algorithm should generally be preferred over its parametric counterpart (cf. [43]).
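A minimal NumPy sketch of Algorithm 2 might look as follows; the Euclidean distortion, the base-2 entropy, contiguous integer cluster labels, and our reading of Eq. (14) (the entropy change \(\Delta_{g}\leq 0\) when one member of cluster \(g\) is purged into a singleton) are all assumptions of this illustration:

```python
import numpy as np

def parametric_cluster_purging(X, labels, reps, kappa):
    """Sketch of Algorithm 2; kappa (< 0) is the rate-distortion slope.
    labels[j] is the cluster of x_j, reps[g] the representative of cluster g."""
    n = len(X)
    ids, counts = np.unique(labels, return_counts=True)
    sizes = dict(zip(ids.tolist(), counts.tolist()))

    def entropy(cnts):
        p = np.array([c for c in cnts if c > 0], dtype=float) / n
        return float(-(p * np.log2(p)).sum())

    h_old = entropy(sizes.values())
    # Delta_g = h(c) - h(c'): entropy change when one member of cluster g is
    # purged into a new singleton cluster (our reading of Eq. (14)); Delta_g <= 0.
    delta = {g: h_old - entropy([s - 1 if gg == g else s
                                 for gg, s in sizes.items()] + [1])
             for g in sizes}

    outliers = []
    for j in range(n):
        g = labels[j]
        d = np.linalg.norm(X[j] - reps[g])
        # Purging test of line 6; clusters of size 1 are outliers by definition.
        if sizes[g] == 1 or d * kappa <= delta[g]:
            outliers.append(j)
    return outliers
```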
### _Efficiency_
In the pseudo-code of Algorithms 1 and 2 there are several verbose instructions whose computational complexity might be non-obvious. In Algorithm 1, lines 3 and 4 require \(\mathcal{O}(n)\) steps, whereas all remaining verbose steps in both algorithms require at most \(\mathcal{O}(\nu td)\) steps. Asymptotically, \(\nu\) is the largest number of clusters, \(t\) the number of clusterings, and \(d\) the dimensionality of the dataset. Since all three of these quantities were assumed to be constant, these steps can hence be performed in \(\mathcal{O}(1)\) time. Consequently, the time complexity of both algorithms can be reduced to \(\mathcal{O}(n)\).
In terms of space complexity, one will naturally require at least \(\mathcal{O}(tn)\) space to store all clusterings. The remaining memory overhead of both algorithms is constant.
### _Obtaining Multiple Clusterings \((\underline{\mathbf{c}},\underline{\mathbf{r}})\)_
In recent years, datasets have become increasingly large and _"in many situations, the knowledge extraction process has to be very efficient and close to real time because storing all observed data is nearly infeasible"_[44]. Consequently, it may occur in practice that computing multiple good clusterings of a dataset may be too costly, although the above formulation of rate-distortion hulls would require this. To address this issue, we here discuss methods for efficiently obtaining similar clusterings, i.e. perturbations, from a single "seed" clustering.
In general, the theoretical formulations of Cluster Purging permit arbitrary perturbations. However, the quality of a clustering representativity estimate depends on how "strongly" the seed clustering was perturbed. Hence, from a rate-distortion-theoretic perspective, it is desirable that clustering \((\mathbf{c},\mathbf{r})\) and its perturbation \((\mathbf{\tilde{c}},\mathbf{\tilde{r}})\) are as similar as possible, yet not identical. To achieve this, it is typically sufficient to modify the cluster assignment and representation of a single observation \(x_{j}\), given that this change results in a different entropy-distortion pair, i.e. \([h(\mathbf{c}),d(\mathbf{x},\mathbf{r})]\neq[h(\mathbf{\tilde{c}}),d(\mathbf{x},\mathbf{\tilde{r}})]\). A concrete change that causes this is typically given by selecting the cluster with the largest size, i.e. \(\operatorname{argmax}\mathbf{f}^{\mathbf{c}}\), and removing the observation that causes the largest distortion in this cluster, as sketched below. At first glance, this may seem counterintuitive, since the aim of a perturbation is to cause a small yet sufficiently large change in the clustering, and hence removing the observation from the smallest cluster with the smallest distortion would seem better. We elaborate on this and empirically compare other perturbation strategies in Section 5.1.
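A sketch of this strategy (a \(\max\)-\(\max\) perturbation in the terminology of Section 5.1; the array conventions, i.e. contiguous integer labels and one representative row per cluster, are our assumptions):

```python
import numpy as np

def max_max_perturbation(X, labels, reps):
    """Perturb a clustering: the most distorted member of the largest cluster
    is purged, i.e. reassigned to a new singleton cluster that it represents."""
    labels = labels.copy()
    ids, counts = np.unique(labels, return_counts=True)
    g = ids[np.argmax(counts)]                       # largest cluster (max ...)
    members = np.flatnonzero(labels == g)
    dists = np.linalg.norm(X[members] - reps[g], axis=1)
    j = members[np.argmax(dists)]                    # most distorted member (... max)
    labels[j] = labels.max() + 1                     # new unique cluster
    reps = np.vstack([reps, X[j]])                   # the purged point represents itself
    return labels, reps
```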
### _Nearest Neighbor Representations_
A further issue may occur when the selected clustering technique, e.g. DBSCAN, yields cluster assignments \(\mathbf{c}\) yet no representations \(\mathbf{r}\). In such cases, one can jointly infer \(\mathbf{r}\) from \(\mathbf{x}\) and \(\mathbf{c}\) based on the following intuition: Since clustering techniques group data according to some similarity measure [45], this similarity measure implicitly contains information on what a representation for such a clustering technique might be. In the case of DBSCAN, which clusters data according to nearest neighbor distances, one can simply represent every \(x_{j}\) by its nearest neighbor within the cluster of \(x_{j}\). While using such representations leads to no compression of the data, this is still meaningful if one wants to detect outliers. We demonstrate this empirically in Section 5.2, whereas a visualization can be seen in Fig. 6.
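A possible implementation of such nearest-neighbor representations, sketched under the assumption of integer cluster labels; note that it returns one representative per observation rather than per cluster:

```python
import numpy as np

def nearest_neighbor_representations(X, labels):
    """For clusterings without explicit representatives (e.g. DBSCAN): represent
    every x_j by its nearest neighbor within the same cluster; points alone in
    their cluster represent themselves."""
    reps = X.copy()
    for g in np.unique(labels):
        members = np.flatnonzero(labels == g)
        if len(members) < 2:
            continue  # singleton: represents itself
        D = np.linalg.norm(X[members, None, :] - X[None, members, :], axis=-1)
        np.fill_diagonal(D, np.inf)          # a point is not its own neighbor
        reps[members] = X[members[np.argmin(D, axis=1)]]
    return reps
```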
### _Rules of Thumb_
Since Cluster Purging allows highly diverse setups, we formulate three rules of thumb for guiding practitioners:
First, different clusterings offer different entropy-distortion trade-offs, e.g. a clustering with \(n\) clusters leads to a lossless representation yet no compression, whereas a representation with a single cluster leads to good compression yet large distortion. Since purging boundaries depend on cluster sizes, they will adapt to different entropy-distortion trade-offs. Generally speaking, Cluster Purging will work well under many different trade-offs as long as one avoids the extremes of the empirical rate-distortion function.
Secondly, it is desirable that the selected clusterings and/or perturbations have similar entropy-distortion trade-offs. The reason for this is that the estimated rate-distortion slope between two clusterings becomes less accurate the further these clusterings are apart in rate-distortion space. Hence, it is generally not a good idea to combine different clustering techniques, e.g. \(k\)-means and DBSCAN. Pairing similar clusterings is usually better, e.g. \(7\)-means with \(8\)-means. Fixing a single clustering \((\mathbf{c},\mathbf{r})\) and computing a slight perturbation \((\mathbf{\tilde{c}},\mathbf{\tilde{r}})\) by changing the cluster assignment of a single observation is likely best.
Thirdly, the selected distortion measure should be related to the selected clustering technique. For instance, it is often better to pair \(k\)-means with Euclidean distortion than with Hamming distortion, and for hierarchical clusterings one should use the same distance function for computing the clustering and for measuring distortion. For probabilistic clustering techniques, distortion should likely be measured via Kullback-Leibler divergence.
### _Limitations_
The concept of rate-distortion outliers describes _individual_ observations that are outlying. Collective outliers [1] and outlying clusters are not covered and will be addressed in future work. Further, in rare cases it may occur that the computed rate-distortion hull has an increasing segment. In such an increasing region, (18) does not hold, and it is best to ignore this region of the rate-distortion hull. Finally, while Algorithms 1 and 2 can be computed in \(\mathcal{O}(n)\) time, the computation of the clusterings they are based on may be more costly.

Fig. 6: Cluster Purging applied to a synthetic dataset [38] clustered with DBSCAN. Detected outliers are depicted in red (\(\times\)). _Left:_ For every cluster, a single Euclidean centroid was used as representative, resulting in large, spherical purging boundaries. _Right:_ For every observation, its nearest neighbor within the same cluster was used as representative, resulting in tight boundaries that fit the data well.
## 5 Experimental Evaluation
To evaluate the practical applicability and correctness of rate-distortion theory for outlier detection, we conduct a case study in which different perturbation strategies are analyzed (Section 5.1). In Section 5.2, we compare our method Cluster Purging (CP) with other state-of-the-art outlier detection methods in an experimental evaluation on benchmark datasets. Further, we also analyze how frequently Cluster Purging improves upon outliers detected by an existing clustering. Throughout all experiments, we use Euclidean distance as distance measure in all clustering techniques, and consequently also as distortion measure. We avoid using non-distance distortion measures such as Kullback-Leibler divergence, since this would make a fair comparison of Cluster Purging with distance-based outlier detectors difficult. Centroids are computed as the arithmetic mean of all observations in a cluster whenever needed. The **source code** for reproducing all results, as well as all data can be accessed online3.
Footnote 3: [https://tinyurl.com/f59e2jk](https://tinyurl.com/f59e2jk)
### _Case Study: Perturbation for Map Denoising_
From the elaborations made in Section 4.4, one can derive four different perturbation strategies4:
Footnote 4: In all four perturbation strategy descriptions, “purge” is short for “reassign to additional unique cluster”.
1. \(\min\)-\(\min\): Select smallest cluster, purge least distorted observation.
2. \(\min\)-\(\max\): Select smallest cluster, purge most distorted observation.
3. \(\max\)-\(\min\): Select largest cluster, purge least distorted observation.
4. \(\max\)-\(\max\): Select largest cluster, purge most distorted observation.
We compare all four strategies in a case study, where the goal is to denoise a dataset via \(k\)-means clustering and outlier detection. The dataset contains coordinates of a map of the continent Europe [46] with \(100\) artificially added noise points. Since \(k\)-means clustering algorithms are sensitive to the selected initial centers, we fix the number of centroids to \(k=225\), and compute \(1000\) different initializations, each for \(10\) different initial random seeds. For every computed clustering, we perform Cluster Purging based on all \(4\) perturbation strategies with noise points considered as outliers. As evaluation measure, we use \(F_{1}=2\cdot\frac{\text{precision}\,\cdot\,\text{recall}}{\text{precision}\,+\,\text{recall}}\). Further, since inlier and outlier classes are heavily imbalanced (\(169673:100\)) we compute average class-wise \(F_{1}\)-scores in addition to average raw \(F_{1}\)-scores. The results of this case study are reported in Table I, whereas a visualization can be seen in Fig. 7.
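For reference, the class-wise evaluation can be sketched as follows (names are ours; consistently with the numbers in Table I, the combined score is taken as the mean of the two class-wise scores):

```python
import numpy as np

def f1(tp, fp, fn):
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

def classwise_f1(y_true, y_pred):
    """Outlier F1, inlier F1, and their mean (convention: 1 = outlier, 0 = inlier)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    scores = []
    for cls in (1, 0):  # outlier class first, then inlier class
        t, p = y_true == cls, y_pred == cls
        scores.append(f1(int(np.sum(t & p)), int(np.sum(~t & p)), int(np.sum(t & ~p))))
    outlier_f1, inlier_f1 = scores
    return outlier_f1, inlier_f1, (outlier_f1 + inlier_f1) / 2
```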
### _Competitive Evaluation on Benchmark Datasets_
#### 5.2.1 Setup
We compare both variants of our method, Cluster Purging (CP) and Parametric Cluster Purging (CPP), against closely related outlier detection methods mentioned in Section 2:
* The one-class rate-distortion model (OCRD) [32].
* Variants of \(k\)-means that detect outliers, i.e. \(k\)-means- (KM-) [15] and \(k\)-means with outlier removal (KMOR) [16].
* Raw clusterings, i.e. \(k\)-means clustering, Hierarchical Agglomerative Clustering (HAC) with complete linkage and DBSCAN [14], with singleton clusters considered as outliers (these variants are referred to as _Vanilla_ detectors).
* Cluster-based local outlier factor (CBLOF) [19] based on all vanilla clusterings and raw local outlier factor (LOF) [28].
* Outlier detection for high-dimensional data via Local Projection Score (LPS) [47].
* Cluster Purging (CP) with a single \(\max\)-\(\max\) perturbation and Parametric Cluster Purging (CPP), both based on all vanilla clustering techniques (\(t=1\) clustering each). Other perturbation methods are addressed in Section 5.2.4.
We omit [25, 26] since they use soft clusterings; [20] and [27] because they have high computational cost and are not reproducible, respectively; and [30, 31, 29] since we found that two variants of the Local Outlier Factor are sufficient. To enable a comparison with LOF, CBLOF and LPS, which return outlier scores instead of outlier indices, we take the top \(m=|\mathbf{y}|\) scores of these methods, where \(m\) is the true number of outliers in dataset \(\mathbf{x}\). As evaluation measure, we use the \(F_{1}\)-score. Further, since all clustering algorithms under consideration (and most outlier detectors) have parameters, it is difficult to generalize outlier detection performances based on a single arbitrarily selected parametrization. Hence, the parameters of all clustering techniques (and outlier detection methods) are grid searched over their respective parameter spaces towards maximizing the \(F_{1}\)-score. For methods having several parameters, where a full grid search would be infeasible, some parameters are set according to literature recommendations. The detailed grid search setups and parametrizations are listed in Table II.

TABLE I: Case Study: Average Class-Wise \(F_{1}\)-scores per perturbation strategy

| Measure | \(\min\)-\(\min\) | \(\min\)-\(\max\) | \(\max\)-\(\min\) | \(\max\)-\(\max\) |
| --- | --- | --- | --- | --- |
| Outlier \(F_{1}\)-score | **0.17** | 0.08 | 0.00 | 0.16 |
| Inlier \(F_{1}\)-score | 0.43 | 0.94 | 0.16 | **0.97** |
| Combined \(F_{1}\)-score | 0.30 | 0.51 | 0.08 | **0.56** |

Fig. 7: Case Study: Comparison of Purging Boundaries (blue) with \(\min\)-\(\min\) perturbation and \(\max\)-\(\max\) perturbation. True noise points are depicted in red (\(x\)), while detected outliers are not depicted for readability (the left plot would be covered in outliers). _Left_: Purging boundaries derived from a \(\min\)-\(\min\) perturbation are so small that they are barely visible. _Right_: Purging boundaries derived from a \(\max\)-\(\max\) perturbation are \(\approx 67\) times larger than \(\min\)-\(\min\) purging boundaries, almost fully covering the map of Europe.
Additionally, to evaluate the claimed computational efficiency of CP and CPP, we track the average runtime of each method per call. We report this quantity instead of overall runtime since the total number of needed calls to each outlier detection method varies for each grid search.
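As an illustration of this protocol, the grid search for the vanilla \(k\)-means detector can be sketched as follows (scikit-learn based; the paper's \(n_{\text{start}}=1000\) restarts are reduced to 10 here for brevity):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import f1_score

def grid_search_vanilla_kmeans(X, y_true, ks=range(2, 11), seed=0):
    """Vanilla k-means detector: singleton clusters are outliers; the number of
    clusters k is grid searched towards maximizing the F1-score."""
    best_score, best_k = -1.0, None
    for k in ks:
        labels = KMeans(n_clusters=k, n_init=10, random_state=seed).fit_predict(X)
        ids, counts = np.unique(labels, return_counts=True)
        singletons = ids[counts == 1]
        y_pred = np.isin(labels, singletons).astype(int)   # 1 = outlier
        score = f1_score(y_true, y_pred, zero_division=0)
        if score > best_score:
            best_score, best_k = score, k
    return best_k, best_score
```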
#### 5.2.2 Datasets
The experimental evaluation of all detectors is performed on 13 publicly available benchmark datasets, taken from [48]. These datasets come from diverse domains such as medicine, space, and telecommunications, and have commonly been used as benchmarks in the literature. More detailed descriptions of the domain background of these datasets can be found in [48]. For this experimental evaluation, the dataset Arrhythmia is particularly noteworthy since it is high-dimensional with \(n\approx d\), as are Heart, Pima and Ionosphere since they have an outlier ratio \(\frac{m}{n}\) close to \(50\%\).
#### 5.2.3 Main Results
The main results of the competitive evaluation are depicted in Table III. Overall, detectors based on \(k\)-means clusterings performed worse than detectors based on other clusterings. The overall highest average \(F_{1}\)-score was achieved by CBLOF based on HAC clustering. For other clustering methods, CPP performed best. The average performance of OCRD, which is bound to a Blahut-Arimoto-like clustering, was competitive with detectors based on \(k\)-means clusterings, yet lower than that of detectors based on HAC and DBSCAN.
Regarding computational efficiency, vanilla clusterings were faster than methods based on these clusterings. The fastest method was vanilla \(k\)-means, while CPP had the overall lowest surplus runtime after its clustering was computed. The slowest method was LOF followed by LPS.
When counting the datasets on which detectors with exchangeable clusterings did not perform worse than the respective vanilla clustering, a clear ranking emerges: our method CPP performed best (100%), followed by CP (85%) and CBLOF (62%).
#### 5.2.4 Detailed Results per Perturbation Method
At the bottom of Table III, average \(F_{1}\)-scores and runtimes of all four considered perturbation strategies are listed per clustering. In terms of average \(F_{1}\)-scores, the \(\max\)-\(\max\) perturbation scored highest most often, whereas differences in runtime between the perturbation strategies are negligible. For this reason, and due to lack of space, only the detailed scores per dataset of CP with \(\max\)-\(\max\) perturbations are listed in Table III.
## 6 Discussion
The results of the case study indicate that the \(\max\)-\(\max\) perturbation is slightly superior to the other considered perturbation strategies. This is in accordance with the results of the competitive evaluation, and hence we argue that \(\max\)-\(\max\) perturbations should generally be preferred.
In the benchmark evaluation, the parameter-free variant of Cluster Purging seems to be competitive with other detectors, yet does not demonstrate superior detection performance. However, this lack of superiority may be tolerable when one considers that a parameter-free algorithm was compared against parametric ones, where CBLOF, the strongest competitor, even received information on how many outliers are present in the dataset. Of course, one may argue that Cluster Purging is not truly parameter-free if only a single clustering is provided, since the selected perturbation strategy can also be seen as a parameter. Yet, when one considers that multiple different perturbation strategies may lead to similar detection results (cf. Table III, \(\min\)-\(\max\) and \(\max\)-\(\max\)), then it can be argued that Cluster Purging is still "less" parameter-dependent than other competing methods. Further, if a single parameter is allowed (the rate-distortion hull slope \(\kappa\)), then one can use the parametric variant of Cluster Purging, which overall seems to compete strongly with the state of the art. The slow runtime of the seemingly efficient method LOF can be explained by the need to compute up to \(n-1\) nearest neighbors during parameter optimization.
It is also noteworthy that Cluster Purging--especially its parametric variant--performed (or was tied for) best on high-dimensional and outlier heavy datasets Arrhythmia, Heart, Pima and Ionosphere. Hence, one can expect Cluster Purging to tolerate high-dimensional data or high outlier ratios even if clustering such data is challenging.
TABLE II: Compared outlier detection methods and their parametrizations

| Method | Grid-searched parameters | Hard-coded parameters |
| --- | --- | --- |
| OCRD | \(\beta=[0.1,\ldots,10]\) (\(n\) steps) | \(q(0)=0.5\), uniform prior |
| \(k\)-means: Vanilla | \(k=[2,\ldots,10]\) | \(n_{\text{start}}=1000\) |
| \(k\)-means: KM- | \(k=[2,\ldots,10]\) | \(n_{\text{outlier}}=m\) |
| \(k\)-means: KMOR | \(k=[2,\ldots,10]\), \(\gamma=[0.1,\ldots,10]\) (\(n\) steps) | \(\delta=1\) |
| \(k\)-means: CBLOF | (vanilla parameters) | |
| \(k\)-means: CP | (vanilla parameters) | |
| \(k\)-means: CPP | (vanilla parameters), \(\kappa=[0.1,\ldots,10]\) (\(n\) steps) | |
| HAC: Vanilla | \(k=[1,\ldots,n]\) | |
| HAC: CBLOF | (vanilla parameters) | |
| HAC: CP | (vanilla parameters) | |
| HAC: CPP | (vanilla parameters), \(\kappa=[0.1,\ldots,10]\) (\(n\) steps) | |
| DBSCAN: Vanilla | minPts \(=[d+1,\ldots,d+10]\), \(\varepsilon=\) unique minPts-NN dists. | |
| DBSCAN: CBLOF | (vanilla parameters) | |
| DBSCAN: CP | (vanilla parameters) | |
| DBSCAN: CPP | (vanilla parameters), \(\kappa=[0.1,\ldots,10]\) (\(n\) steps) | |
| LOF | \(k=[1,\ldots,n-1]\) | |
| LPS | \(k=[2,\ldots,\lceil\frac{d}{2}\rceil]\) | \(n_{\text{outlier}}=m\) |
Consequently, we expect Cluster Purging to perform well in a variety of domains under the premise that a reasonably working clustering technique is known. Further, our proposed algorithms, especially the parametric variant, are efficient in terms of computational complexity, requiring only \(\mathcal{O}(n)\) time. While at least one clustering is still required as input, this efficiency can be a key advantage in scenarios where prior clusterings of the data are available.
## Acknowledgments
We thank the anonymous reviewers for their valuable feedback. This work was partly funded by the iDev40 project. The iDev40 project has received funding from the ECSEL Joint Undertaking (JU) under grant agreement No 783163. The JU receives support from the European Union's Horizon 2020 research and innovation programme. It is co-funded by the consortium members, grants from Austria, Germany, Belgium, Italy, Spain and Romania.
|
2308.04104 | Pre-outburst signal in the light curves of the recurrent novae RS Oph
and T CrB | Pre-outburst signal (a decrease of the optical brightness) just before the
outburst is clearly detected in the observations of the T CrB obtained before
and during the 1946 outburst. A similar decrease is also visible in the light
curve of RS Oph during the 2021 outburst. We suppose that this is due to
formation of a thick, dense envelope around the white dwarf, and we estimate
its size (1000 - 2000 km), mass (5$\times$10$^{-8}$ - 6$\times$10$^{-7}$ M$_\odot$) and
average density (5 - 16 g cm$^{-3}$). | R. K. Zamanov, V. Marchev, J. Marti, G. Y. Latev | 2023-08-08T07:38:52Z | http://arxiv.org/abs/2308.04104v1 | # Pre-outburst signal in the light curves of the recurrent novae RS Oph and T CrB
###### Abstract
Pre-outburst signal (a decrease of the optical brightness) just before the outburst is clearly detected in the observations of the T CrB obtained before and during the 1946 outburst. A similar decrease is also visible in the light curve of RS Oph during the 2021 outburst. We suppose that this is due to formation of a thick, dense envelope around the white dwarf, and we estimate its size (1000 - 2000 km), mass (\(5\times 10^{-8}-6\times 10^{-7}\) M\({}_{\odot}\)) and average density (5 - 16 g cm\({}^{-3}\)).
Stars: novae, cataclysmic variables - stars: individual: RS Oph, T CrB
## 1 Introduction
The Recurrent Novae (RNe) are classical novae that repeat their outbursts. RNe are ordinary nova systems whose recurrence time scale happens to lie between a decade and a century. They are binary stars in which matter accretes from a donor star onto the surface of a white dwarf (WD), where the accumulated material eventually ignites a thermonuclear explosion that produces the nova eruption (e.g. Anupama 2008; Schaefer 2010). The two RNe discussed here (T CrB and RS Oph) belong to the group of RNe with red giant companions and with orbital periods of about one year, \(P_{orb}=227.6\) d for T CrB (Fekel et al. 2000) and \(P_{orb}=453.6\) d for RS Oph (Brandi et al. 2009). T CrB and RS Oph are also classified as symbiotic stars, because the mass donor is a red giant. This type of nova is also referred to as a symbiotic recurrent nova (e.g. Bode 2010, Shore et al. 2011).
In both stars a decrease of the B band brightness is observed a month before the nova outburst. In this work, we propose a hypothesis explaining this drop of optical brightness.
## 2 Pre-outburst signal - RS Oph and T CrB
Adamakis et al. (2011) find a signal via wavelet analysis that can be used to predict a nova outburst. A drop in the B band magnitude (a decrease of the B band brightness by \(\sim 1\) magnitude) just before the outburst is clearly detected in the photographic observations of T CrB obtained before and during the 1946 outburst (Schaefer 2023). A similar decrease (however with smaller amplitude) is visible in the light curve of RS Oph (Fig. 1). We suppose that this is due to the formation of a dense envelope around the white dwarf.
The accretion luminosity of an accreting white dwarf is:
\[L_{acc}=G\;\frac{M_{wd}\;\dot{M}_{a}}{R_{wd}}, \tag{1}\]
where \(G\) is the gravitational constant, \(M_{wd}\) is the mass of the white dwarf, \(R_{wd}\) is its radius, and \(\dot{M}_{a}\) is the mass accretion rate. **Our hypothesis is that** a heavy (dense) envelope forms around the white dwarf. This dense envelope will later produce the thermonuclear runaway (TNR) and the nova outburst. The envelope is impenetrable to the accreting matter, and \(L_{acc}\) decreases:
\[L_{acc}=G\;\frac{M_{wd}\;\dot{M}_{a}}{R_{wd}+\Delta R_{env}}, \tag{2}\]
where \(\Delta R_{env}\) is the size (thickness) of the envelope.
A sketch representing accreting white dwarf is drawn on Fig. 2. For most time of the outburst cycle the envelope is thin and the inner edge of the accretion disc reaches the surface of the white dwarf (Fig. 2a). About 30-50 days before the outburst the envelope becomes thick and dense. The inner edge of the accretion disc is not able to go down to the surface of the white dwarf. The brightness decreases (Eq. 2, Fig. 2b). When the pressure exceeds the critical value, the white dwarf explodes as a nova (Fig. 2c).
**RS Oph:** The mass of the white dwarf in RS Oph is estimated as \(M_{wd}=1.35\pm 0.01\) M\({}_{\odot}\) on the basis of the supersoft X-ray flux (Kato et al. 2008). The mass-radius relation for WDs gives \(R_{wd}=2296\) km, using Eggleton's formula as given in Verbunt & Rappaport (1988). The ignition mass, \(M_{ign}\), can be estimated from
\[P_{crit}=G\;\frac{M_{wd}\;M_{ign}}{4\pi\;R_{wd}^{4}}, \tag{3}\]
where \(P_{crit}\) is the critical pressure for ignition, which is \(\approx 10^{19}\) dyne cm\({}^{-2}\) (Jose et al. 2020). We estimate \(M_{ign}=9.7\times 10^{-7}\) M\({}_{\odot}\) and an average mass accretion rate \(\dot{M}_{a}=4.8\times 10^{-8}\) M\({}_{\odot}\) yr\({}^{-1}\), for a 20 year interval between the nova outbursts. Following Eq. 2, for the B band brightness and \(L_{acc}\) to decrease by a factor of 1.5, we estimate \(\Delta R_{env}\approx 1150\) km. This corresponds to an average density in the envelope of 16.2 g cm\({}^{-3}\).
**T CrB:** The mass of the white dwarf in T CrB is estimated as \(M_{wd}=1.37\) M\({}_{\odot}\) on the basis of the radial velocities of the H\(\alpha\) emission line (Stanishev et al. 2004). The mass-radius relation for WDs gives \(R_{wd}=2018\) km. In the same way as above, we calculate \(M_{ign}=5.7\times 10^{-7}\) M\({}_{\odot}\) and an average mass accretion rate \(\dot{M}_{a}=7\times 10^{-9}\) M\({}_{\odot}\) yr\({}^{-1}\), for an 80 year interval between the nova outbursts. Following Eq. 2, for the \(L_{acc}\) and B band brightness to decrease by a factor of 2, we estimate \(\Delta R_{env}\approx 2000\) km. This corresponds to an average density in the envelope of 4.8 g cm\({}^{-3}\), which is 4 times denser than water (and slightly denser than granite and aluminum).
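Both sets of estimates follow directly from Eqs. (2) and (3). The following Python sketch (cgs units; assuming, as above, that the envelope mass equals the ignition mass) reproduces the quoted numbers:

```python
import numpy as np

G_GRAV = 6.674e-8      # gravitational constant [cm^3 g^-1 s^-2]
M_SUN = 1.989e33       # solar mass [g]
P_CRIT = 1e19          # critical ignition pressure [dyn cm^-2]

def envelope_estimates(m_wd_msun, r_wd_km, recurrence_yr, l_drop_factor):
    """Ignition mass (Eq. 3), mean accretion rate, envelope thickness (Eq. 2),
    and mean envelope density for a brightness drop by the given factor."""
    m_wd = m_wd_msun * M_SUN
    r_wd = r_wd_km * 1e5                                        # km -> cm
    m_ign = 4 * np.pi * r_wd**4 * P_CRIT / (G_GRAV * m_wd)      # Eq. (3)
    mdot = m_ign / recurrence_yr                                # [g / yr]
    dr = (l_drop_factor - 1) * r_wd                             # Eq. (2)
    v_env = 4 * np.pi / 3 * ((r_wd + dr)**3 - r_wd**3)          # shell volume
    rho = m_ign / v_env                                         # [g cm^-3]
    return m_ign / M_SUN, mdot / M_SUN, dr / 1e5, rho

# RS Oph: 1.35 M_sun, 2296 km, 20 yr cycle, brightness drop by a factor 1.5:
print(envelope_estimates(1.35, 2296, 20, 1.5))  # ~(9.7e-7, 4.9e-8, 1148 km, 16 g/cm^3)
# T CrB: 1.37 M_sun, 2018 km, 80 yr cycle, brightness drop by a factor 2:
print(envelope_estimates(1.37, 2018, 80, 2.0))  # ~(5.7e-7, 7.2e-9, 2018 km, 4.8 g/cm^3)
```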
It is worth noting that Ilkiewicz et al. (2023) proposed that the superactive stage of T CrB in the period 2015 - 2023 is due to an activity similar to the disc instability of dwarf novae. The disc instability can be the reason for the density enhancement of the envelope.
Figure 1: AAVSO light curve of the recurrent nova RS Oph around the 2021 outburst. A drop of the brightness before the 2021 outburst is visible in the B as well as in the V band. The decrease of the brightness is by \(\sim\) 0.5 mag and is indicated with red arrows.
Figure 2: A sketch representing accreting white dwarf: **a)** the inner edge of the accretion disc reaches the surface of the white dwarf; **b)** a dense envelope forms and the inner edge of the accretion disc is not able to go down to the surface of the white dwarf; **c)** the mass of the envelope exceeds the critical value and produces a nova outburst.
Conclusions: We suppose that the decrease of the optical brightness before the nova outburst detected in the observations of the recurrent novae T CrB and RS Oph is a result of the formation of a thick, dense envelope around the white dwarf. For this dense envelope we estimate a size of 1000 - 2000 km, a mass of \(5\times 10^{-8}-6\times 10^{-7}\) M\({}_{\odot}\), and a density of 5 - 16 g cm\({}^{-3}\). The next outburst of T CrB is expected soon, and multifrequency observations can be valuable for understanding the structure of the envelope.
**Acknowledgments:** We acknowledge the grant PID2019-105510GB-C32 / AEI / 10.13039/501100011033 from State Agency for Research of the Spanish Ministry of Science and Innovation and FEDER funds. This research has made use of the AAVSO International Database contributed by observers worldwide.
|
2303.16672 | Force-Free and Autonomous Active Brownian Ratchets | Autonomous active Brownian ratchets rectify active Brownian particle motion
solely by means of a spatially modulated but stationary activity, without
external forces. We argue that such ratcheting requires at least a
two-dimensional geometry. The underlying principle is similar to the ratcheting
induced by steric obstacles in microswimmer baths: suitably polarized swimmers
get channeled, while the others get trapped in low-activity regions until they
lose direction. The maximum current is generally reached in the limit of large
propulsion speeds, in which the rectification efficiency vanishes. Maximum
efficiency is attained at intermediate activities and numerically found to be
on the order of a few percent, for ratchets with simple wedge-shaped
low-activity regions. | Constantin Rein, Martin Kolář, Klaus Kroy, Viktor Holubec | 2023-03-29T13:25:59Z | http://arxiv.org/abs/2303.16672v1 | # Force-Free and Autonomous Active Brownian Ratchets
###### Abstract
Autonomous active Brownian ratchets rectify active Brownian particle motion solely by means of a spatially modulated but stationary activity, without external forces. We argue that such ratcheting requires at least a two-dimensional geometry. The underlying principle is similar to the ratcheting induced by steric obstacles in microswimmer baths: suitably polarized swimmers get channeled, while the others get trapped in low-activity regions until they lose direction. The maximum current is generally reached in the limit of large propulsion speeds, in which the rectification efficiency vanishes. Maximum efficiency is attained at intermediate activities and numerically found to be on the order of a few percent, for ratchets with simple wedge-shaped low-activity regions.
## 1 Introduction
Brownian ratchets are subtle microscale transport devices [1, 2]. They combine two effects that individually do not promote directed transport, namely isotropic Brownian motion and asymmetric environments, such that a net directed particle current is produced [3, 4, 5]. Conventional designs with passive particles usually break the spatial symmetry by imposing an asymmetric potential. They also introduce a non-equilibrium element, namely an isotropic and often periodic driving mechanism that, by itself, does not introduce any directionality [4]. Typical examples comprise the rocking (or "flashing") of the potential or the overall temperature [3, 4, 6, 7]. More complex temperature fields have also been investigated [8, 9, 10, 11, 12].
The self-propulsion of an active Brownian particle (ABP) represents yet another non-equilibrium mechanism that one ought to be able to exploit for ratcheting. While it does transiently break the spatial and temporal symmetry of equilibrium Brownian motion [13], it does not, by itself, give rise to a net macroscopic current. One would however expect that one of the simplest realizations of an active Brownian ratchet should consist of an ABP exposed to a spatially asymmetric (periodic) activity landscape. Yet, even though a number of ratchet designs with active particles have been discussed in the literature [14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24], none of them was based solely on a stationary activity landscape. Instead, some relied on ABPs placed in a soft potential in one spatial dimension [19, 21], or in asymmetric hard potentials in two-dimensions [14, 16, 17, 18, 20]. The asymmetric potentials, so typical of conventional ratchets, can be relinquished entirely, though, if one exploits the tendency of ABPs to polarize towards low-activity regions and accumulate there [25, 26, 27, 28, 15]. The standard flashing potential can then be replaced by a dynamic activity landscape. Examples include propagating optical activation pulses that induce aligned or anti-aligned drifts, depending on the persistence length of the ABP motion relative to the pulse width and propagation speed [22, 23, 24]. In general, traveling activity waves induce traveling density and orientation waves of the ABPs, and can thus plainly be employed to sort ABPs, e.g., by size [15].
To sum up, ratcheting has been demonstrated for active particles in spatially asymmetric potential landscapes or in space-and-time dependent activity landscapes. However, no fundamental symmetry prevents ABPs from ratcheting also in _stationary_ spatially asymmetric activity landscapes. In the following, we show that such ratchets are indeed realizable and explore the maximum current and rectification efficiency of a class of simple shapes, numerically.
## 2 Model
We consider the motion of an ABP in a unit-square arena (thus taking its size as the natural length unit) with periodic boundary conditions in two dimen
sions (see Fig. 1). The state at time \(t\) is fully characterized by the position \(\mathbf{r}(t)=[x(t),y(t)]\) and polarization \(\mathbf{n}(t)=[\cos\theta(t),\sin\theta(t)]\) of the ABP. The translational and rotational Brownian motion are represented by mutually independent and unbiased (\(\left\langle\eta_{i}\right\rangle=0\)) Gaussian white noises \(\eta_{i}(t)\) of unit strength, \(\left\langle\eta_{i}(t)\eta_{j}(t^{\prime})\right\rangle=\delta_{ij}\delta(t-t^{\prime})\), with diffusion constants \(D_{\rm t}\) and \(D_{\rm r}\), respectively. The stationary activity landscape enters via a superimposed deterministic speed field \(v(x,y)\). The dynamical equations for the ABP read

\[\dot{x} =v(x,y)\cos(\theta)+\sqrt{2D_{\rm t}}\,\eta_{x}, \tag{1a}\] \[\dot{y} =v(x,y)\sin(\theta)+\sqrt{2D_{\rm t}}\,\eta_{y},\] (1b) \[\dot{\theta} =\sqrt{2D_{\rm r}}\,\eta_{\theta}. \tag{1c}\]
We only consider activity fields symmetric in the \(y\)-direction, \(v(x,1/2+y)=v(x,1/2-y)\), so that \(\left\langle\dot{y}(t)\right\rangle=0\) and the steady-state current is a scalar \(I=\left\langle\dot{x}(t)\right\rangle\). Whenever \(I\) is consistently positive or negative, the device exhibits ratcheting.
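For illustration, Eq. (1) can be integrated with a simple Euler-Maruyama scheme; the sketch below (function names are ours) returns the current \(I\) from the unwrapped \(x\)-displacement. Note that the production parameters quoted in the numerical study below (\(dt=10^{-4}/v\), \(T=10^{7}/v\)) are computationally heavy in pure Python, so shorter runs are advisable for testing:

```python
import numpy as np

def simulate_abp(v_field, D_t, D_r, T, dt, seed=0):
    """Euler-Maruyama integration of Eq. (1) in the unit square with periodic
    boundaries; returns the ratchet current I = <dot x> from the unwrapped x."""
    rng = np.random.default_rng(seed)
    x, y = rng.random(), rng.random()
    theta = 2 * np.pi * rng.random()
    disp_x = 0.0
    s_t, s_r = np.sqrt(2 * D_t * dt), np.sqrt(2 * D_r * dt)
    for _ in range(int(T / dt)):
        v = v_field(x, y)
        dx = v * np.cos(theta) * dt + s_t * rng.standard_normal()
        dy = v * np.sin(theta) * dt + s_t * rng.standard_normal()
        theta += s_r * rng.standard_normal()
        disp_x += dx
        x = (x + dx) % 1.0   # periodic boundaries of the unit square
        y = (y + dy) % 1.0
    return disp_x / T        # ratchet current I
```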
A few general observations about the dynamics are gleaned directly from the above equations. First, the essential stochastic ingredient of the model is the rotational diffusion. If \(D_{\rm r}\) is taken to infinity, the ABP motion loses its persistence. The model then reduces to a passive gas locally equilibrated at a spatially modulated (effective) temperature \(T=D_{\rm t}+v^{2}/2D_{\rm r}\), with Boltzmann's constant and the friction coefficient set to unity. While such a gas can move thermophoretically in the presence of a temperature gradient, it cannot maintain a steady current in a periodic temperature profile. (We comment on the more subtle limit of a Knudsen gas [29, 30], at the end of the paper.) The ratcheting effect must thus entirely result from a clever combination of the more or less persistent motion in the high- and low-activity regions, respectively.
For conceptual purposes, it is sufficient to consider discrete landscapes with \(v\) being represented by a step function, since the dynamics is anyway low-pass filtered by the translational diffusion process. Any small scale details and discontinuities in \(v(x,y)\) will thereby effectively be washed out. Also notice that setting the maximum value of \(v\) to a very large (formally infinite) value amounts to the idealization of strictly ballistic dynamics in the high-activity (or simply "active") regions. Similarly, retaining a non-vanishing \(D_{\rm t}>0\) to avoid an absorbing state, the minimum value of \(v\) can safely be set to zero in the low-activity (or simply "passive") regions, without much loss of generality. This choice, which shall be adopted for the remainder, simply amounts to purely diffusive dynamics inside the passive region. Notice that a periodic two-step function of a single scalar variable is necessarily symmetric. In one dimension, one thus clearly cannot achieve autonomous ratcheting with a corresponding stationary activity field, nor (as shown below) with any other.
In summary, translational diffusion acts as a regularization for step-wise constant activity profiles, so that the archetypal activity landscape discretely jumps between \(v=0\) and some finite or possibly even infinite value \(v\). In the latter case, the active region is traversed in no time, so that the total dwell time \(\tau\) of the particle in the unit cell is equal to the time spent in the passive region. The latter is independent of \(v\) and, at first sight, of \(D_{\rm r}\). However, \(D_{\rm r}\) limits the "take-off" of ABPs emerging from the passive region, and in fact also the whole particle distribution at the active-passive boundary. For example, the ABP cannot take off if it emerges with a swim direction pointing back into the passive region. Also, it can "tunnel" through narrow edges of the passive region. One therefore generally still expects the current \(I\simeq\tau^{-1}\) (in our unit length setup) and the dwell time \(\tau\) to depend on \(D_{\rm t}\) and \(D_{\rm r}\), even if one takes \(v\to\infty\) in the active region. It is however plausible, and indeed corroborated by our Brownian dynamics simulations of the model presented below, that for a given geometric shape of the passive region, one can often find an optimum choice of \(D_{\rm t}\simeq D_{\rm r}\) (Fig. 2). Then \(\tau(D_{\rm t},D_{\rm r})\to\tau(D_{\rm r})\) depends solely on \(D_{\rm r}\), implying \(I\simeq D_{\rm r}\), with a purely geometric prefactor. The latter can only depend on dimensionless features of the shape (such as the parameters \(\delta\) and \(\varepsilon\) in Fig. 1). In other words, under such idealized conditions, the task of an optimum ratchet design is entirely reduced to a geometric optimization problem.
These general considerations based on an infinite step function \(v(x,y)\) may not always be practically useful, from an active-matter perspective. For instance, an experimental realization of our idealized ABP might possibly only allow for a maximum speed \(v\) below the asymptotic regime alluded to above (in which the dwell time in the arena equals the trapping time in the passive region). This will clearly reduce the ratchet current from its maximum value, and the dwell time will depend both on \(D_{\rm t}\simeq D_{\rm r}\) and on the maximum attainable value of \(v\). This "attenuated" transport regime, with \(D_{\rm t}\simeq D_{\rm r}\simeq v\), may be of particular practical interest if the active speed of the ABP is regarded as a costly input. The most desirable _modus operandi_ of the ratchet will then no longer be that of maximum current \(I\simeq D_{\rm r}\), obtained in the limit \(v\to\infty\), because the ratio \(I/v\) vanishes in this limit. Instead, one will then typically be interested in conditions that optimize this ratio, which can be interpreted as the rectification efficiency of the active ratchet, very much in the spirit of ABP engines and bacterial motors [16, 17, 18]. The interested practitioner will then generally have to find the corresponding optimum parameter values \(D_{\rm t}\), \(D_{\rm r}\), and \(v\) for a given ratchet geometry, numerically.
The remainder of the paper is dedicated to a more comprehensive analysis of the above general considerations. In particular, we want to clarify why stationary active Brownian ratchets can only be realized in at least two space dimensions. We also estimate realistic values of the maximum dimensionless current \(I(v\to\infty)/D_{\rm r}\) and rectification efficiency \(I/v\), for a simple wedge geometry.
## 3 One-dimensional activity patterns
Already in one spatial dimension, spatially varying activity profiles accommodate non-intuitive effects. For example, the mean first passage time may depend non-monotonically on the distance from a target, and the target finding probability can increase if the activity increases towards the target [31]. This seemingly contradicts the known fact that active particles spend less time in regions of higher activity. However, while the latter is a steady-state property, the former relates to transient behavior. In fact, when an ABP is oriented along an activity gradient, it accelerates and thus increases its chance to reach a target before it loses its orientation. Similarly, an ABP placed in the middle of a one-dimensional domain with a linear activity gradient reaches the high-activity end faster and more often than the low-activity end [32]. Although these effects look promising with regard to designing autonomous active Brownian ratchets, e.g., with a saw-tooth-shaped stationary activity landscape, there is a catch. In the cited experiments [31, 32], the state is repeatedly reset externally, by placing the particle back in its initial position upon reaching the target or the boundary of the arena. For a genuine ratchet, such _"deus-ex-machina"_ type outside interventions are clearly not a permissible option.
More formally, one can demonstrate the absence of ratcheting in one-dimensional activity landscapes as follows. Activity landscapes can sort and locally accumulate ABPs according to their orientation, but they do not reorient them. Crucially, and quite in contrast to potential landscapes, activity landscapes do not exert any forces or torques on the ABPs, even though such forces are a crucial mechanism underlying the ratcheting of ABPs in one-dimensional potential landscapes [19]. As all orientations are thus equally probable in an unbiased ensemble, the spatially integrated total polarization must vanish. Together with the continuity equation for particle number conservation [33], this entails that the net current vanishes, too. More concretely, one may invoke the continuity of the local polarization profile as a function of position, for piece-wise continuous activity profiles [26, 27, 28]. From this one concludes that, for a vanishing total polarization, there must be at least one position \(x_{0}\) in the polarization profile at which the time-averaged orientation vanishes. The time-averaged current \(I\) at this point is given by the time-integral over \(v[x(t)=x_{0}]\cos\theta(t)\). Up to a constant factor, this is just the vanishing time-averaged orientation. And since, in one spatial dimension, the continuity condition implies that the steady-state current is spatially constant, \(I\) vanishes everywhere if it vanishes locally, at \(x_{0}\). We have corroborated this conclusion by extensive Brownian dynamics simulations and by numerical solution of the Fokker-Planck equation associated with Eq. (1), using the method of Ref. [34].
## 4 Two-dimensional activity patterns
Compared to one-dimensional activity landscapes, the situation is much different in two and higher-dimensional activity landscapes. The main reason is that the inevitable zeroes of the polarization no longer constrain the overall current to vanish, unless they cover a whole vertical line \((x_{0},\{y\})\). The latter is by no means required by the condition of an overall vanishing polarization. Around an isolated point of vanishing current, the resulting systematic flow field (or, equivalently, polarization field) takes the form of a vortex, as seen in Fig. 1. The sorting and accumulation of ABPs according to their orientation along the \(x\)-direction, which is already possible in one-dimensional activity landscapes [26, 27, 28], and exploited in non-stationary active Brownian ratchets [22, 23, 15, 24], is now modulated along the second spatial direction \(y\). A particle moving along the \(y\)-direction therefore experiences an effectively time-modulated activity pattern along the transport direction \(x\), which has a similar rectifying effect as a dynamical one-dimensional activity profile.
The stationary but spatially periodically modulated activity landscape \(v(x,y)\) shown in Fig. 1 provides a proof-of-principle example and serves as an instructive illustration of a working ratchet. It features a piece-wise constant activity field with a wedge-shaped passive region, where \(v(x,y)=0\), in an otherwise moderately active unit square with constant \(v(x,y)=D_{\rm r}\). The landscape is asymmetric along the \(x\)-direction and mirror-symmetric along the \(y\)-direction. The dimensionless numbers \(\delta_{x}\), \(\delta_{y}\), and \(w=\varepsilon(1-2\delta_{x})\), with \(\varepsilon\in[0,1]\), denote the distances of the edges from the periodic boundaries and the width of the wedge along its mirror-symmetry axis, respectively. The extreme geometries corresponding to an infinitely thin passive region (\(\varepsilon=0\)) and a convex, triangular passive region (\(\varepsilon=1\)) both yield sub-optimal ratchets.
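The wedge can be encoded as a simple point-in-polygon test for use with the integrator sketched in Section 2. Since the text fixes the shape only through Fig. 1 and the parameters \(\delta_{x}\), \(\delta_{y}\), \(\varepsilon\), the arrowhead parametrization below (outer tip at \(x=1-\delta_{x}\), inner tip at \(x=1-\delta_{x}-w\), back corners at \((\delta_{x},\delta_{y})\) and \((\delta_{x},1-\delta_{y})\)) is our assumption, chosen to reproduce the stated limits \(\varepsilon=0\) (infinitely thin) and \(\varepsilon=1\) (convex triangle):

```python
def in_triangle(px, py, a, b, c):
    """Same-side sign test for point (px, py) in triangle (a, b, c)."""
    def half_plane(p, q):
        return (q[0] - p[0]) * (py - p[1]) - (q[1] - p[1]) * (px - p[0])
    d1, d2, d3 = half_plane(a, b), half_plane(b, c), half_plane(c, a)
    return not (min(d1, d2, d3) < 0 and max(d1, d2, d3) > 0)

def make_wedge_field(delta_x=0.1, delta_y=0.1, eps=0.75, v_active=1.0):
    """Piece-wise constant activity field: v = 0 inside the arrowhead-shaped
    passive region, v = v_active elsewhere (assumed wedge parametrization)."""
    w = eps * (1 - 2 * delta_x)                 # width along the symmetry axis
    back_lo, back_hi = (delta_x, delta_y), (delta_x, 1 - delta_y)
    tip_out, tip_in = (1 - delta_x, 0.5), (1 - delta_x - w, 0.5)
    def v_field(x, y):
        passive = (in_triangle(x, y, back_lo, back_hi, tip_out)
                   and not in_triangle(x, y, back_lo, back_hi, tip_in))
        return 0.0 if passive else v_active
    return v_field

# Parameters of Fig. 1, in units where D_r = 1 (so v = D_r means v_active = 1);
# a shortened run, since the paper's T = 1e7/v is heavy in pure Python:
# I = simulate_abp(make_wedge_field(), D_t=1e-4, D_r=1.0, T=1e3, dt=1e-4)
```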
While even this simple wedge model is not exactly solvable, its performance can qualitatively be understood, using simple physical arguments. First, the above-mentioned saturation of the ratchet current for infinite speed \(v\to\infty\) in the active region is simply due to the fact that the time spent by the ABP in the active region becomes negligible compared to the time \(\tau\) spent diffusing in the passive region. This limit is thus amenable to event-driven simulations. Below, we go one step further and exploit it to construct a simplified geometric toy model that can provide semi-analytical estimates for the ratcheting current. Unfortunately, as already pointed out above, the conceptually convenient large-speed limit is somewhat academic. The practitioner will be interested in more affordable, finite values of \(v\). Therefore, one should also consider the rectification efficiency \(I/v\), which is the current produced by the ratchet relative to that of a perfectly polarized ABP.
To understand the pertinence of the limits of infinite or vanishing diffusivities \(D_{\Gamma}\), \(D_{\Gamma}\), recall that ratcheting is all about the geometric rectification of stochastic motion. In the limit \(D_{\Gamma}\to 0\) (perfect persistence), the initial polarization is however entirely conserved, while the limit \(D_{\Gamma}\to\infty\) (vanishing persistence) corresponds to thermophoresis within an effective temperature field. So both limits do not correspond to genuine active ratcheting. Similarly, passive regions, with vanishing speed \(v=0\), would
all become absorbing for \(D_{\rm t}\to 0\), while in active regions with a finite \(v<\infty\), \(D_{\rm t}\to\infty\) would wipe out the persistent active motion. Again, both limits are irrelevant for the discussion of active ratcheting. And even though one could set \(D_{\rm t}=0\) without creating an absorbing state if a non-vanishing speed \(v>0\) was maintained in the passive (or less active) region, this choice would be unnatural, as it requires passive regions with vanishing (or even "small") \(v\) to be administratively forbidden. On the other hand, allowing for some finite \(D_{\rm t}\lesssim D_{\rm r}\) is not very consequential for the transport in the (more) active regions, where it merely partially degrades the persistence induced by the activity. This exposes \(D_{\rm t}\) as a parameter of minor physical relevance except for its regularizing role in the passive regions. There are however two more reasons for including a non-vanishing \(D_{\rm t}\), in the discussion. Firstly, it will actually matter for the comparison to practical physical realizations of an ABP ratchet. And secondly, it also serves to regularize some fine-grained details of the ratchet geometry, thereby putting a limit on an otherwise potentially limitless ornamentation of the ratchet design that would in practice have to be cut off by a physical particle radius. In contrast to the indispensable finite rotational diffusivity \(D_{\rm r}\), the translational diffusivity \(D_{\rm t}\) thus plays a rather technical role, as a model regularization parameter.
In conclusion, a pertinent discussion of a stationary ABP ratchet in two dimensions is best conducted for finite diffusivities \(D_{\rm r}\) and \(D_{\rm t}\). While \(D_{\rm t}^{-1}\) may at first suggest itself as the natural time unit of the ratchet (its dwell time), it turns out that its physical impact can, for a conceptual analysis, effectively be taken largely out of the game. The trick is to set it to an optimum value that maximizes the rectification efficiency \(I/v\). Our numerical analysis (see Fig. 2) confirms the expectation that this "best" value is unique and on the order of \(D_{\rm r}\), for the simple geometry shown in Fig. 1. Its physical origin may be understood from the role played by \(D_{\rm t}\) in controlling the ABP's escape time from the passive region. As already pointed out above, if \(D_{\rm t}\gg D_{\rm r}\), the ABP will not have lost its polarization when it leaves the passive region, and will therefore typically swim right back into it, unless that region is narrow enough to be traversed with a substantial ("tunneling") probability. Additionally, the dominance of translational diffusion for \(D_{\rm t}\gg D_{\rm r}\) will unduly degrade the persistence in the active region beyond the inevitable minimum, set by \(D_{\rm r}\). In contrast, if \(D_{\rm t}\ll D_{\rm r}\), the regularizing effect of the translational diffusion onto the absorbing state may become less than optimal, as the initial particle polarization will then have been lost long before the ABP reemerges from the passive region. Altogether, this suggests an optimum value of \(D_{\rm t}\) on the order of \(D_{\rm r}\), as indeed numerically confirmed in Fig. 2.
To summarize, the natural length unit of the stationary active ratchet is set by the domain size, its natural time unit by the inverse rotational diffusion coefficient \(D_{\rm r}^{-1}\). And it is conceptually convenient (if not generally highly advisable) to work with an optimized translational diffusivity \(D_{\rm t}\simeq D_{\rm r}\) of comparable magnitude. The natural scale for the maximum ratchet current \(I\simeq\tau^{-1}\simeq D_{\rm r}\) is then \(D_{\rm r}\) itself, while that of the natural efficiency \(I/v\) is \((\tau v)^{-1}\simeq D_{\rm r}/v\). In practice, both quantities may be expected to be somewhat reduced by a dimensionless geometrical shape factor. The crucial message is then that determining the optimum current \(I/D_{\rm r}\) and efficiency \(I/v\) boils down to an infinite-dimensional geometric optimization problem, intertwined with the "thermodynamic" optimization of \(D_{\rm t}\) for the former, and of \(D_{\rm t}\) and \(v/D_{\rm r}\) for the latter.
## 4 Numerical study
To provide a specific but instructive example, Fig. 1 illustrates the working principle of the active Brownian ratchet and its polarization field \({\bf n}(t)\) for a wedge-shaped passive region in the unit square, with periodic boundary conditions. As already alluded to above, the orientation field is indeed seen to form vortices around the points with vanishing average orientation, which help to defy the no-go theorem for one-dimensional active ratchets. To create the figure, we solved Eq. (1) by a Brownian dynamics simulation with time step \(dt=10^{-4}/v\). The central observable is the ratchet current \(I=x(T)/T\), evaluated as the final traversed \(x\)-distance of the ABP divided by the total simulation time \(T=10^{7}/v\). We checked that the vertical current \(y(T)/T\) vanishes, as expected.
Figure 1: Unit cell of a (unit width) two-dimensional square ratchet with \(\delta_{x}=\delta_{y}=0.1\), \(\varepsilon=0.75\), \(v=D_{\rm r}\), and \(D_{\rm t}=10^{-4}D_{\rm r}\). The background color encodes the probability density for the position of the ABP, which predominantly dwells in the wedge-shaped passive region. Arrows show the mean orientation \(\langle{\bf n}\rangle\) of the ABP obtained from Brownian dynamics simulations, with colors coding for the angular variance \(1-\left(\langle n_{x}\rangle^{2}+\langle n_{y}\rangle^{2}\right)^{1/2}\); small values indicate strong alignment, \(O(1)\)-values a random orientation.
As demonstrated in Refs. [26, 27, 28], the ABP points, on average, towards the passive region along the active-passive boundary. This may seem surprising, since it appears to imply a net particle influx into the passive region. It is an illusion, however, since the swim pressure acting onto an active-passive boundary is not exerted across it [26]. The particle can therefore "escape" from the passive region, against this swim pressure. If it escapes along the tip side (right in Fig. 1), it likely ends up in the indented concave part of the passive region (left in Fig. 1). If, on the other hand, the ABP escapes in the vertical direction towards the horizontal active channels of width \(2\delta_{y}\) (top and bottom in Fig. 1), it can generate a net current from right to left. As a result, the passive region blocks particle paths to the right more than those to the left. Remarkably, active Brownian ratchets relying on potential forces acting like hard walls [14, 16, 18, 20, 35, 36] are based on the very same principle. The important difference here is that our setup does not involve any potential forces, so the ABP can freely pass back and forth between the passive and active regions. With hard walls, the ABP would slide along the wedge until it either gets trapped in the pocket or escapes into the channel, thereby generating a net ratchet current. In our force-free active ratchet, the sliding motion is replaced by the diffusive spreading inside the passive region.
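For concreteness, the following minimal Python sketch implements such a Brownian dynamics simulation. The wedge parametrization in `is_passive` is our plausible reading of Fig. 1 (not necessarily the authors' exact construction), and the default run length is shortened for demonstration purposes:

```python
import numpy as np

rng = np.random.default_rng(0)

def is_passive(x, y, dx=0.1, dy=0.1, eps=0.75):
    """True inside the arrowhead-shaped passive region of the periodic unit cell.

    Straight front (right) edge with its tip at x = 1 - dx on the symmetry
    axis y = 1/2; concave back (left) edge with axial width eps * (1 - 2*dx).
    """
    x, y = x % 1.0, y % 1.0
    s = abs(y - 0.5) / (0.5 - dy)        # 0 on the axis, 1 at the corners
    if s >= 1.0:
        return False                     # horizontal active channels
    x_front = (1 - dx) - (1 - 2 * dx) * s
    x_back = x_front - eps * (1 - 2 * dx) * (1 - s)
    return x_back < x < x_front

def simulate(v=1.0, Dr=0.3, Dt=1e-3, dt=1e-4, steps=10**6, **geom):
    """Euler-Maruyama integration of the ABP; returns the current I = x(T)/T."""
    x, y = 0.0, 0.0                      # the cell corner lies in the channel
    theta = rng.uniform(0.0, 2.0 * np.pi)
    sig_t, sig_r = np.sqrt(2 * Dt * dt), np.sqrt(2 * Dr * dt)
    for _ in range(steps):
        speed = 0.0 if is_passive(x, y, **geom) else v  # activity landscape
        x += speed * np.cos(theta) * dt + sig_t * rng.standard_normal()
        y += speed * np.sin(theta) * dt + sig_t * rng.standard_normal()
        theta += sig_r * rng.standard_normal()
    return x / (steps * dt)              # x is never wrapped: net drift

print("I/v =", simulate(steps=10**5))    # short demo; negative means leftward
```

With the parameters of Fig. 2 (\(D_{\rm r}/v=0.3\), \(D_{\rm t}/v=10^{-3}\)) and sufficiently long runs, one would expect \(|I|/v\) of the order of \(0.014\) for this type of geometry.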
For the setup illustrated in Fig. 1, we also investigated the rectification efficiency \(I/v\) at finite activity, \(v<\infty\), as a function of the diffusivities \(D_{\rm r}\) and \(D_{\rm t}\). In accord with our foregoing qualitative considerations, the numerical results shown in Fig. 2 feature a maximum of \(I/v\sim 0.014\) at \(D_{\rm r}\sim 0.3v\) and \(D_{\rm t}\sim 0.001v\). These optimum values are specific to the chosen geometry and cannot be found without performing the numerical simulation.
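A Fig. 2-style map can be generated by a brute-force grid scan, as in the following sketch (reusing `simulate()` from the snippet above; the grid and the short run length are illustrative only):

```python
import numpy as np

Dr_grid = np.geomspace(0.03, 3.0, 5)     # D_r / v
Dt_grid = np.geomspace(1e-5, 1e-2, 5)    # D_t / v
eff = np.array([[simulate(Dr=Dr, Dt=Dt, steps=10**5) for Dt in Dt_grid]
                for Dr in Dr_grid])       # rectification efficiency I/v (v = 1)
i, j = np.unravel_index(np.abs(eff).argmax(), eff.shape)
print(f"optimum |I|/v = {abs(eff[i, j]):.4f} at "
      f"D_r/v = {Dr_grid[i]:.2g}, D_t/v = {Dt_grid[j]:.2g}")
```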
A more challenging task is to find the most efficient ratchet geometry. Here, we restrict this infinite-dimensional optimization problem to the class of wedge or arrowhead shapes illustrated in Fig. 1, and ask for the optimum depth of the concave indentation, which is parametrized by \(\varepsilon\). For shallow indentations, the ABP spends more time in the passive region than needed to lose its polarization. This reduces the current and the rectification efficiency compared to a design with a stronger indentation. However, for very deep indentations, the passive region becomes too narrow to allow for a substantial reorientation of the traversing ABP, and the corresponding "tunneling" of the polarization eventually nullifies the ratcheting effect (\(I\propto\varepsilon\to 0\)). In other words, there is necessarily a non-monotonic dependence of the rectification efficiency on \(\varepsilon\). As illustrated in Fig. 3, this implies that the intermediate optimum value of \(\varepsilon\), once again, needs to be found numerically. This result also nicely demonstrates the difference between our force-free active ratchet and its siblings operating with potential forces. In particular, for ratchets with hard walls around an exclusion zone of the same shape as our passive region, the ratcheting would always be maintained, regardless of the wall thickness. The figure also demonstrates that the non-monotonic dependence of the rectification strength on the indentation depth is robust against fine tuning of the diffusivities, and that the optimization depends on the interplay between the geometry and the inverse Péclet numbers \(D_{\rm t}/v\) and \(D_{\rm r}/v\).
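The corresponding one-parameter scan over the indentation depth, in the spirit of Fig. 3, is equally straightforward (again reusing `simulate()`; the grid is ours):

```python
import numpy as np

for eps in np.linspace(0.1, 0.9, 9):
    I = simulate(Dr=0.3, Dt=1e-3, steps=10**5, eps=eps)
    print(f"1 - eps = {1 - eps:.1f}   I/v = {I:+.5f}")
```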
Beyond the indentation depth, one can also consider the effect of the parameter \(\delta_{y}\), which sets the lateral width of the horizontal active channels. The current decreases both for \(\delta_{y}\to 0\), when the channel width vanishes, and for \(\delta_{y}\gtrsim 1/2\), when the passive volume becomes marginal relative to the overall domain size. Similarly as for \(\varepsilon\), \(D_{\rm r}/v\), and \(D_{\rm t}/v\), the rectification efficiency \(I/v\) thus also exhibits a maximum as a function of \(\delta_{y}\). Finally, the remaining parameter \(\delta_{x}\) measures the overall width of the passive region in the \(x\)-direction. When \(\delta_{x}\to 1/2\), the width of the passive region vanishes, and with it the current \(I\), similarly as for \(\varepsilon\to 0\). On the other hand, the current monotonically increases with decreasing \(\delta_{x}\to 0\), until the passive region spans the whole domain. Together, the shape parameters \(\delta_{x}\), \(\delta_{y}\), and \(\varepsilon\) control how pointed and asymmetric the passive region may become. Generally speaking, \(I/v\) grows with increasing asymmetry.
Figure 2: Rectification efficiency \(I/v\) as a function of the inverse Péclet numbers \(D_{\rm r}/v\) and \(D_{\rm t}/v\), for the active Brownian ratchet depicted in Fig. 1.
## 5 Geometric toy model
A more mechanistic insight into the effects of the ratchet geometry on the current can be obtained from a schematic, purely geometrical toy model. It is defined by the idealized rules that the particle moves with infinite speed \(v\to\infty\) in the active region, and rotates and spreads sufficiently fast throughout the passive region to emerge from its surface with uniform spatial and orientational distributions after a dwell time \(\tau\). The ensuing simplifications enable us to bypass the computationally expensive Brownian dynamics simulations for qualitative estimates. The path of the ABP in the active region is then uniquely determined by the ratchet geometry alone. Once the ABP leaves the passive region with randomized orientation and position, it immediately hits either another part of the same passive region or one of its periodic images, as sketched in Fig. 4.
One can therefore evaluate the probabilities \(P_{\leftarrow}\), \(P_{\rightarrow}\), and \(P_{\uparrow\downarrow}\) that the ABP leaving the passive region travels to the left, right, or merely vertically, respectively. If the dwell time \(\tau\) is approximated by the average reorientation time \(\tau=D_{\rm r}^{-1}\) of the ABP, as would be the case for an optimum choice of \(D_{\rm t}\), one estimates the current as
\[I=D_{\rm r}(P_{\leftarrow}-P_{\rightarrow}). \tag{2}\]
The resulting probabilities are shown in Fig. 4 as functions of the dimensionless horizontal width \(\varepsilon\) of the symmetry axis of the wedge-shaped passive region. One sees that \(P_{\leftarrow}>P_{\rightarrow}\) for all values of \(\varepsilon\), so that the model always predicts a leftward current \(I\) that is numerically roughly comparable to the optimum currents obtained from the Brownian dynamics simulations. It naturally overestimates the current for extreme values of \(\varepsilon\), corresponding to concave and vanishing passive volumes, respectively. The actual reorientation of the ABP is then much less efficient than assumed by the stylized model, so that the comparison further corroborates the primary role played by the optimized destruction of the particle polarization in the passive region for the rectification efficiency of the ratchet.
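The toy-model probabilities can be estimated by a simple Monte Carlo ray-marching scheme, as sketched below. This is our discretized stand-in for the exact geometric evaluation; it reuses `is_passive()` from the simulation sketch, and the boundary sampling is only approximately uniform in arclength:

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_boundary(dx=0.1, dy=0.1, eps=0.75):
    """Approximately uniform emission point on the passive boundary:
    y is drawn uniformly and the front/back edge is picked with equal
    probability (only roughly uniform in arclength)."""
    y = rng.uniform(dy, 1 - dy)
    s = abs(y - 0.5) / (0.5 - dy)
    x_front = (1 - dx) - (1 - 2 * dx) * s
    x_back = x_front - eps * (1 - 2 * dx) * (1 - s)
    return (x_front if rng.random() < 0.5 else x_back), y

def traverse(step=1e-3, **geom):
    """March a straight ray from a random boundary point in a random
    direction until it re-enters the passive region; return the change
    of the horizontal cell index (< 0: left, > 0: right, 0: neutral)."""
    (x0, y0), phi = sample_boundary(**geom), rng.uniform(0.0, 2.0 * np.pi)
    cx, cy = np.cos(phi), np.sin(phi)
    x, y = x0 + step * cx, y0 + step * cy
    for _ in range(10**5):               # cap against rare near-horizontal rays
        if is_passive(x, y, **geom):
            break
        x, y = x + step * cx, y + step * cy
    return int(np.floor(x)) - int(np.floor(x0))

hits = np.array([traverse() for _ in range(20000)])
P_left, P_right = np.mean(hits < 0), np.mean(hits > 0)
Dr = 0.3
print(f"P_left = {P_left:.3f}  P_right = {P_right:.3f}  "
      f"I = {Dr * (P_left - P_right):+.5f}")
```

By the sign convention of Eq. (2), a positive \(I\) here corresponds to the leftward transport found in the Brownian dynamics simulations.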
An iterative evaluation of the toy model provides further insight into the role played by the active channels separating the passive image regions. Starting from an initially uniform position distribution, one can find the distribution of positions at which an ABP ensemble leaving the active-passive boundary becomes trapped on the boundary again. The resulting position distribution can then be used as the initial condition for the next step, again assuming uniformly distributed orientations, for simplicity. After many iterations of this procedure, the position distribution no longer changes, and one can consider it an approximate stationary position distribution of the ABP. The resulting stationary distribution is similar to the one obtained from the Brownian dynamics simulations, depicted in Fig. 1. It exhibits a maximum in the indentation pocket of the passive region and, for \(\delta_{x}=\delta_{y}=0\), also at the reverse indentations connecting the passive region with its periodic images. These particle accumulations hint at the sensitivity of the ratcheting to the precise geometry, as they would leak out into the horizontal active channels to constitute the ratchet current for \(\delta_{y}>0\).
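The iteration itself can be mimicked along the same lines, as in the following sketch (reusing `sample_boundary()` and `is_passive()` from above; the landing point, which lies a small step inside the passive region, stands in for the next emission point):

```python
import numpy as np

def land(x0, y0, step=1e-3, **geom):
    """Re-emit from (x0, y0) in a random direction: first leave the passive
    region (if starting inside it), then fly straight until re-entry."""
    phi = rng.uniform(0.0, 2.0 * np.pi)
    cx, cy = np.cos(phi), np.sin(phi)
    x, y = x0, y0
    while is_passive(x, y, **geom):      # step out of the passive region
        x, y = x + step * cx, y + step * cy
    for _ in range(10**5):               # fly until re-entry (capped)
        if is_passive(x, y, **geom):
            return x % 1.0, y % 1.0
        x, y = x + step * cx, y + step * cy
    return x0 % 1.0, y0 % 1.0            # give up on rare runaway rays

pts = [sample_boundary() for _ in range(2000)]   # iteration 0: uniform emission
for _ in range(10):                              # iterate the landing map
    pts = [land(x, y) for (x, y) in pts]
ys = np.array([y for _, y in pts])
print("fraction accumulated near the pocket (|y - 1/2| < 0.1):",
      np.mean(np.abs(ys - 0.5) < 0.1))
```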
Let us finally come back to the similarities and differences between our toy model and gases in similar geometries. Dense gases or fluids, in which frequent mutual particle collisions can be relied on to establish local equilibrium, should not exhibit ratcheting in spatially periodic setups like ours. But in so-called rarefied or Knudsen [29] gases, which lack an efficient local equilibration mechanism, particles move ballistically in the space between boundaries, similarly to ABPs in the active region of our geometric toy model, so that the analysis of transport largely boils down to the problem of boundary conditions. This is a more subtle issue [30, 37] that would deserve further study.
Figure 3: Rectification efficiency \(I/v\) as a function of the indentation depth, parametrized by \(1-\varepsilon\). Various combinations of \(D_{\rm r}\) and \(D_{\rm t}\) are shown, with colors coding for the value of \(D_{\rm r}/v\): \(0.1\) (red), \(0.3\) (blue), and \(1\) (green), and line styles for \(D_{\rm t}/v\): \(10^{-4}\) (solid) and \(10^{-3}\) (dotted).
Figure 4: Geometric toy model for the ratchet of Fig. 1; **a:** depending on the emission site and direction from the passive region, the ABP contributes a current to the left (red), to the right (blue), or a "neutral" vertical current; **b:** overall probabilities for the traversals depicted in **a**.
## 6 Conclusion
Spatially inhomogeneous activity profiles can be used to sort active Brownian particles according to their orientations [26, 27, 28]. In one spatial dimension, the requirement that the overall system's polarization vanishes, together with particle conservation, prevents ratcheting in time-constant, spatially periodic activity landscapes. In two and more dimensions, such active ratcheting is possible. We analyzed a proof-of-principle realization of a wedge-shaped two-dimensional autonomous force-free active Brownian ratchet. It demonstrates that active ratcheting does not require a dynamic activity profile, nor help from potential forces or walls.
Our study can be generalized in several ways. For example, it seems worthwhile to find out whether the wedge-shaped ratchet design maximizes the current or can be surpassed by more optimized geometries. Another potentially interesting extension could be to ABPs with translational and/or orientational inertia [38]. And, eventually, it would be nice if the ratcheting currents in rarefied gases, hinted at by our toy model, could be experimentally demonstrated.
## Acknowledgements
We acknowledge financial support by the pre-doc award program at Leipzig University, as well as by the Czech Science Foundation (project No. 20-02955J), and Charles University (project PRIMUS/22/SCI/009).